[ { "msg_contents": ">I think it's also important to suggest to the users how they can turn\n>on hot_standby on their standby. So, perhaps-a-bit-verbose hint would\n>be like this.\n>\"Either start this standby from base backup taken after setting\n>wal_level to \\\"replica\\\" on the primary, or turn off hot_standby\n>here.\"\n>Does this make sense?\n\nCan you help me understand what [setting wal_level to \\\"replica\\\"] helps\nfor this startup from basebackup?\nDo you mean set wal_level on basebackup or on the database we do\nbasebackup?", "msg_date": "Fri, 15 Jan 2021 21:01:23 +0800", "msg_from": "<lchch1990@sina.cn>", "msg_from_op": true, "msg_subject": "Re: Wrong HINT during database recovery when occur a minimal wal." } ]
[ { "msg_contents": "I learned a few things when working on the key management patch that I\nwant to share here in case it helps anyone:\n\n* git diff effectively creates a squashed diff of all commits/changes\n* git format-patch wants to retain each commit (no squash)\n* git format-patch has information about file name changes\n (add/rename/remove) that git diff does not\n\n* git apply and git am cannot process context diffs, only unified diffs\n* git apply only applies changes to the files and therefore cannot\n record file name changes in git, e.g., git add\n* git am applies and merges changes, including file name changes\n\n* to create a squashed format-patch, you have to create a new branch\n and merge --squash your changed branch into that, then use git\n format-patch\n* to create a squashed git format-patch on top of a lower branch\n you have to make a copy of the lower branch, merge --squash on the\n upper branch on top of that, and then use git format-patch comparing\n the lower branch to the upper one\n\nMaybe everyone else knew these things, but I didn't. I can provide more\ndetails if desired.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 15 Jan 2021 13:39:49 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Git, diffs, and patches" }, { "msg_contents": "On Fri, Jan 15, 2021 at 01:39:49PM -0500, Bruce Momjian wrote:\n> I learned a few things when working on the key management patch that I\n> want to share here in case it helps anyone:\n...\n> Maybe everyone else knew these things, but I didn't. I can provide more\n> details if desired.\n\nOne more learning is that git diff compares two source trees and outputs\na diff, while format-patch compares two trees with a common commit and\noutputs a diff for each commit. 
This is why format-patch can output\nfile name changes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 23 Jan 2021 15:16:12 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Git, diffs, and patches" } ]
[ { "msg_contents": "I wrote this patch last year in response to a customer issue and I\nthought I had submitted it here, but evidently I didn't. So here it is.\n\nThe short story is: in commit 5364b357fb11 we increased the size of\npg_commit (née pg_clog) but we didn't increase the size of pg_commit_ts\nto match. When commit_ts is in use, this can lead to significant buffer\nthrashing and thus poor performance.\n\nSince commit_ts entries are larger than pg_commit, my proposed patch uses\ntwice as many buffers.\n\nSuffice it to say once we did this the customer problem went away.\n\n(Andrey Borodin already has a patch to change the behavior for\nmultixact, which is something we should perhaps also do. I now notice\nthat they're also reporting a bug in that thread ... sigh)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"The problem with the future is that it keeps turning into the present\"\n(Hobbes)", "msg_date": "Fri, 15 Jan 2021 19:07:44 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "increase size of pg_commit_ts buffers" }, { "msg_contents": "On Fri, Jan 15, 2021 at 07:07:44PM -0300, Alvaro Herrera wrote:\n> I wrote this patch last year in response to a customer issue and I\n> thought I had submitted it here, but evidently I didn't. So here it is.\n> \n> The short story is: in commit 5364b357fb11 we increased the size of\n> pg_commit (née pg_clog) but we didn't increase the size of pg_commit_ts\n> to match. When commit_ts is in use, this can lead to significant buffer\n> thrashing and thus poor performance.\n> \n> Since commit_ts entries are larger than pg_commit, my proposed patch uses\n> twice as many buffers.\n\nThis is a step in the right direction. With commit_ts entries being forty\ntimes as large as pg_xact, it's not self-evident that just twice as many\nbuffers is appropriate. Did you try other numbers?
I'm fine with proceeding\neven if not, but the comment should then admit that the new number was a guess\nthat solved problems for one site.\n\n> --- a/src/backend/access/transam/commit_ts.c\n> +++ b/src/backend/access/transam/commit_ts.c\n> @@ -530,7 +530,7 @@ pg_xact_commit_timestamp_origin(PG_FUNCTION_ARGS)\n\nThe comment right above here is outdated.\n\n> Size\n> CommitTsShmemBuffers(void)\n> {\n> -\treturn Min(16, Max(4, NBuffers / 1024));\n> +\treturn Min(256, Max(4, NBuffers / 512));\n> }\n> \n> /*\n\n\n", "msg_date": "Mon, 15 Feb 2021 02:40:04 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: increase size of pg_commit_ts buffers" }, { "msg_contents": "\n\n> 16 янв. 2021 г., в 03:07, Alvaro Herrera <alvherre@alvh.no-ip.org> написал(а):\n> \n> Andrey Borodin already has a patch to change the behavior for\n> multixact, which is something we should perhaps also do. I now notice\n> that they're also reporting a bug in that thread ... sigh\n\nI've tried in that thread [0] to do more intelligent optimisation than just increase number of buffers.\nThough, in fact bigger memory had dramatically better effect that lock tricks.\n\nMaybe let's make all SLRUs buffer sizes configurable?\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/b4911e88-9969-aaba-f6be-ed57bd5fec36%40darold.net#ecfdfc8a40af563a0f8b1211266b6fcc\n\n", "msg_date": "Mon, 15 Feb 2021 15:55:49 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: increase size of pg_commit_ts buffers" }, { "msg_contents": "On Mon, Feb 15, 2021 at 11:56 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > 16 янв. 2021 г., в 03:07, Alvaro Herrera <alvherre@alvh.no-ip.org> написал(а):\n> > Andrey Borodin already has a patch to change the behavior for\n> > multixact, which is something we should perhaps also do. I now notice\n> > that they're also reporting a bug in that thread ... 
sigh\n>\n> I've tried in that thread [0] to do more intelligent optimisation than just increase number of buffers.\n> Though, in fact bigger memory had dramatically better effect that lock tricks.\n>\n> Maybe let's make all SLRUs buffer sizes configurable?\n\n+1\n\nI got interested in the SLRU sizing problem (the lock trick and CV\nstuff sounds interesting too, but I didn't have time to review that in\nthis cycle). The main problem I'm aware of with it is the linear\nsearch, so I tried to fix that. See Andrey's thread for details.\n\n\n", "msg_date": "Fri, 26 Mar 2021 17:14:44 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: increase size of pg_commit_ts buffers" }, { "msg_contents": "Hi,\n\nI think this is ready to go. I was wondering why it merely doubles the \nnumber of buffers, as described by previous comments. That seems like a \nvery tiny increase, so considering how much the hardware grew over the \nlast few years it'd probably fail to help some of the large boxes.\n\nBut it turns out that's not what the patch does. The change is this\n\n > -\treturn Min(16, Max(4, NBuffers / 1024));\n > +\treturn Min(256, Max(4, NBuffers / 512));\n\nSo it does two things - (a) it increases the maximum from 16 to 256 (so \n16x), and (b) it doubles the speed how fast we get there. Until now we \nadd 1 buffer per 1024 shared buffers, and the maximum would be reached \nwith 128MB. The patch lowers the steps to 512, and the maximum to 1GB.\n\nSo this actually increases the number of commit_ts buffers 16x, not 2x. \nThat seems reasonable, I guess. The increase may be smaller for systems \nwith less that 1GB shared buffers, but IMO that's a tiny minority of \nproduction systems busy enough for this patch to make a difference.\n\nThe other question is of course what overhead could this change have on \nworkload that does not have issues with commit_ts buffers (i.e. 
it's \nusing commit_ts, but would be fine with just the 16 buffers). But my \nguess is this is negligible, based on how simple the SLRU code is and my \nprevious experiments with SLRU.\n\nSo +1 to just get this committed, as it is.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 12 Nov 2021 18:39:13 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: increase size of pg_commit_ts buffers" }, { "msg_contents": "On Fri, 12 Nov 2021 at 17:39, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n\n> So +1 to just get this committed, as it is.\n\nThis is a real issue affecting Postgres users. Please commit this soon.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 30 Nov 2021 12:04:19 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: increase size of pg_commit_ts buffers" }, { "msg_contents": "On 2021-Nov-30, Simon Riggs wrote:\n\n> On Fri, 12 Nov 2021 at 17:39, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> \n> > So +1 to just get this committed, as it is.\n> \n> This is a real issue affecting Postgres users. Please commit this soon.\n\nUh ouch, I had forgotten that this patch was mine (blush). Thanks for\nthe ping, I pushed it yesterday. I added a comment.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 1 Dec 2021 12:57:41 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: increase size of pg_commit_ts buffers" } ]
[ { "msg_contents": "Hi\n\nI released pspg 4.0.0 https://github.com/okbob/pspg/releases/tag/4.0.0\n\nNow with the possibility to export content to file or clipboard in CSV,\nTSVC, text or INSERT formats.\n\npspg is a pager like \"less\" or \"more\" designed specially for usage in TUI\ndatabase clients like \"psql\". It can work like a CSV viewer too.\n\nhttps://github.com/okbob/pspg\n\nRegards\n\nPavel\n\nHiI released pspg 4.0.0 https://github.com/okbob/pspg/releases/tag/4.0.0Now with the possibility to export content to file or clipboard in CSV, TSVC, text or INSERT formats.pspg is a pager like \"less\" or \"more\" designed specially for usage in TUI database clients like \"psql\". It can work like a CSV viewer too.https://github.com/okbob/pspgRegardsPavel", "msg_date": "Sat, 16 Jan 2021 06:22:04 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "new release pspg" }, { "msg_contents": "This is really cool. .... Now I just need to figure out how to\nintegrate it with using Emacs for my terminal. I still want to use\nemacs enter and edit my queries but it would be cool to be able to hit\na key and launch an xterm and send the query output to pspg....\n\n\n", "msg_date": "Sun, 21 Mar 2021 02:39:31 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: new release pspg" }, { "msg_contents": "Hi\n\nne 21. 3. 2021 v 7:40 odesílatel Greg Stark <stark@mit.edu> napsal:\n\n> This is really cool. .... Now I just need to figure out how to\n> integrate it with using Emacs for my terminal. I still want to use\n> emacs enter and edit my queries but it would be cool to be able to hit\n> a key and launch an xterm and send the query output to pspg....\n>\n\na) you can run psql in emacs - and inside psql you can redirect output to\npipe. 
pspg can run in stream mode and can read data from pipe.\n\nb) pspg can be used as a postgres client - the query can be pasted as a\ncommand line argument.\n\nMaybe stream mode can be enhanced about possibility to read queries from\npipe (not just data like now)\n\nRegards\n\nPavel\n\nHine 21. 3. 2021 v 7:40 odesílatel Greg Stark <stark@mit.edu> napsal:This is really cool. .... Now I just need to figure out how to\nintegrate it with using Emacs for my terminal. I still want to use\nemacs enter and edit my queries but it would be cool to be able to hit\na key and launch an xterm and send the query output to pspg....a)  you can run psql in emacs - and inside psql you can redirect output to pipe.  pspg can run in stream mode and can read data from pipe.b) pspg can be used as a postgres client - the query can be pasted as a command line argument. Maybe stream mode can be enhanced about possibility to read queries from pipe (not just data like now) RegardsPavel", "msg_date": "Sun, 21 Mar 2021 07:48:40 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: new release pspg" }, { "msg_contents": "Hi\n\nne 21. 3. 2021 v 7:40 odesílatel Greg Stark <stark@mit.edu> napsal:\n\n> This is really cool. .... Now I just need to figure out how to\n> integrate it with using Emacs for my terminal. I still want to use\n> emacs enter and edit my queries but it would be cool to be able to hit\n> a key and launch an xterm and send the query output to pspg....\n>\n\npspg 4.5.0 supports --querystream mode\n\nrun in one terminal /pspg --querystream -f ~/pipe --hold-stream=2 -h\nlocalhost\n\nand from any application, where you can write to stream, you can send a\nqueries separated by GS char 0x1d or ^] on separate line\n\n[pavel@localhost src]$ cat /dev/tty > ~/pipe\nselect 1\n^]\nselect * from pg_class limit\n10\n^]\n\nRegards\n\nPavel\n\nHine 21. 3. 2021 v 7:40 odesílatel Greg Stark <stark@mit.edu> napsal:This is really cool. .... 
Now I just need to figure out how to\nintegrate it with using Emacs for my terminal. I still want to use\nemacs enter and edit my queries but it would be cool to be able to hit\na key and launch an xterm and send the query output to pspg....pspg 4.5.0 supports --querystream moderun in one terminal /pspg --querystream -f ~/pipe  --hold-stream=2 -h localhostand from any application, where you can write to stream, you can send a queries separated by GS char 0x1d or ^] on separate line[pavel@localhost src]$ cat /dev/tty > ~/pipeselect 1^]select * from pg_class limit10^]RegardsPavel", "msg_date": "Tue, 23 Mar 2021 18:08:23 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: new release pspg" } ]
[ { "msg_contents": "I've been looking into the planner failure reported at [1].\nThe given test case is comparable to this query in the\nregression database:\n\nregression=# select i8.*, ss.v, t.unique2\n from int8_tbl i8\n left join int4_tbl i4 on i4.f1 = 1\n left join lateral (select i4.f1 + 1 as v) as ss on true\n left join tenk1 t on t.unique2 = ss.v\nwhere q2 = 456;\nERROR: failed to assign all NestLoopParams to plan nodes\n\nThe core of the problem turns out to be that pull_varnos() returns\nan incorrect result for the PlaceHolderVar that represents the ss.v\noutput. Because the only Var within the PHV's expression is i4.f1,\npull_varnos() returns just the relid set (2), implying that the\nvalue can be calculated after having scanned only the i4 relation.\nBut that's wrong: i4.f1 here represents an outer-join output, so it\ncan only be computed after forming the join (1 2) of i8 and i4.\nIn this example, the erroneous calculation leads the planner to\nconstruct a plan with an invalid join order, which triggers a\nsanity-check failure in createplan.c.\n\nThe relid set (2) is the minimum possible join level at which such\na PHV could be evaluated; (1 2) is the maximum level, corresponding\nto the PHV's syntactic position above the i8/i4 outer join. After\nthinking about this I've realized that what pull_varnos() ideally\nought to use is the PHV's ph_eval_at level, which is the join level\nwe actually intend to evaluate it at. There are a couple of\nproblems, one that's not too awful and one that's a pain in the\nrear:\n\n1. pull_varnos() can be used before we've calculated ph_eval_at, as\nwell as during deconstruct_jointree() which can change ph_eval_at.\nThis doesn't seem fatal. We can fall back to the conservative\nassumption of using the syntactic level if the PlaceHolderInfo isn't\nthere yet. 
Once it is (i.e., within deconstruct_jointree()) I think\nwe are okay, because any given PHV's ph_eval_at should have reached\nits final value before we consider any qual involving that PHV.\n\n2. pull_varnos() is not passed the planner \"root\" data structure,\nso it can't get at the PlaceHolderInfo list. We can change its\nAPI of course, but that propagates to dozens of places.\n\nThe 0001 patch attached goes ahead and makes those API changes.\nI think this is perfectly reasonable to do in HEAD, but it most\nlikely is an unacceptable API/ABI break for the back branches.\nThere's one change needed in contrib/postgres_fdw, and other\nextensions likely call one or more of the affected functions too.\n\nAs an alternative back-branch fix, we could consider the 0002\npatch attached, which simply changes pull_varnos() to make the\nmost conservative assumption that ph_eval_at could wind up as\nthe PHV's syntactic level (phrels). The trouble with this is\nthat we'll lose some valid optimizations. There is only one\nvisible plan change in the regression tests, but it's kind of\nunpleasant: we fail to remove a join that we did remove before.\nSo I'm not sure how much of a problem this'd be in the field.\n\nA third way is to preserve the existing pull_varnos() API in\nthe back branches, changing all the internal calls to use a\nnew function that has the additional \"root\" parameter. This\nseems feasible but I've not attempted to code it yet.\n\n(We might be able to get rid of a lot of this mess if I ever\nfinish the changes I have in mind to represent outer-join outputs\nmore explicitly. That seems unlikely to happen for v14 at this\npoint, and it'd certainly never be back-patchable.)\n\nOne loose end is that I'm not sure how far to back-patch. The\ngiven test case only fails back to v12. 
I've not bisected, but\nI suspect that the difference is the v12-era changes to collapse out\ntrivial Result RTEs (4be058fe9 and follow-ons), such as the lateral\nsub-select in this test case. The pull_varnos() calculation is\nsurely just as wrong for a long time before that, but perhaps it's\nonly a latent bug before that? I've not managed to construct a test\ncase that fails in v11, but I don't have a lot of confidence that\nthere isn't one.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/237d2b72-6dd0-7b24-3a6f-94288cd44b9c@bfw-online.de", "msg_date": "Sat, 16 Jan 2021 21:12:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Calculation of relids (pull_varnos result) for PlaceHolderVars" }, { "msg_contents": "I wrote:\n> ...\n> 2. pull_varnos() is not passed the planner \"root\" data structure,\n> so it can't get at the PlaceHolderInfo list. We can change its\n> API of course, but that propagates to dozens of places.\n> ...\n> The 0001 patch attached goes ahead and makes those API changes.\n> I think this is perfectly reasonable to do in HEAD, but it most\n> likely is an unacceptable API/ABI break for the back branches.\n> ...\n> A third way is to preserve the existing pull_varnos() API in\n> the back branches, changing all the internal calls to use a\n> new function that has the additional \"root\" parameter. This\n> seems feasible but I've not attempted to code it yet.\n\nHere's a proposed fix that does it like that. The 0001 patch\nis the same as before, and then 0002 is a delta to be applied\nonly in the back branches. What I did there was install a layer\nof macros in the relevant .c files that cause calls that look like\nthe HEAD versions to be redirected to the \"xxx_new\" functions.\nThe idea is to keep the actual code in sync with HEAD, for\nreadability and to minimize back-patching pain. 
It could be\nargued that this is too cute and the internal references should\njust go to the \"new\" functions in the back branches.\n\nI did not bother to preserve ABI for these two functions:\n\tindexcol_is_bool_constant_for_query()\n\tbuild_implied_join_equality()\nbecause I judged it highly unlikely that any extensions are\ncalling them. If anybody thinks differently, we could hack\nthose in the same way.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 20 Jan 2021 14:15:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Calculation of relids (pull_varnos result) for PlaceHolderVars" } ]
[ { "msg_contents": "After a short time (ahem, several years) of badgering of me my a\ncertain community member, I've finally gotten around to putting up a\ncgit instance on our git server, to allow for browsing of the git\nrepositories. You can find this at:\n\nhttps://git.postgresql.org/cgit/\n\nor specifically for the postgresql git repo:\n\nhttps://git.postgresql.org/cgit/postgresql.git/\n\n\nFor the time being we're running both this and gitweb, and all the\nredirects will keep pointing to gitweb, as well as the default\nredirect if you just go https://git.postgresql.org/.\n\nIf people prefer it, we can discuss changing that in the future, but\nlet's start with some proper full scale testing to see that it doesn't\nactually just break for some people :)\n\n//Magnus\n\n\n", "msg_date": "Sun, 17 Jan 2021 14:48:04 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "cgit view availabel" }, { "msg_contents": "вс, 17 янв. 2021 г. в 14:48, Magnus Hagander <magnus@hagander.net>:\n\n> After a short time (ahem, several years) of badgering of me my a\n> certain community member, I've finally gotten around to putting up a\n> cgit instance on our git server, to allow for browsing of the git\n> repositories. You can find this at:\n>\n> https://git.postgresql.org/cgit/\n>\n> or specifically for the postgresql git repo:\n>\n> https://git.postgresql.org/cgit/postgresql.git/\n>\n\nLooks nice!\n\nFirst thing I've noted:\n\n\nhttps://git.postgresql.org/cgit/postgresql.git/commit/960869da0803427d14335bba24393f414b476e2c\n\nsilently shows another commit.\n\nIs it possible to make the scheme above work?\nOur gitweb (and also github) is using it, so I assume people are quite used\nto it.\n\n\n\n-- \nVictor Yegorov\n\nвс, 17 янв. 2021 г. 
в 14:48, Magnus Hagander <magnus@hagander.net>:After a short time (ahem, several years) of badgering of me my a\ncertain community member, I've finally gotten around to putting up a\ncgit instance on our git server, to allow for browsing of the git\nrepositories. You can find this at:\n\nhttps://git.postgresql.org/cgit/\n\nor specifically for the postgresql git repo:\n\nhttps://git.postgresql.org/cgit/postgresql.git/\nLooks nice!First thing I've noted:   https://git.postgresql.org/cgit/postgresql.git/commit/960869da0803427d14335bba24393f414b476e2csilently shows another commit.Is it possible to make the scheme above work?Our gitweb (and also github) is using it, so I assume people are quite used to it. -- Victor Yegorov", "msg_date": "Sun, 17 Jan 2021 17:00:37 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cgit view availabel" }, { "msg_contents": "On Sun, Jan 17, 2021 at 5:00 PM Victor Yegorov <vyegorov@gmail.com> wrote:\n>\n> вс, 17 янв. 2021 г. в 14:48, Magnus Hagander <magnus@hagander.net>:\n>>\n>> After a short time (ahem, several years) of badgering of me my a\n>> certain community member, I've finally gotten around to putting up a\n>> cgit instance on our git server, to allow for browsing of the git\n>> repositories. 
You can find this at:\n>>\n>> https://git.postgresql.org/cgit/\n>>\n>> or specifically for the postgresql git repo:\n>>\n>> https://git.postgresql.org/cgit/postgresql.git/\n>\n>\n> Looks nice!\n>\n> First thing I've noted:\n>\n> https://git.postgresql.org/cgit/postgresql.git/commit/960869da0803427d14335bba24393f414b476e2c\n>\n> silently shows another commit.\n\nWhere did you get that URL from?\n\nAnd AFAICT, and URL like that in cgit shows the latest commit in the\nrepo, for the path that you entered (which in this case is the hash\nput int he wrong place).\n\n> Is it possible to make the scheme above work?\n> Our gitweb (and also github) is using it, so I assume people are quite used to it.\n\nI guess we could capture a specific \"looks like a hash\" and redirect\nthat, assuming we would never ever have anything in a path or filename\nin any of our repositories that looks like a hash. That seems like\nmaybe it's a bit of a broad assumption?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sun, 17 Jan 2021 17:19:03 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: cgit view availabel" }, { "msg_contents": "вс, 17 янв. 2021 г. 
в 17:19, Magnus Hagander <magnus@hagander.net>:\n\n> > First thing I've noted:\n> >\n> >\n> https://git.postgresql.org/cgit/postgresql.git/commit/960869da0803427d14335bba24393f414b476e2c\n> >\n> > silently shows another commit.\n>\n> Where did you get that URL from?\n>\n\nI've made it up manually, comparing cgit and gitweb links.\n\n\n\n> And AFAICT, and URL like that in cgit shows the latest commit in the\n> repo, for the path that you entered (which in this case is the hash\n> put int he wrong place).\n>\n\nYes, that's what I've noted too.\n\n\nI guess we could capture a specific \"looks like a hash\" and redirect\n> that, assuming we would never ever have anything in a path or filename\n> in any of our repositories that looks like a hash. That seems like\n> maybe it's a bit of a broad assumption?\n>\n\nI thought maybe it's possible to rewrite requests in a form:\n\n/cgit/*/commit/*\n\ninto\n\n/cgit/*/commit/?id=&\n\n?\n\n-- \nVictor Yegorov\n\nвс, 17 янв. 2021 г. в 17:19, Magnus Hagander <magnus@hagander.net>:> First thing I've noted:\n>\n>    https://git.postgresql.org/cgit/postgresql.git/commit/960869da0803427d14335bba24393f414b476e2c\n>\n> silently shows another commit.\n\nWhere did you get that URL from?I've made it up manually, comparing cgit and gitweb links.  \nAnd AFAICT, and URL like that in cgit shows the latest commit in the\nrepo, for the path that you entered (which in this case is the hash\nput int he wrong place).Yes, that's what I've noted too.\nI guess we could capture a specific \"looks like a hash\" and redirect\nthat, assuming we would never ever have anything in a path or filename\nin any of our repositories that looks like a hash. 
That seems like\nmaybe it's a bit of a broad assumption?I thought maybe it's possible to rewrite requests in a form:/cgit/*/commit/*into/cgit/*/commit/?id=&?-- Victor Yegorov", "msg_date": "Sun, 17 Jan 2021 17:46:05 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cgit view availabel" }, { "msg_contents": "On Sun, Jan 17, 2021 at 5:46 PM Victor Yegorov <vyegorov@gmail.com> wrote:\n>\n> вс, 17 янв. 2021 г. в 17:19, Magnus Hagander <magnus@hagander.net>:\n>>\n>> > First thing I've noted:\n>> >\n>> > https://git.postgresql.org/cgit/postgresql.git/commit/960869da0803427d14335bba24393f414b476e2c\n>> >\n>> > silently shows another commit.\n>>\n>> Where did you get that URL from?\n>\n>\n> I've made it up manually, comparing cgit and gitweb links.\n>\n>\n>>\n>> And AFAICT, and URL like that in cgit shows the latest commit in the\n>> repo, for the path that you entered (which in this case is the hash\n>> put int he wrong place).\n>\n>\n> Yes, that's what I've noted too.\n>\n>\n>> I guess we could capture a specific \"looks like a hash\" and redirect\n>> that, assuming we would never ever have anything in a path or filename\n>> in any of our repositories that looks like a hash. 
That seems like\n>> maybe it's a bit of a broad assumption?\n>\n>\n> I thought maybe it's possible to rewrite requests in a form:\n>\n> /cgit/*/commit/*\n>\n> into\n>\n> /cgit/*/commit/?id=&\n\nThat would break any repository that has a directory called \"commit\"\nin it, wouldn't it?\n\nThat said we might be able to pick it up as a top level entry only,\nbecause those subdirs would be expected to be under /tree/*/commit/*.\n\nBut we could also not do /cgit/<one level>/commit/* -- for example\nhttps://git.postgresql.org/cgit/postgresql.git/commit/src/backend/tcop/postgres.c?id=960869da0803427d14335bba24393f414b476e2c\nis a perfectly valid url to show the part of the patch that affects\njust this one part of the path.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sun, 17 Jan 2021 18:58:02 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: cgit view availabel" }, { "msg_contents": "On 01/17/21 08:48, Magnus Hagander wrote:\n> I've finally gotten around to putting up a\n> cgit instance on our git server, to allow for browsing of the git\n> repositories. You can find this at:\n> \n> https://git.postgresql.org/cgit/\n\nInteresting!\n\nHaving never actively compared gitweb and cgit, what are the nicest\nfunctional benefits I should be looking for?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sun, 17 Jan 2021 13:11:22 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: cgit view availabel" } ]
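The rewrite Victor and Magnus are discussing has to distinguish a trailing commit hash from a genuine sub-path under `/commit/`. A hypothetical sketch of such a rule in Python (the example URLs mirror the ones quoted above; the hash heuristic is exactly the "looks like a hash" assumption Magnus flags as potentially too broad):

```python
import re

HASHLIKE = re.compile(r"[0-9a-f]{7,40}")

def rewrite(path):
    """Rewrite a gitweb-style /cgit/<repo>/commit/<hash> URL into cgit's
    ?id= query form; leave real sub-paths like /commit/src/... untouched."""
    m = re.fullmatch(r"(/cgit/[^/]+/commit)/([^/?]+)", path)
    if m and HASHLIKE.fullmatch(m.group(2)):
        return f"{m.group(1)}/?id={m.group(2)}"
    return path

print(rewrite("/cgit/postgresql.git/commit/"
              "960869da0803427d14335bba24393f414b476e2c"))
# a path component after /commit/ is a valid cgit URL and is left alone
print(rewrite("/cgit/postgresql.git/commit/src/backend/tcop/postgres.c"))
```

Because the pattern only fires when the single trailing component is entirely lowercase hex, paths with further segments fall through unchanged; a file whose whole name happens to look like a hash would still be misrouted, which is the caveat raised in the thread.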
[ { "msg_contents": "The below references are already properly documented in\n\n https://www.postgresql.org/docs/current/catalog-pg-event-trigger.html\n\nbut missing in src/tools/findoidjoins/README.\n\nJoin pg_catalog.pg_event_trigger.evtowner => pg_catalog.pg_authid.oid\nJoin pg_catalog.pg_event_trigger.evtfoid => pg_catalog.pg_proc.oid\n\nI'm not sure what the process of updating the README is,\nthe git log seems to indicate this is usually part of the release cycle,\nso perhaps it's OK this file is out-of-sync between releases?\n\nBut if so, that won't explain these two, since they have been around for ages.\n\n/Joel\nThe below references are already properly documented in   https://www.postgresql.org/docs/current/catalog-pg-event-trigger.htmlbut missing in src/tools/findoidjoins/README.Join pg_catalog.pg_event_trigger.evtowner => pg_catalog.pg_authid.oidJoin pg_catalog.pg_event_trigger.evtfoid => pg_catalog.pg_proc.oidI'm not sure what the process of updating the README is,the git log seems to indicate this is usually part of the release cycle,so perhaps it's OK this file is out-of-sync between releases?But if so, that won't explain these two, since they have been around for ages./Joel", "msg_date": "Sun, 17 Jan 2021 17:38:35 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "evtfoid and evtowner missing in findoidjoins/README" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> The below references are already properly documented in\n> https://www.postgresql.org/docs/current/catalog-pg-event-trigger.html\n> but missing in src/tools/findoidjoins/README.\n> Join pg_catalog.pg_event_trigger.evtowner => pg_catalog.pg_authid.oid\n> Join pg_catalog.pg_event_trigger.evtfoid => pg_catalog.pg_proc.oid\n\nYup, no surprise given the way findoidjoins works: it could only\ndiscover those relationships if pg_event_trigger had some entries in\nthe end state of the regression database. 
Of course it doesn't, and\nI'd be against leaving a live event trigger in place in that DB.\n(I suspect there are other similar gaps in the coverage.)\n\nI kind of wonder whether findoidjoins has outlived its purpose and\nwe should just remove it (along with the oidjoins test script).\nIMO it was intended to find mistakes in the initial catalog data,\nbut given the current policy that the .dat files shall not contain\nnumeric OID references, that type of mistake is impossible anymore.\nCertainly, it's been so long since that test script has caught\nanything that it doesn't seem worth the annual-or-so maintenance\neffort to update it.\n\nA different line of thought would be to try to teach findoidjoins\nto scrape info about catalog references out of catalogs.sgml, and\nuse that instead of or in addition to its current methods. That\nseems like a fair amount of work though, and again I can't get\nexcited that it'd be worth the trouble.\n\nAlso, I recall mutterings on -hackers about adding foreign-key\nentries to pg_constraint to document the catalog reference\nrelationships. (In my possibly-faulty recollection, the idea\nwas that these'd only be documentation and would lack enforcement\ntriggers; but perhaps we'd allow the planner to trust them for\npurposes of optimizing multi-catalog queries.) 
If we had those,\nwe could make findoidjoins use them instead of trawling the data,\nor maybe throw away findoidjoins per se and let the oidjoins.sql\nscript read the FK entries to find out what to check dynamically.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Jan 2021 12:16:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: evtfoid and evtowner missing in findoidjoins/README" }, { "msg_contents": "On Sun, Jan 17, 2021, at 18:16, Tom Lane wrote:\n> I kind of wonder whether findoidjoins has outlived its purpose and\n> we should just remove it (along with the oidjoins test script).\n\nA bit of background:\nI'm working on an extension where I need SQL access to this reference information.\nMy extension successfully helps me automatically find problems in extension update-scripts,\nwhere an update from one version to another would give a different result than directly installing the to-version from scratch.\n\nCurrently, I'm parsing findoidjoins/README and importing the \"from column\" and \"to column\"\nto a lookup-table, which is used by my extension.\n\nIf findoidjoins is removed, I would be happy as long as this reference information\ncontinues to be provided in some other simple machine-readable way,\nlike a CSV-file somewhere in the repo, or even better: making this information\navailable from SQL via a new lookup-table in pg_catalog.\n\nI can see how parsing catalogs.sgml would be doable,\nbut proper SGML parsing is quite complex since it's a recursive language,\nand can't be reliably parsed with e.g. 
simple regexes.\n\nSo I think adding this as a lookup table in pg_catalog is the best solution,\nsince extension writers could then use this information in various ways.\n\nThe information is theoretically already available via catalogs.sgml,\nbut a) it's not easy to parse, and b) it's not available from SQL.\n\nAre there any other hackers who ever wished they would have had SQL\naccess to these catalog references?\n\nIf desired by enough others, perhaps something along these lines could work?\n\nCREATE TABLE pg_catalog.pg_references (\ncolfrom text,\ncolto text,\nUNIQUE (colfrom)\n);\n\nWhere \"colfrom\" would be e.g. \"pg_catalog.pg_class.relfilenode\"\nand \"colto\" would be \"pg_catalog.pg_class.oid\" for that example.\n\nNot sure about the column names \"colfrom\"/\"colto\" though,\nsince the abbreviation for columns seems to be \"att\" in the pg_catalog context.\n\n/Joel\n", "msg_date": "Sun, 17 Jan 2021 21:15:45 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: evtfoid and evtowner missing in findoidjoins/README" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> A bit of background:\n> I'm working on an extension where I need SQL access to this reference\n> information. My extension is successfully automatically helping me to\n> find problems in extension update-scripts, where an update from a\n> version to a version would give a different result than directly\n> installing the to-version from scratch.\n\n> Currently, I'm parsing findoidjoins/README and importing the \"from\n> column\" and \"to column\" to a lookup-table, which is used by my\n> extension.\n\nHmm. That README was certainly never intended to be used that way ;-)\n\n> So I think adding this as a lookup table in pg_catalog is the best solution,\n> since extension writers could then use this information in various ways.\n\nI'm definitely -1 on adding a catalog for that. 
But it seems like the\nidea of not-really-enforced FK entries in pg_constraint would serve your\npurposes just as well (and it'd be better from the standpoint of getting\nthe planner to be aware of these relationships).\n\n> The information is theoretically already available via catalogs.sgml,\n> but a) it's not easy to parse, and b) it's not available from SQL.\n\nWell, SGML is actually plenty easy to parse as long as you've got xml\ntooling at hand. We'd never want to introduce such a dependency in the\nnormal build process, but making something like findoidjoins depend on\nsuch tools seems within reason. I recall having whipped up some one-off\nPerl scripts of that sort when I was doing that massive rewrite of the\nfunc.sgml tables last year. I didn't save them though, and in any case\nI'm the world's worst Perl programmer, so it'd be better for someone\nwith more Perl-fu to take point if we decide to go that route.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Jan 2021 15:32:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: evtfoid and evtowner missing in findoidjoins/README" }, { "msg_contents": "On Sun, Jan 17, 2021, at 21:32, Tom Lane wrote:\n>Well, SGML is actually plenty easy to parse as long as you've got xml\n>tooling at hand. We'd never want to introduce such a dependency in the\n>normal build process, but making something like findoidjoins depend on\n>such tools seems within reason. I recall having whipped up some one-off\n>Perl scripts of that sort when I was doing that massive rewrite of the\n>func.sgml tables last year. 
I didn't save them though, and in any case\n>I'm the world's worst Perl programmer, so it'd be better for someone\n>with more Perl-fu to take point if we decide to go that route.\n\nI went ahead and implemented the parser, it was indeed easy.\n\nPatch attached.\n\n Add catalog_oidjoins.pl -- parses catalog references out of catalogs.sgml\n\n Since doc/src/sgml/catalogs.sgml is the master where catalog references\n are to be documented, if someone needs machine-readable access to\n such information, it should be extracted from this document.\n\n The added script catalog_oidjoins.pl parses the SGML and extracts\n the fields necessary to produce the same output as generated\n by findoidjoins, which has historically been copy/pasted to the README.\n\n This is to allow for easy comparison, to verify the correctness\n of catalog_oidjoins.pl, and if necessary, to update the README\n based on the information in catalogs.sgml.\n\n Helper-files:\n\n diff_oidjoins.sh\n Runs ./catalog_oidjoins.pl and compares its output\n with STDIN. 
Shows a diff of the result, without any context.\n\n test_oidjoins.sh\n Runs ./diff_oidjoins.sh for both the README\n and the output from ./findoidjoins regression.\n\n bogus_oidjoins.txt\n List of bogus diff entries to ignore,\n based on documentation in README.\n\nAfter having run make installcheck in src/test/regress,\nthe test_oidjoins.sh can be run:\n\n$ ./test_oidjoins.sh\nREADME:\n+ Join pg_catalog.pg_attrdef.adnum => pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_class.relrewrite => pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_constraint.confkey []=> pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_constraint.conkey []=> pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_db_role_setting.setrole => pg_catalog.pg_authid.oid\n+ Join pg_catalog.pg_default_acl.defaclnamespace => pg_catalog.pg_namespace.oid\n+ Join pg_catalog.pg_default_acl.defaclrole => pg_catalog.pg_authid.oid\n+ Join pg_catalog.pg_event_trigger.evtfoid => pg_catalog.pg_proc.oid\n+ Join pg_catalog.pg_event_trigger.evtowner => pg_catalog.pg_authid.oid\n+ Join pg_catalog.pg_extension.extconfig []=> pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_foreign_data_wrapper.fdwhandler => pg_catalog.pg_proc.oid\n+ Join pg_catalog.pg_foreign_data_wrapper.fdwvalidator => pg_catalog.pg_proc.oid\n+ Join pg_catalog.pg_foreign_table.ftrelid => pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_foreign_table.ftserver => pg_catalog.pg_foreign_server.oid\n+ Join pg_catalog.pg_index.indkey => pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_partitioned_table.partattrs => pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_policy.polroles []=> pg_catalog.pg_authid.oid\n+ Join pg_catalog.pg_publication.pubowner => pg_catalog.pg_authid.oid\n+ Join pg_catalog.pg_publication_rel.prpubid => pg_catalog.pg_publication.oid\n+ Join pg_catalog.pg_publication_rel.prrelid => pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_range.rngmultitypid => pg_catalog.pg_type.oid\n+ Join pg_catalog.pg_seclabel.classoid => 
pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_shseclabel.classoid => pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_statistic.staattnum => pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_statistic_ext.stxkeys => pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_subscription.subdbid => pg_catalog.pg_database.oid\n+ Join pg_catalog.pg_subscription.subowner => pg_catalog.pg_authid.oid\n+ Join pg_catalog.pg_subscription_rel.srrelid => pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_subscription_rel.srsubid => pg_catalog.pg_subscription.oid\n+ Join pg_catalog.pg_trigger.tgattr => pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_type.typsubscript => pg_catalog.pg_proc.oid\n+ Join pg_catalog.pg_user_mapping.umserver => pg_catalog.pg_foreign_server.oid\n+ Join pg_catalog.pg_user_mapping.umuser => pg_catalog.pg_authid.oid\nfindoidjoins:\n+ Join pg_catalog.pg_attrdef.adnum => pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_class.relrewrite => pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_constraint.confkey []=> pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_constraint.conkey []=> pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_db_role_setting.setrole => pg_catalog.pg_authid.oid\n+ Join pg_catalog.pg_default_acl.defaclnamespace => pg_catalog.pg_namespace.oid\n+ Join pg_catalog.pg_default_acl.defaclrole => pg_catalog.pg_authid.oid\n+ Join pg_catalog.pg_event_trigger.evtfoid => pg_catalog.pg_proc.oid\n+ Join pg_catalog.pg_event_trigger.evtowner => pg_catalog.pg_authid.oid\n+ Join pg_catalog.pg_extension.extconfig []=> pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_foreign_data_wrapper.fdwhandler => pg_catalog.pg_proc.oid\n+ Join pg_catalog.pg_foreign_data_wrapper.fdwvalidator => pg_catalog.pg_proc.oid\n+ Join pg_catalog.pg_foreign_table.ftrelid => pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_foreign_table.ftserver => pg_catalog.pg_foreign_server.oid\n+ Join pg_catalog.pg_index.indkey => pg_catalog.pg_attribute.attnum\n+ Join 
pg_catalog.pg_partitioned_table.partattrs => pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_policy.polroles []=> pg_catalog.pg_authid.oid\n+ Join pg_catalog.pg_publication.pubowner => pg_catalog.pg_authid.oid\n+ Join pg_catalog.pg_publication_rel.prpubid => pg_catalog.pg_publication.oid\n+ Join pg_catalog.pg_publication_rel.prrelid => pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_seclabel.classoid => pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_statistic_ext.stxkeys => pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_subscription.subdbid => pg_catalog.pg_database.oid\n+ Join pg_catalog.pg_subscription.subowner => pg_catalog.pg_authid.oid\n+ Join pg_catalog.pg_subscription_rel.srrelid => pg_catalog.pg_class.oid\n+ Join pg_catalog.pg_subscription_rel.srsubid => pg_catalog.pg_subscription.oid\n+ Join pg_catalog.pg_trigger.tgattr => pg_catalog.pg_attribute.attnum\n+ Join pg_catalog.pg_user_mapping.umserver => pg_catalog.pg_foreign_server.oid\n+ Join pg_catalog.pg_user_mapping.umuser => pg_catalog.pg_authid.oid\ndiff of diffs:\n21d20\n< + Join pg_catalog.pg_range.rngmultitypid => pg_catalog.pg_type.oid\n23,24d21\n< + Join pg_catalog.pg_shseclabel.classoid => pg_catalog.pg_class.oid\n< + Join pg_catalog.pg_statistic.staattnum => pg_catalog.pg_attribute.attnum\n31d27\n< + Join pg_catalog.pg_type.typsubscript => pg_catalog.pg_proc.oid", "msg_date": "Mon, 18 Jan 2021 09:41:17 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: evtfoid and evtowner missing in findoidjoins/README" } ]
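Joel's catalog_oidjoins.pl itself travels as an attachment, so it is not visible above, but the core idea (extract each column's "references" annotation from catalogs.sgml and print it in findoidjoins' "Join ... => ..." format) can be sketched briefly. The snippet below is a loose Python illustration over a simplified stand-in for the markup; the real catalogs.sgml wraps these names in more elaborate tags, and the actual script in the patch is Perl.

```python
import re

# A toy stand-in for the kind of markup found in catalogs.sgml; the real
# file wraps column and table names in richer <structfield>/<structname>
# constructs with links, so this sample is an assumption for illustration.
sample = """
<structfield>evtowner</structfield> (references <structname>pg_authid</structname>.oid)
<structfield>evtfoid</structfield> (references <structname>pg_proc</structname>.oid)
"""

def extract_joins(sgml, table):
    """Pull (column, referenced table, referenced column) triples out of
    the simplified markup and render them in findoidjoins' README format."""
    pattern = re.compile(
        r"<structfield>(\w+)</structfield>.*?"
        r"references <structname>(\w+)</structname>\.(\w+)")
    return [
        f"Join pg_catalog.{table}.{col} => pg_catalog.{ref_table}.{ref_col}"
        for col, ref_table, ref_col in pattern.findall(sgml)
    ]
```

Run against the sample, this reproduces the exact two "Join" lines that started the thread, which is the comparison trick test_oidjoins.sh relies on.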
[ { "msg_contents": "As of 257836a75, this returns:\n\n|postgres=# SELECT pg_collation_actual_version(123);\n|ERROR: cache lookup failed for collation 123\n|postgres=# \\errverbose \n|ERROR: XX000: cache lookup failed for collation 123\n|LOCATION: get_collation_version_for_oid, pg_locale.c:1754\n\nI'm of the impression that's considered to be a bad behavior for SQL accessible\nfunctions.\n\nIn v13, it did the same thing but with different language:\n\n|ts=# SELECT pg_collation_actual_version(123);\n|ERROR: collation with OID 123 does not exist\n|ts=# \\errverbose \n|ERROR: 42704: collation with OID 123 does not exist\n|LOCATION: pg_collation_actual_version, collationcmds.c:367\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 17 Jan 2021 15:59:40 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg_collation_actual_version() ERROR: cache lookup failed for\n collation 123" }, { "msg_contents": "On Mon, Jan 18, 2021 at 10:59 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> |postgres=# SELECT\n> pg_collation_actual_version(123);\n> |ERROR: cache lookup failed for collation 123\n\nYeah, not a great user experience. 
Will fix next week; perhaps\nget_collation_version_for_oid() needs missing_ok and found flags, or\nsomething like that.\n\nI'm also wondering if it would be better to name that thing with\n\"current\" rather than \"actual\".\n\n\n", "msg_date": "Mon, 18 Jan 2021 11:22:13 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_collation_actual_version() ERROR: cache lookup failed for\n collation 123" }, { "msg_contents": "On Mon, Jan 18, 2021 at 11:22 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Jan 18, 2021 at 10:59 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > |postgres=# SELECT\n> > pg_collation_actual_version(123);\n> > |ERROR: cache lookup failed for collation 123\n>\n> Yeah, not a great user experience. 
For example, we do that\nin the partition functions, for objectaddress functions, etc. That\nwould make this patch set simpler, switching\nget_collation_version_for_oid() to just use a missing_ok argument.\nAnd that would be more consistent with the other syscache lookup\nfunctions we have here and there in the tree.\n--\nMichael", "msg_date": "Wed, 17 Feb 2021 16:04:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_collation_actual_version() ERROR: cache lookup failed for\n collation 123" }, { "msg_contents": "On Wed, Feb 17, 2021 at 8:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Feb 17, 2021 at 03:08:36PM +1300, Thomas Munro wrote:\n> > tp = SearchSysCache1(COLLOID, ObjectIdGetDatum(oid));\n> > if (!HeapTupleIsValid(tp))\n> > + {\n> > + if (found)\n> > + {\n> > + *found = false;\n> > + return NULL;\n> > + }\n> > elog(ERROR, \"cache lookup failed for collation %u\", oid);\n> > + }\n> > collform = (Form_pg_collation) GETSTRUCT(tp);\n> > version = get_collation_actual_version(collform->collprovider,\n> > NameStr(collform->collcollate));\n> > + if (found)\n> > + *found = true;\n> > }\n>\n> FWIW, we usually prefer using NULL instead of an error for the result\n> of a system function if an object cannot be found because it allows\n> users to not get failures in a middle of a full table scan if things\n> like an InvalidOid is mixed in the data set. For example, we do that\n> in the partition functions, for objectaddress functions, etc. That\n> would make this patch set simpler, switching\n> get_collation_version_for_oid() to just use a missing_ok argument.\n> And that would be more consistent with the other syscache lookup\n> functions we have here and there in the tree.\n\nI guess I was trying to preserve a distinction between \"unknown OID\"\nand \"this is a collation OID, but I don't have version information for\nit\" (for example, \"C.utf8\"). 
But it hardly matters, and your\nsuggestion works for me. Thanks for looking!", "msg_date": "Thu, 18 Feb 2021 10:45:53 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_collation_actual_version() ERROR: cache lookup failed for\n collation 123" }, { "msg_contents": "On Thu, Feb 18, 2021 at 10:45:53AM +1300, Thomas Munro wrote:\n> I guess I was trying to preserve a distinction between \"unknown OID\"\n> and \"this is a collation OID, but I don't have version information for\n> it\" (for example, \"C.utf8\"). But it hardly matters, and your\n> suggestion works for me. Thanks for looking!\n\nCould you just add a test with pg_collation_current_version(0)?\n\n+ pg_strncasecmp(\"POSIX.\", collcollate, 6) != 0)\nI didn't know that \"POSIX.\" was possible.\n\nWhile on it, I guess that you could add tab completion support for\nCREATE COLLATION foo FROM. And shouldn't CREATE COLLATION complete\nwith the list of existing collation?\n--\nMichael", "msg_date": "Thu, 18 Feb 2021 16:15:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_collation_actual_version() ERROR: cache lookup failed for\n collation 123" }, { "msg_contents": "On Thu, Feb 18, 2021 at 8:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Could you just add a test with pg_collation_current_version(0)?\n\nDone.\n\n> + pg_strncasecmp(\"POSIX.\", collcollate, 6) != 0)\n>\n> I didn't know that \"POSIX.\" was possible.\n\nYeah, that isn't valid on my (quite current) GNU or FreeBSD systems,\nand doesn't show up in their \"locale -a\" output, but I wondered about\nthat theoretical possibility and googled it, and that showed that it\ndoes exist out there, though I don't know where/which versions,\npossibly only a long time ago. You know what, let's just forget that\nbit, it's not necessary. 
Removed.\n\n> While on it, I guess that you could add tab completion support for\n> CREATE COLLATION foo FROM.\n\nGood point. Added.\n\n> And shouldn't CREATE COLLATION complete\n> with the list of existing collation?\n\nRight, fixed.", "msg_date": "Mon, 22 Feb 2021 18:34:22 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_collation_actual_version() ERROR: cache lookup failed for\n collation 123" }, { "msg_contents": "On Mon, Feb 22, 2021 at 06:34:22PM +1300, Thomas Munro wrote:\n> On Thu, Feb 18, 2021 at 8:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Could you just add a test with pg_collation_current_version(0)?\n> \n> Done.\n> \n>> + pg_strncasecmp(\"POSIX.\", collcollate, 6) != 0)\n>>\n>> I didn't know that \"POSIX.\" was possible.\n> \n> Yeah, that isn't valid on my (quite current) GNU or FreeBSD systems,\n> and doesn't show up in their \"locale -a\" output, but I wondered about\n> that theoretical possibility and googled it, and that showed that it\n> does exist out there, though I don't know where/which versions,\n> possibly only a long time ago. You know what, let's just forget that\n> bit, it's not necessary. Removed.\n> \n>> While on it, I guess that you could add tab completion support for\n>> CREATE COLLATION foo FROM.\n> \n> Good point. 
Added.\n> \n>> And shouldn't CREATE COLLATION complete\n>> with the list of existing collation?\n> \n> Rght, fixed.\n\nLooks good to me, thanks!\n--\nMichael", "msg_date": "Mon, 22 Feb 2021 16:27:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_collation_actual_version() ERROR: cache lookup failed for\n collation 123" }, { "msg_contents": "On Mon, Feb 22, 2021 at 8:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Looks good to me, thanks!\n\nPushed, with one further small change: I realised that tab completion\nshould use a \"schema\" query.\n\n\n", "msg_date": "Tue, 23 Feb 2021 00:30:18 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_collation_actual_version() ERROR: cache lookup failed for\n collation 123" } ]
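The missing_ok convention that the thread converges on is a recurring PostgreSQL idiom: a lookup helper either raises the "cache lookup failed" error or, when the caller asks for missing_ok behavior, quietly returns no result, which the SQL-level function then surfaces as NULL. Here is a deliberately simplified Python analogue of that control flow, not the C code from the commit; the dictionary stands in for the syscache.

```python
def get_collation_version(catalog, oid, missing_ok=False):
    """Return the version string for a collation OID. With missing_ok=True
    an unknown OID yields None (the NULL-result convention Michael
    describes); otherwise it raises, mirroring the elog(ERROR) path."""
    row = catalog.get(oid)
    if row is None:
        if missing_ok:
            return None
        raise LookupError(f"cache lookup failed for collation {oid}")
    return row["version"]
```

The point of the convention is visible at the call site: a query scanning a whole table of possibly stale OIDs can pass missing_ok and keep going instead of failing mid-scan.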
[ { "msg_contents": "\nHi,\n\nI find that the outputstr variable in logicalrep_write_tuple() is only used in\n`else` branch, I think we can narrow the scope, just like variable outputbytes\nin `if` branch (for better readability).\n\n /*\n * Send in binary if requested and type has suitable send function.\n */\n if (binary && OidIsValid(typclass->typsend))\n {\n bytea *outputbytes;\n int len;\n\n pq_sendbyte(out, LOGICALREP_COLUMN_BINARY);\n outputbytes = OidSendFunctionCall(typclass->typsend, values[i]);\n len = VARSIZE(outputbytes) - VARHDRSZ;\n pq_sendint(out, len, 4); /* length */\n pq_sendbytes(out, VARDATA(outputbytes), len); /* data */\n pfree(outputbytes);\n }\n else\n {\n pq_sendbyte(out, LOGICALREP_COLUMN_TEXT);\n outputstr = OidOutputFunctionCall(typclass->typoutput, values[i]);\n pq_sendcountedtext(out, outputstr, strlen(outputstr), false);\n pfree(outputstr);\n }\n\nAttached is a small patch to fix it.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Mon, 18 Jan 2021 15:46:32 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Narrow the scope of the variable outputstr in logicalrep_write_tuple" }, { "msg_contents": "On Mon, Jan 18, 2021 at 1:16 PM japin <japinli@hotmail.com> wrote:\n>\n>\n> Hi,\n>\n> I find that the outputstr variable in logicalrep_write_tuple() only use in\n> `else` branch, I think we can narrow the scope, just like variable outputbytes\n> in `if` branch (for more readable).\n>\n> /*\n> * Send in binary if requested and type has suitable send function.\n> */\n> if (binary && OidIsValid(typclass->typsend))\n> {\n> bytea *outputbytes;\n> int len;\n>\n> pq_sendbyte(out, LOGICALREP_COLUMN_BINARY);\n> outputbytes = OidSendFunctionCall(typclass->typsend, values[i]);\n> len = VARSIZE(outputbytes) - VARHDRSZ;\n> pq_sendint(out, len, 4); /* length */\n> pq_sendbytes(out, VARDATA(outputbytes), len); /* data */\n> pfree(outputbytes);\n> }\n> else\n> {\n> pq_sendbyte(out, 
LOGICALREP_COLUMN_TEXT);\n> outputstr = OidOutputFunctionCall(typclass->typoutput, values[i]);\n> pq_sendcountedtext(out, outputstr, strlen(outputstr), false);\n> pfree(outputstr);\n> }\n>\n> Attached is a samll patch to fix it.\n\n+1. Binary mode uses block level variable outputbytes, so making\noutputstr block level is fine IMO.\n\nPatch basically looks good to me, but it doesn't apply on my system.\nLooks like it's not created with git commit. Please create the patch\nwith git commit command.\n\ngit apply /mnt/hgfs/Shared/narrow-the-scope-of-the-variable-in-logicalrep_write_tuple.patch\nerror: corrupt patch at line 10\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Jan 2021 13:29:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Narrow the scope of the variable outputstr in\n logicalrep_write_tuple" }, { "msg_contents": "On Mon, 18 Jan 2021 at 15:59, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Mon, Jan 18, 2021 at 1:16 PM japin <japinli@hotmail.com> wrote:\n>>\n>>\n>> Hi,\n>>\n>> I find that the outputstr variable in logicalrep_write_tuple() only use in\n>> `else` branch, I think we can narrow the scope, just like variable outputbytes\n>> in `if` branch (for more readable).\n>>\n>> /*\n>> * Send in binary if requested and type has suitable send function.\n>> */\n>> if (binary && OidIsValid(typclass->typsend))\n>> {\n>> bytea *outputbytes;\n>> int len;\n>>\n>> pq_sendbyte(out, LOGICALREP_COLUMN_BINARY);\n>> outputbytes = OidSendFunctionCall(typclass->typsend, values[i]);\n>> len = VARSIZE(outputbytes) - VARHDRSZ;\n>> pq_sendint(out, len, 4); /* length */\n>> pq_sendbytes(out, VARDATA(outputbytes), len); /* data */\n>> pfree(outputbytes);\n>> }\n>> else\n>> {\n>> pq_sendbyte(out, LOGICALREP_COLUMN_TEXT);\n>> outputstr = OidOutputFunctionCall(typclass->typoutput, values[i]);\n>> pq_sendcountedtext(out, 
outputstr, strlen(outputstr), false);\n>> pfree(outputstr);\n>> }\n>>\n>> Attached is a samll patch to fix it.\n>\n> +1. Binary mode uses block level variable outputbytes, so making\n> outputstr block level is fine IMO.\n>\n> Patch basically looks good to me, but it doesn't apply on my system.\n> Looks like it's not created with git commit. Please create the patch\n> with git commit command.\n>\n> git apply /mnt/hgfs/Shared/narrow-the-scope-of-the-variable-in-logicalrep_write_tuple.patch\n> error: corrupt patch at line 10\n>\n\nThanks for reviewing! Attached v2 as you suggested.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Mon, 18 Jan 2021 16:09:17 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Narrow the scope of the variable outputstr in\n logicalrep_write_tuple" }, { "msg_contents": "On Mon, Jan 18, 2021 at 1:39 PM japin <japinli@hotmail.com> wrote:\n>\n>\n> On Mon, 18 Jan 2021 at 15:59, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Mon, Jan 18, 2021 at 1:16 PM japin <japinli@hotmail.com> wrote:\n> >>\n> >>\n> >> Hi,\n> >>\n> >> I find that the outputstr variable in logicalrep_write_tuple() only use in\n> >> `else` branch, I think we can narrow the scope, just like variable outputbytes\n> >> in `if` branch (for more readable).\n> >>\n> >> /*\n> >> * Send in binary if requested and type has suitable send function.\n> >> */\n> >> if (binary && OidIsValid(typclass->typsend))\n> >> {\n> >> bytea *outputbytes;\n> >> int len;\n> >>\n> >> pq_sendbyte(out, LOGICALREP_COLUMN_BINARY);\n> >> outputbytes = OidSendFunctionCall(typclass->typsend, values[i]);\n> >> len = VARSIZE(outputbytes) - VARHDRSZ;\n> >> pq_sendint(out, len, 4); /* length */\n> >> pq_sendbytes(out, VARDATA(outputbytes), len); /* data */\n> >> pfree(outputbytes);\n> >> }\n> >> else\n> >> {\n> >> pq_sendbyte(out, LOGICALREP_COLUMN_TEXT);\n> >> outputstr = 
OidOutputFunctionCall(typclass->typoutput, values[i]);\n> >> pq_sendcountedtext(out, outputstr, strlen(outputstr), false);\n> >> pfree(outputstr);\n> >> }\n> >>\n> >> Attached is a samll patch to fix it.\n> >\n> > +1. Binary mode uses block level variable outputbytes, so making\n> > outputstr block level is fine IMO.\n> >\n> > Patch basically looks good to me, but it doesn't apply on my system.\n> > Looks like it's not created with git commit. Please create the patch\n> > with git commit command.\n> >\n> > git apply /mnt/hgfs/Shared/narrow-the-scope-of-the-variable-in-logicalrep_write_tuple.patch\n> > error: corrupt patch at line 10\n> >\n>\n> Thanks for reviewing! Attached v2 as you suggested.\n\nThanks. v2 patch LGTM.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Jan 2021 17:18:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Narrow the scope of the variable outputstr in\n logicalrep_write_tuple" }, { "msg_contents": "japin <japinli@hotmail.com> writes:\n> I find that the outputstr variable in logicalrep_write_tuple() only use in\n> `else` branch, I think we can narrow the scope, just like variable outputbytes\n> in `if` branch (for more readable).\n\nAgreed, done.\n\nFor context, I'm not usually in favor of making one-off stylistic\nimprovements: the benefit seldom outweighs the risk of creating\nmerge hazards for future back-patching. But in this case, the\ncode involved is mostly new in v14, so improving it now doesn't\ncost anything in that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Jan 2021 15:58:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Narrow the scope of the variable outputstr in\n logicalrep_write_tuple" } ]
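For context on the branch being discussed: each column is framed on the wire as a one-byte kind marker followed by a length-prefixed payload (pq_sendcountedtext and the pq_sendint/pq_sendbytes pair both emit a 4-byte length before the data). The following Python sketch mirrors the shape of that framing only; the marker bytes here are placeholders standing in for the LOGICALREP_COLUMN_* constants, not values taken from the protocol definition.

```python
import struct

def write_column(value, binary):
    """Frame one column value the way the quoted logicalrep_write_tuple()
    branches do: one marker byte, a 4-byte big-endian length, then the
    payload. 'b'/'t' are illustrative markers for binary vs. text mode."""
    if binary:
        marker, payload = b"b", bytes(value)          # type's send-function output
    else:
        marker, payload = b"t", str(value).encode()   # type's text output function
    return marker + struct.pack("!i", len(payload)) + payload
```

Seen this way, the patch's point is easy to state: only the text branch ever needs an output-string variable, so its declaration belongs inside that branch, just as outputbytes already lives inside the binary one.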
[ { "msg_contents": "Hi all\n\nThe attached comments-only patch expands the signal handling section in\nmiscadmin.h a bit so that it mentions ProcSignal, deferred signal handling\nduring blocking calls, etc. It adds cross-refs between major signal\nhandling routines and the miscadmin comment to help readers track the\nvarious scattered but inter-related code.\n\nI hope this helps some new developers in future.", "msg_date": "Mon, 18 Jan 2021 15:51:30 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "[PATCH] Cross-reference comments on signal handling logic" }, { "msg_contents": "\n\n> On Jan 17, 2021, at 11:51 PM, Craig Ringer <craig.ringer@enterprisedb.com> wrote:\n> \n> <v1-0001-Comments-and-cross-references-for-signal-handling.patch>\n\nIn src/backend/postmaster/interrupt.c:\n\n+ * These handlers are NOT used by normal user backends as they do not support\n\nvs.\n\n+ * Most backends use this handler.\n\nThese two comments seem to contradict. If interrupt.c contains handlers that normal user backends to not use, then how can it be that most backends use one of the handlers in the file?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Mar 2021 10:22:35 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Cross-reference comments on signal handling logic" }, { "msg_contents": "> On 1 Mar 2021, at 19:22, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n\n>> On Jan 17, 2021, at 11:51 PM, Craig Ringer <craig.ringer@enterprisedb.com> wrote:\n>> \n>> <v1-0001-Comments-and-cross-references-for-signal-handling.patch>\n> \n> In src/backend/postmaster/interrupt.c:\n> \n> + * These handlers are NOT used by normal user backends as they do not support\n> \n> vs.\n> \n> + * Most backends use this handler.\n> \n> These two comments seem to contradict. 
If interrupt.c contains handlers that normal user backends do not use, then how can it be that most backends use one of the handlers in the file?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Mar 2021 10:22:35 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Cross-reference comments on signal handling logic" }, { "msg_contents": "> On 1 Mar 2021, at 19:22, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n\n>> On Jan 17, 2021, at 11:51 PM, Craig Ringer <craig.ringer@enterprisedb.com> wrote:\n>> \n>> <v1-0001-Comments-and-cross-references-for-signal-handling.patch>\n> \n> In src/backend/postmaster/interrupt.c:\n> \n> + * These handlers are NOT used by normal user backends as they do not support\n> \n> vs.\n> \n> + * Most backends use this handler.\n> \n> These two comments seem to contradict. 
[ { "msg_contents": "Hi folks\n\nThe attached patch expands the xfunc docs and bgworker docs a little,\nproviding a starting point for developers to learn how to do some common\ntasks the Postgres Way.\n\nIt mentions in brief these topics:\n\n* longjmp() based exception handling with elog(ERROR), PG_CATCH() and\nPG_RE_THROW() etc\n* Latches, spinlocks, LWLocks, heavyweight locks, condition variables\n* shm, DSM, DSA, shm_mq\n* syscache, relcache, relation_open(), invalidations\n* deferred signal handling, CHECK_FOR_INTERRUPTS()\n* Resource cleanup hooks and callbacks like on_exit, before_shmem_exit, the\nresowner callbacks, etc\n* signal handling in bgworkers\n\nAll very superficial, but all things I really wish I'd known a little\nabout, or even that I needed to learn about, when I started working on\npostgres.\n\nI'm not sure it's in quite the right place. I wonder if there should be a\nseparate part of xfunc.sgml that covers the slightly more advanced bits of\npostgres backend and function coding like this, lists relevant README files\nin the source tree, etc.\n\nI avoided going into details like how resource owners work. 
I don't want\nthe docs to have to cover all that in detail; what I hope to do is start\nproviding people with clear references to the right place in the code,\nREADMEs, etc to look when they need to understand specific topics.", "msg_date": "Mon, 18 Jan 2021 15:56:47 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "[PATCH] More docs on what to do and not do in extension code" }, { "msg_contents": "On Mon, Jan 18, 2021 at 1:27 PM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n> Hi folks\n>\n> The attached patch expands the xfunc docs and bgworker docs a little, providing a starting point for developers to learn how to do some common tasks the Postgres Way.\n>\n> It mentions in brief these topics:\n>\n> * longjmp() based exception handling with elog(ERROR), PG_CATCH() and PG_RE_THROW() etc\n> * Latches, spinlocks, LWLocks, heavyweight locks, condition variables\n> * shm, DSM, DSA, shm_mq\n> * syscache, relcache, relation_open(), invalidations\n> * deferred signal handling, CHECK_FOR_INTERRUPTS()\n> * Resource cleanup hooks and callbacks like on_exit, before_shmem_exit, the resowner callbacks, etc\n> * signal handling in bgworkers\n>\n> All very superficial, but all things I really wish I'd known a little about, or even that I needed to learn about, when I started working on postgres.\n>\n> I'm not sure it's in quite the right place. I wonder if there should be a separate part of xfunc.sgml that covers the slightly more advanced bits of postgres backend and function coding like this, lists relevant README files in the source tree, etc.\n>\n> I avoided going into details like how resource owners work. 
I don't want the docs to have to cover all that in detail; what I hope to do is start providing people with clear references to the right place in the code, READMEs, etc to look when they need to understand specific topics.\n\nThanks for the patch.\n\nHere are some comments:\n\n[1]\n background worker's main function, and must be unblocked by it; this is to\n allow the process to customize its signal handlers, if necessary.\n- Signals can be unblocked in the new process by calling\n- <function>BackgroundWorkerUnblockSignals</function> and blocked by calling\n- <function>BackgroundWorkerBlockSignals</function>.\n+ It is important that all background workers set up and unblock signal\n+ handling before they enter their main loops. Signal handling in background\n+ workers is discussed separately in <xref linkend=\"bgworker-signals\"/>.\n </para>\n\nIMO, we can retain the statement about BackgroundWorkerUnblockSignals\nand BackgroundWorkerBlockSignals, but mention the link to\n\"bgworker-signals\" for more details and move the statement \"it's\nimportant to unblock signals before enter their main loop\" to\n\"bgworker-signals\" section and we can also reason there the\nconsequences if not done.\n\n[2]\n+ interupt-aware APIs</link> for the purpose. 
Do not\n<function>usleep()</function>,\n+ <function>system()</function>, make blocking system calls, etc.\n+ </para>\n\nIs it \"Do not use <function>usleep()</function>,\n<function>system()</function> or make blocking system calls etc.\" ?\n\n[3] IMO, we can remove following from \"bgworker-signals\" if we retain\nit where currently it is, as discussed in [1].\n+ Signals can be unblocked in the new process by calling\n+ <function>BackgroundWorkerUnblockSignals</function> and blocked by calling\n+ <function>BackgroundWorkerBlockSignals</function>.\n\n[4] Can we say\n+ The default signal handlers set up for background workers <emphasis>do\n+ default background worker signal handlers, it should call\n\ninstead of\n+ The default signal handlers installed for background workers <emphasis>do\n+ default background worker signal handling it should call\n\n[5] IMO, we can have something like below\n+ request, etc. Set up these handlers before unblocking signals as\n+ shown below:\n\ninstead of\n+ request, etc. To install these handlers, before unblocking interrupts\n+ run:\n\n[6] I think logs and errors either elog() or ereport can be used, so how about\n+ Use <function>elog()</function> or <function>ereport()</function> for\n+ logging output or raising errors instead of any direct stdio calls.\n\ninstead of\n+ Use <function>elog()</function> and <function>ereport()</function> for\n+ logging output and raising errors instead of any direct stdio calls.\n\n[7] Can we use child processes instead of subprocess ? If okay in\nother places in the patch as well.\n+ and should only use the main thread. PostgreSQL generally\nuses child processes\n+ that coordinate over shared memory instead of threads - for\ninstance, see\n+ <xref linkend=\"bgworker\"/>.\n\ninstead of\n+ and should only use the main thread. 
PostgreSQL generally\nuses subprocesses\n+ that coordinate over shared memory instead of threads - see\n+ <xref linkend=\"bgworker\"/>.\n\n[8] Why should file descriptor manager API be used to execute\nsubprocesses/child processes?\n+ To execute subprocesses and open files, use the routines provided by\n+ the file descriptor manager like <function>BasicOpenFile</function>\n+ and <function>OpenPipeStream</function> instead of a direct\n\n[9] \"should always be\"? even if it's a blocking extesion, does it\nwork? If our intention is to recommend the developers, maybe we should\navoid using the term \"should\" in the patch in other places as well.\n+ Extension code should always be structured as a non-blocking\n\n[10] I think it is\n+ you should avoid using <function>sleep()</function> or\n<function>usleep()</function>\n\ninstead of\n+ you should <function>sleep()</function> or\n<function>usleep()</function>\n\n\n[11] I think it is\n+ block if this happens. So cleanup of resources is not\nentirely managed by PostgreSQL, it\n+ must be handled using appropriate callbacks provided by PostgreSQL\n\ninstead of\n+ block if this happens. So all cleanup of resources not already\n+ managed by the PostgreSQL runtime must be handled using appropriate\n\n[12] I think it is\n+ found in corresponding PostgreSQL header and source files.\n\ninstead of\n+ found in the PostgreSQL headers and sources.\n\n[13] I think it is\n+ Use PostgreSQL runtime concurrency and synchronisation primitives\n\n+ between the PostgreSQL processes. These include signals and\nProcSignal multiplexed\n\ninstead of\n+ Use the PostgreSQL runtime's concurrency and synchronisation primitives\n\n+ between PostgreSQL processes. 
These include signals and\nProcSignal multiplexed\n\n[14] Is it \"relation/table based state management\"?\n+ Sometimes relation-based state management for extensions is not\n\n[15] I think it is\n+ use PostgreSQL shared-memory based inter-process communication\n\ninstead of\n+ use PostgreSQL's shared-memory based inter-process communication\n\n[16] I think it is\n+ or shared memory message queues (<acronym>shm_mq</acronym>). Examples\n+ usage of some of these features can be found in the\n+ <filename>src/test/modules/test_shm_mq/</filename> sample\nextension. Others\n\ninstead of\n+ or shared memory message queues (<acronym>shm_mq</acronym>). Examples\n+ of the use of some these features can be found in the\n+ <filename>src/test/modules/test_shm_mq/</filename> example\nextension. Others\n\n[17] I think it is\n+ syscache entries, as this can cause subtle bugs. See\n\ninstead of\n+ syscache cache entries, as this can cause subtle bugs. See\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Jan 2021 20:03:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] More docs on what to do and not do in extension code" }, { "msg_contents": "Hi\n\nThanks so much for reading over this!\n\nWould you mind attaching a revised version of the patch with your edits?\nOtherwise I'll go and merge them in once you've had your say on my comments\ninline below.\n\nBruce, Robert, can I have an opinion from you on how best to locate and\nstructure these docs, or whether you think they're suitable for the main\ndocs at all? 
See patch upthread.\n\nOn Tue, 19 Jan 2021 at 22:33, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n>\n> Here are some comments:\n>\n> [1]\n> background worker's main function, and must be unblocked by it; this is\n> to\n> allow the process to customize its signal handlers, if necessary.\n> - Signals can be unblocked in the new process by calling\n> - <function>BackgroundWorkerUnblockSignals</function> and blocked by\n> calling\n> - <function>BackgroundWorkerBlockSignals</function>.\n> + It is important that all background workers set up and unblock signal\n> + handling before they enter their main loops. Signal handling in\n> background\n> + workers is discussed separately in <xref linkend=\"bgworker-signals\"/>.\n> </para>\n>\n\nI wasn't sure either way on that, see your [3] below.\n\n[2]\n> + interupt-aware APIs</link> for the purpose. Do not\n> <function>usleep()</function>,\n> + <function>system()</function>, make blocking system calls, etc.\n> + </para>\n>\n> Is it \"Do not use <function>usleep()</function>,\n> <function>system()</function> or make blocking system calls etc.\" ?\n>\n\nRight. 
Good catch.\n\n[3] IMO, we can remove following from \"bgworker-signals\" if we retain\n> it where currently it is, as discussed in [1].\n> + Signals can be unblocked in the new process by calling\n> + <function>BackgroundWorkerUnblockSignals</function> and blocked by\n> calling\n> + <function>BackgroundWorkerBlockSignals</function>.\n>\n\nIf so, need to mention that they start blocked and link to the main text\nwhere that's mentioned.\n\nThat's part of why I moved this chunk into the signal section.\n\n[4] Can we say\n> + The default signal handlers set up for background workers <emphasis>do\n> + default background worker signal handlers, it should call\n>\n> instead of\n> + The default signal handlers installed for background workers\n> <emphasis>do\n> + default background worker signal handling it should call\n>\n\nHuh?\n\nI don't understand this proposal.\n\ns/install/set up/g?\n\n[5] IMO, we can have something like below\n> + request, etc. Set up these handlers before unblocking signals as\n> + shown below:\n>\n> instead of\n> + request, etc. To install these handlers, before unblocking interrupts\n> + run:\n>\n\nDitto\n\n[6] I think logs and errors either elog() or ereport can be used, so how\n> about\n> + Use <function>elog()</function> or <function>ereport()</function>\n> for\n> + logging output or raising errors instead of any direct stdio\n> calls.\n>\n> instead of\n> + Use <function>elog()</function> and\n> <function>ereport()</function> for\n> + logging output and raising errors instead of any direct stdio\n> calls.\n>\n\nOK.\n\n[7] Can we use child processes instead of subprocess ? If okay in\n> other places in the patch as well.\n>\n\nFine with me. 
The point is really that they're non-postgres processes being\nspawned by a backend, and that doing so must be done carefully.\n\n[8] Why should file descriptor manager API be used to execute\n> subprocesses/child processes?\n> + To execute subprocesses and open files, use the routines provided\n> by\n> + the file descriptor manager like\n> <function>BasicOpenFile</function>\n> + and <function>OpenPipeStream</function> instead of a direct\n>\n\nYeah, that wording is confusing, agreed. The point was that you shouldn't\nuse system() or popen(), you should OpenPipeStream(). And similarly, you\nshould avoid fopen() etc and instead use the Pg wrapper BasicOpenFile().\n\n\"\nPostgreSQL backends are required to limit the number of file descriptors\nthey\nopen. To open files, use postgres file descriptor manager routines like\nBasicOpenFile()\ninstead of directly using open() or fopen(). To open pipes to or from\nexternal processes,\nuse OpenPipeStream() instead of popen().\n\"\n\n?\n\n\n> [9] \"should always be\"? even if it's a blocking extesion, does it\n> work? If our intention is to recommend the developers, maybe we should\n> avoid using the term \"should\" in the patch in other places as well.\n>\n\nThe trouble is that it's a bit ... fuzzy.\n\nYou can get away with blocking for short periods without responding to\nsignals, but it's a \"how long is a piece of string\" issue.\n\n\"should be\" is fine.\n\nA hard \"must\" or \"must not\" would be misleading. But this isn't the place\nto go into all the details of how time sensitive (or not) interrupt\nhandling of different kinds is in different places for different worker\ntypes.\n\n\n> [11] I think it is\n> + block if this happens. So cleanup of resources is not\n> entirely managed by PostgreSQL, it\n> + must be handled using appropriate callbacks provided by PostgreSQL\n>\n> instead of\n> + block if this happens. 
So all cleanup of resources not already\n> + managed by the PostgreSQL runtime must be handled using\n> appropriate\n>\n\nI don't agree with the proposed new wording here.\n\nDelete the \"So all\" from my original, or\n\n... Cleanup of any resources that are not managed\n> by the PostgreSQL runtime must be handled using appropriate ...\n>\n\n?\n\n\n> [12] I think it is\n> + found in corresponding PostgreSQL header and source files.\n>\n> instead of\n> + found in the PostgreSQL headers and sources.\n>\n\nSure.\n\n\n> [13] I think it is\n> + Use PostgreSQL runtime concurrency and synchronisation primitives\n>\n> + between the PostgreSQL processes. These include signals and\n> ProcSignal multiplexed\n>\n> instead of\n> + Use the PostgreSQL runtime's concurrency and synchronisation\n> primitives\n>\n> + between PostgreSQL processes. These include signals and\n> ProcSignal multiplexed\n>\n\nOK.\n\n[14] Is it \"relation/table based state management\"?\n> + Sometimes relation-based state management for extensions is not\n>\n\nHopefully someone who's writing an extension knows that relation mostly ==\ntable. A relation could be a generic xlog relation etc too. So I think this\nis correct as-is.\n\n\n> [15] I think it is\n> + use PostgreSQL shared-memory based inter-process communication\n>\n> instead of\n> + use PostgreSQL's shared-memory based inter-process communication\n>\n\nProbably a linguistic preference, I don't mind.\n\n[16] I think it is\n> + or shared memory message queues (<acronym>shm_mq</acronym>).\n> Examples\n> + usage of some of these features can be found in the\n> + <filename>src/test/modules/test_shm_mq/</filename> sample\n> extension. Others\n>\n> instead of\n> + or shared memory message queues (<acronym>shm_mq</acronym>).\n> Examples\n> + of the use of some these features can be found in the\n> + <filename>src/test/modules/test_shm_mq/</filename> example\n> extension. Others\n>\n\nIt'd have to be \"Example usage\" but sure. 
I don't mind either version after\nthat correction.\n\n\n> [17] I think it is\n> + syscache entries, as this can cause subtle bugs. See\n>\n> instead of\n> + syscache cache entries, as this can cause subtle bugs. See\n>\n\nPIN Number :)\n\nSure, agreed.\n\nI really appreciate the proof read and comments.\n\nDo you think I missed anything crucial? I've written some material that\nsummarises pg's concurrency and IPC primitives at a high level but it's\nstill too much to go into this docs section IMO.", "msg_date": "Fri, 22 Jan 2021 14:36:48 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] More docs on what to do and not do in extension code" }, { "msg_contents": "On 1/22/21 1:36 AM, Craig Ringer wrote:\n> \n> Would you mind attaching a revised version of the patch with your edits? \n> Otherwise I'll go and merge them in once you've had your say on my \n> comments inline below.\n\nBharath, do the revisions in [1] look OK to you?\n\n> Bruce, Robert, can I have an opinion from you on how best to locate and \n> structure these docs, or whether you think they're suitable for the main \n> docs at all? 
See patch upthread.\n\nBruce, Robert, any thoughts here?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://www.postgresql.org/message-id/CAGRY4nyjHh-Tm89A8eS1x%2BJtZ-qHU7wY%2BR0DEEtWfv5TQ3HcGA%40mail.gmail.com\n\n\n", "msg_date": "Thu, 25 Mar 2021 08:49:44 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] More docs on what to do and not do in extension code" }, { "msg_contents": "On Thu, Mar 25, 2021 at 08:49:44AM -0400, David Steele wrote:\n> On 1/22/21 1:36 AM, Craig Ringer wrote:\n> > \n> > Would you mind attaching a revised version of the patch with your edits?\n> > Otherwise I'll go and merge them in once you've had your say on my\n> > comments inline below.\n> \n> Bharath, do the revisions in [1] look OK to you?\n> \n> > Bruce, Robert, can I have an opinion from you on how best to locate and\n> > structure these docs, or whether you think they're suitable for the main\n> > docs at all? See patch upthread.\n> \n> Bruce, Robert, any thoughts here?\n\nI know I sent an email earlier this month saying we shouldn't\nover-document the backend hooks because the code could drift away from\nthe README content:\n\n\thttps://www.postgresql.org/message-id/20210309172049.GD26575%40momjian.us\n\t\n\tAgreed. 
If you document the hooks too much, it allows them to drift\n\taway from matching the code, which makes the hook documentation actually\n\tworse than having no hook documentation at all.\n\nHowever, for this doc patch, the content seem to be more strategic, so\nless likely to change, and hard to figure out from the code directly.\nTherefore, I think this would be a useful addition to the docs.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 25 Mar 2021 18:15:49 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] More docs on what to do and not do in extension code" }, { "msg_contents": "On Fri, 26 Mar 2021 at 06:15, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Mar 25, 2021 at 08:49:44AM -0400, David Steele wrote:\n> > On 1/22/21 1:36 AM, Craig Ringer wrote:\n> > >\n> > > Would you mind attaching a revised version of the patch with your\n> edits?\n> > > Otherwise I'll go and merge them in once you've had your say on my\n> > > comments inline below.\n> >\n> > Bharath, do the revisions in [1] look OK to you?\n> >\n> > > Bruce, Robert, can I have an opinion from you on how best to locate and\n> > > structure these docs, or whether you think they're suitable for the\n> main\n> > > docs at all? See patch upthread.\n> >\n> > Bruce, Robert, any thoughts here?\n>\n> I know I sent an email earlier this month saying we shouldn't\n> over-document the backend hooks because the code could drift away from\n> the README content:\n>\n>\n> https://www.postgresql.org/message-id/20210309172049.GD26575%40momjian.us\n>\n> Agreed. 
If you document the hooks too much, it allows them to\n> drift\n> away from matching the code, which makes the hook documentation\n> actually\n> worse than having no hook documentation at all.\n>\n> However, for this doc patch, the content seem to be more strategic, so\n> less likely to change, and hard to figure out from the code directly.\n> Therefore, I think this would be a useful addition to the docs.\n>\n\nThanks for the kind words. It's good to hear that it may be useful. Let me\nknow if anything further is needed.", "msg_date": "Fri, 26 Mar 2021 16:40:08 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] More docs on what to do and not do in extension code" }, { "msg_contents": "On Mon, 2021-01-18 at 15:56 +0800, Craig Ringer wrote:\n> The attached patch expands the xfunc docs and bgworker docs a little, providing a starting point for developers\n> to learn how to do some common tasks the Postgres Way.\n\nI like these changes!\n\nHere is a review:\n\n+ <para>\n+ See <xref linkend=\"xfunc-shared-addin\"/> for information on how to\n+ request extension shared memory allocations, <literal>LWLock</literal>s,\n+ etc.\n+ </para>\n\nThis doesn't sound very English to me, and I don't see the point in\nrepeating parts of the enumeration. How about\n\n See ... for detailed information how to allocate these resources.\n\n+ <para>\n+ If a background worker needs to sleep or wait for activity it should\n\nMissing comma after \"activity\".\n\n+ always use <link linkend=\"xfunc-sleeping-interrupts-blocking\">PostgreSQL's\n+ interupt-aware APIs</link> for the purpose. Do not <function>usleep()</function>,\n+ <function>system()</function>, make blocking system calls, etc.\n+ </para>\n\n\"system\" is not a verb. Suggestion:\n\n Do not use <function>usleep()</function>, <function>system()</function>\n or other blocking system calls.\n\n+ <para>\n+ The <filename>src/test/modules/worker_spi</filename> and\n+ <filename>src/test/modules/test_shm_mq</filename> contain working examples\n+ that demonstrates some useful techniques.\n </para>\n\nThat is missing a noun in my opinion, I would prefer:\n\n The modules ... 
contain working examples ...\n\n+ <sect1 id=\"bgworker-signals\" xreflabel=\"Signal Handling in Background Workers\">\n+ <title>Signal Handling in Background Workers</title>\n\nIt is not a good idea to start a section in the middle of a documentation page.\nThat will lead to a weird TOC entry at the top of the page.\n\nThe better way to do this is to write a short introductory header and convert\nmost of the first half of the page into another section, so that we end up\nwith a page the has the introductory material and two TOC entries for the details.\n\n+ The default signal handlers installed for background workers <emphasis>do\n+ not interrupt queries or blocking calls into other postgres code</emphasis>\n\n<productname>PostgreSQL</productname>, not \"postgres\".\nAlso, there is a comma missing at the end of the line.\n\n+ so they are only suitable for simple background workers that frequently and\n+ predictably return control to their main loop. If your worker uses the\n+ default background worker signal handling it should call\n\nAnother missing comma after \"handling\".\n\n+ <function>HandleMainLoopInterrupts()</function> on each pass through its\n+ main loop.\n+ </para>\n+\n+ <para>\n+ Background workers that may make blocking calls into core PostgreSQL code\n+ and/or run user-supplied queries should generally replace the default bgworker\n\nPlease stick with \"background worker\", \"bgworker\" is too sloppy IMO.\n\n+ signal handlers with the handlers used for normal user backends. This will\n+ ensure that the worker will respond in a timely manner to a termination\n+ request, query cancel request, recovery conflict interrupt, deadlock detector\n+ request, etc. 
To install these handlers, before unblocking interrupts\n+ run:\n\nThe following would be more grammatical:\n\n To install these handlers, run the following before unblocking interrupts:\n\n+ Then ensure that your main loop and any other code that could run for some\n+ time contains <function>CHECK_FOR_INTERRUPTS();</function> calls. A\n+ background worker using these signal handlers must use <link\n+ linkend=\"xfunc-resource-management\">PostgreSQL's resource management APIs\n+ and callbacks</link> to handle cleanup rather than relying on control\n+ returning to the main loop because the signal handlers may call\n\nThere should be a comma before \"because\".\n\n+ <function>proc_exit()</function> directly. This is recommended practice\n+ for all types of extension code anyway.\n+ </para>\n+\n+ <para>\n+ See the comments in <filename>src/include/miscadmin.h</filename> in the\n+ postgres headers for more details on signal handling.\n+ </para>\n\n\"in the postgres headers\" is redundant - at any rate, it should be \"PostgreSQL\".\n\n+ Do not attempt to use C++ exceptions or Windows Structured Exception\n+ Handling, and never call <function>exit()</function> directly.\n\nI am alright with this addition, but I think it would be good to link to\n<xref linkend=\"extend-cpp\"/> from it.\n\n+ <listitem id=\"xfunc-threading\">\n+ <para>\n+ Individual PostgreSQL backends are single-threaded.\n+ Never call any PostgreSQL function or access any PostgreSQL-managed data\n+ structure from a thread other than the main\n\n\"PostgreSQL\" should always have the <productname> tag.\nThis applies to a lot of places in this patch.\n\n+ thread. If at all possible your extension should not start any threads\n\nComma after \"possible\".\n\n+ and should only use the main thread. PostgreSQL generally uses subprocesses\n\nHm. 
If the extension does not start threads, it automatically uses the main thread.\nI think that should be removed for clarity.\n\n+ that coordinate over shared memory instead of threads - see\n+ <xref linkend=\"bgworker\"/>.\n\nIt also uses signals and light-weight locks - but I think that you don't need to\ndescribe the coordination mechanisms here, which are explained in the link you added.\n\n+ primitives like <function>WaitEventSetWait</function> where necessary. Any\n+ potentially long-running loop should periodically call <function>\n+ CHECK_FOR_INTERRUPTS()</function> to give PostgreSQL a chance to interrupt\n+ the function in case of a shutdown request, query cancel, etc. This means\n\nAre there other causes than shutdown or query cancellation?\nAt any rate, I am not fond of enumerations ending with \"etc\".\n\n+ you should <function>sleep()</function> or <function>usleep()</function>\n\nYou mean: \"you should *not* use sleep()\"\n\n+ for any nontrivial amount of time - use <function>WaitLatch()</function>\n\n\"&mdash;\" would be better than \"-\".\n\n+ or its variants instead. For details on interrupt handling see\n\nComma after \"handling\".\n\n[...]\n+ based cleanup. Your extension function could be terminated mid-execution\n\n... could be terminated *in* mid-execution ...\n\n+ by PostgreSQL if any function that it calls makes a\n+ <function>CHECK_FOR_INTERRUPTS()</function> check. It may not\n\n\"makes\" sound kind of clumsy in my ears.\n\n+ Spinlocks, latches, condition variables, and more. Details on all of these\n+ is far outside the scope of this document, and the best reference is\n+ the relevant source code.\n\nI don't think we have to add that last sentence. That holds for pretty much\neverything in this documentation.\n\n+ <para>\n+ Sometimes relation-based state management for extensions is not\n+ sufficient to meet their needs. 
In that case the extension may need to\n\nBetter:\n Sometimes, relation-based state management is not sufficient to meet the\n needs of an extension.\n\n+ use PostgreSQL's shared-memory based inter-process communication\n+ features, and should certainly do so instead of inventing its own or\n+ trying to use platform level features. An extension may use\n+ <link linkend=\"xfunc-shared-addin\">\"raw\" shared memory requested from\n+ the postmaster at startup</link> or higher level features like dynamic\n+ shared memory segments (<acronym>DSM</acronym>),\n+ dynamic shared areas (<acronym>DSA</acronym>),\n+ or shared memory message queues (<acronym>shm_mq</acronym>). Examples\n+ of the use of some these features can be found in the\n+ <filename>src/test/modules/test_shm_mq/</filename> example extension. Others\n+ can be found in various main PostgreSQL backend code.\n+ </para>\n\nInstead of the last sentence, I'd prefer\n... or other parts of the source code.\n\n+ <listitem id=\"xfunc-relcache-syscache\">\n+ <para>\n+ Look up system catalogs and table information using the relcache and syscache\n\nHow about \"table metadata\" rather than \"table information\"?\n\n+ APIs (<function>SearchSysCache...</function>,\n+ <function>relation_open()</function>, etc) rather than attempting to run\n+ SQL queries to fetch the information. Ensure that your function holds\n+ any necessary locks on the target objects. Take care not to make any calls\n\n... holds *the* necessary locks ...\n\n+ that could trigger cache invalidations while still accessing any\n+ syscache cache entries, as this can cause subtle bugs. See\n\nSubtle? 
Perhaps \"bugs that are hard to find\".\n\n+ <filename>src/backend/utils/cache/syscache.c</filename>,\n+ <filename>src/backend/utils/cache/relcache.c</filename>,\n+ <filename>src/backend/access/common/relation.c</filename> and their\n+ headers for details.\n+ </para>\n+ </listitem>\n\n\nAttached is a new version that has my suggested changes, plus a few from\nBharath Rupireddy (I do not agree with many of his suggestions).\n\nYours,\nLaurenz Albe", "msg_date": "Sun, 30 May 2021 13:19:55 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] More docs on what to do and not do in extension code" }, { "msg_contents": "Laurenz,\n\nThanks for your comments. Sorry it's taken me so long to get back to you.\nCommenting inline below on anything I think needs comment; other proposed\nchanges look good.\n\nOn Sun, 30 May 2021 at 19:20, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> + always use <link\n> linkend=\"xfunc-sleeping-interrupts-blocking\">PostgreSQL's\n> + interupt-aware APIs</link> for the purpose. Do not\n> <function>usleep()</function>,\n> + <function>system()</function>, make blocking system calls, etc.\n> + </para>\n>\n> \"system\" is not a verb.\n>\n\nWhen it's a function call it is, but I'm fine with your revision:\n\n Do not use <function>usleep()</function>, <function>system()</function>\n> or other blocking system calls.\n>\n> + and should only use the main thread. PostgreSQL generally uses\n> subprocesses\n>\n> Hm. If the extension does not start threads, it automatically uses the\n> main thread.\n> I think that should be removed for clarity.\n>\n\nIIRC I intended that to apply to the section that tries to say how to\nsurvive running your own threads in the backend if you really must do so.\n\n+ primitives like <function>WaitEventSetWait</function> where\n> necessary. 
Any\n> + potentially long-running loop should periodically call <function>\n> + CHECK_FOR_INTERRUPTS()</function> to give PostgreSQL a chance to\n> interrupt\n> + the function in case of a shutdown request, query cancel, etc.\n> This means\n>\n> Are there other causes than shutdown or query cancellation?\n> At any rate, I am not fond of enumerations ending with \"etc\".\n>\n\nI guess. I wanted to emphasise that if you mess this up postgres might fail\nto shut down or your backend might fail to respond to SIGTERM /\npg_terminate_backend, as those are the most commonly reported symptoms when\nsuch issues are encountered.\n\n\n+ by PostgreSQL if any function that it calls makes a\n> + <function>CHECK_FOR_INTERRUPTS()</function> check. It may not\n>\n> \"makes\" sound kind of clumsy in my ears.\n>\n\nYeah. I didn't come up with anything better right away but will look when I\nget the chance to return to this patch.\n\n\n> Attached is a new version that has my suggested changes, plus a few from\n> Bharath Rupireddy (I do not agree with many of his suggestions).\n>\n\nThanks very much. I will try to return to this soon and review the diff\nthen rebase and update the patch.\n\nI have a large backlog to get through, and I've recently had the pleasure\nof having to work on windows/powershell build system stuff, so it may still\ntake me a while.\n\n", "msg_date": "Tue, 29 Jun 2021 13:30:45 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] More docs on what to do and not do in extension code" }, { "msg_contents": "On Tue, 29 Jun 2021 at 13:30, Craig Ringer <craig.ringer@enterprisedb.com>\nwrote:\n\n> Laurenz,\n>\n> Thanks for your comments. Sorry it's taken me so long to get back to you.\n> Commenting inline below on anything I think needs comment; other proposed\n> changes look good.\n>\n\nI'm not going to get back to this anytime soon.\n\nIf anybody wants to pick it up that's great. Otherwise at least it's there\nin the mailing lists to search.\n\n", "msg_date": "Mon, 30 Aug 2021 10:20:46 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] More docs on what to do and not do in extension code" }, { "msg_contents": "> On 30 Aug 2021, at 04:20, Craig Ringer <craig.ringer@enterprisedb.com> wrote:\n> \n> On Tue, 29 Jun 2021 at 13:30, Craig Ringer <craig.ringer@enterprisedb.com <mailto:craig.ringer@enterprisedb.com>> wrote:\n> Laurenz,\n> \n> Thanks for your comments. Sorry it's taken me so long to get back to you. 
Commenting inline below on anything I think needs comment; other proposed changes look good.\n> \n> I'm not going to get back to this anytime soon.\n> \n> If anybody wants to pick it up that's great. Otherwise at least it's there in the mailing lists to search.\n\nI'm marking this returned with feedback for now, please open a new entry when\nthere is an updated patch.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 4 Nov 2021 11:55:22 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] More docs on what to do and not do in extension code" } ]
[ { "msg_contents": "Hi folks\n\nA few times lately I've been doing things in extensions that've made me\nwant to be able to run my own code whenever InterruptPending is true and\nCHECK_FOR_INTERRUPTS() calls ProcessInterrupts()\n\nSo here's a simple patch to add ProcessInterrupts_hook. It follows the\nusual pattern like ProcessUtility_hook and standard_ProcessUtility.\n\nWhy? Because sometimes I want most of the behaviour of die(), but the\noption to override it with some bgworker-specific choices occasionally.\nHOLD_INTERRUPTS() is too big a hammer.\n\nWhat I really want to go along with this is a way for any backend to\nobserve the postmaster's pmState and its \"Shutdown\" variable's value, so\nany backend can tell if we're in FastShutdown, SmartShutdown, etc. Copies\nin shmem only obviously. But I'm not convinced it's right to just copy\nthese vars as-is to shmem, and I don't want to use the memory for a\nProcSignal slot for something that won't be relevant for most backends for\nmost of the postmaster lifetime. Ideas welcomed.", "msg_date": "Mon, 18 Jan 2021 15:59:40 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "[PATCH] ProcessInterrupts_hook" }, { "msg_contents": "On Mon, Jan 18, 2021 at 3:00 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n> A few times lately I've been doing things in extensions that've made me want to be able to run my own code whenever InterruptPending is true and CHECK_FOR_INTERRUPTS() calls ProcessInterrupts()\n\nI've wanted this in the past, too, so +1 from me.\n\n> What I really want to go along with this is a way for any backend to observe the postmaster's pmState and its \"Shutdown\" variable's value, so any backend can tell if we're in FastShutdown, SmartShutdown, etc. Copies in shmem only obviously. 
But I'm not convinced it's right to just copy these vars as-is to shmem, and I don't want to use the memory for a ProcSignal slot for something that won't be relevant for most backends for most of the postmaster lifetime. Ideas welcomed.\n\nI've wanted something along this line, too, but what I was thinking\nabout was more along the lines of having the postmaster signal the\nbackends when a smart shutdown happened. After all when a fast\nshutdown happens the backends already get told to terminate, and that\nseems like it ought to be enough: I'm not sure backends have any\nbusiness caring about why they are being asked to terminate. But they\nmight well want to know whether a smart shutdown is in progress, and\nright now there's no way for them to know that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Jan 2021 08:50:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] ProcessInterrupts_hook" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jan 18, 2021 at 3:00 AM Craig Ringer\n> <craig.ringer@enterprisedb.com> wrote:\n>> A few times lately I've been doing things in extensions that've made me want to be able to run my own code whenever InterruptPending is true and CHECK_FOR_INTERRUPTS() calls ProcessInterrupts()\n\n> I've wanted this in the past, too, so +1 from me.\n\nI dunno, this seems pretty scary and easily abusable. 
There's not all\nthat much that can be done safely in ProcessInterrupts(), and we should\nnot be encouraging extensions to think they can add random processing\nthere.\n\n>> What I really want to go along with this is a way for any backend to observe the postmaster's pmState and its \"Shutdown\" variable's value, so any backend can tell if we're in FastShutdown, SmartShutdown, etc.\n\n> I've wanted something along this line, too, but what I was thinking\n> about was more along the lines of having the postmaster signal the\n> backends when a smart shutdown happened.\n\nWe're about halfway there already, see 7e784d1dc. I didn't do the\nother half because it wasn't necessary to the problem, but exposing\nthe shutdown state more fully seems reasonable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Jan 2021 11:56:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] ProcessInterrupts_hook" }, { "msg_contents": "On Mon, Jan 18, 2021 at 11:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I've wanted this in the past, too, so +1 from me.\n>\n> I dunno, this seems pretty scary and easily abusable. There's not all\n> that much that can be done safely in ProcessInterrupts(), and we should\n> not be encouraging extensions to think they can add random processing\n> there.\n\nWe've had this disagreement before about other things, and I just\ndon't agree. If somebody uses a hook for something wildly unsafe, that\nwill break their stuff, not ours. That's not to say I endorse adding\nhooks for random purposes in random places. In particular, if it's\nimpossible to use a particular hook in a reasonably safe way, that's a\nsign that the hook is badly-designed and that we should not have it.\nBut, that's not the case here. I care more about smart extension\nauthors being able to do useful things than I do about the possibility\nthat dumb extension authors will do stupid things. 
We can't really\nprevent the latter anyway: this is open source.\n\n> We're about halfway there already, see 7e784d1dc. I didn't do the\n> other half because it wasn't necessary to the problem, but exposing\n> the shutdown state more fully seems reasonable.\n\nAh, I hadn't realized.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Jan 2021 13:00:55 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] ProcessInterrupts_hook" }, { "msg_contents": "On Tue, 19 Jan 2021, 02:01 Robert Haas, <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jan 18, 2021 at 11:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > I've wanted this in the past, too, so +1 from me.\n> >\n> > I dunno, this seems pretty scary and easily abusable. There's not all\n> > that much that can be done safely in ProcessInterrupts(), and we should\n> > not be encouraging extensions to think they can add random processing\n> > there.\n>\n> We've had this disagreement before about other things, and I just\n> don't agree. If somebody uses a hook for something wildly unsafe, that\n> will break their stuff, not ours.\n\n\nGenerally yeah.\n\nAnd we have no shortage of hooks with plenty of error or abuse potential\nand few safeguards already. I'd argue that in C code any external code is\ninherently unsafe anyway. So it's mainly down to whether the hook actively\nencourages unsafe actions without providing commensurate benefits, and\nwhether there's a better/safer way to achieve the same thing.\n\n> That's not to say I endorse adding\n> hooks for random purposes in random places. In particular, if it's\n> impossible to use a particular hook in a reasonably safe way, that's a\n> sign that the hook is badly-designed and that we should not have it.\n>\n\nYep. 
Agreed.\n\nAny hook is possible to abuse or write incorrectly, from simple fmgr\nloadable functions right on up.\n\nThe argument that a hook could be abused would apply just as well to\nexposing pqsignal() itself to extensions. Probably more so. Also to\nanything like ProcessUtility_hook.\n\n\n> > We're about halfway there already, see 7e784d1dc. I didn't do the\n> > other half because it wasn't necessary to the problem, but exposing\n> > the shutdown state more fully seems reasonable.\n>\n\nExcellent, I'll take a look. Thanks.\n", "msg_date": "Tue, 19 Jan 2021 12:44:10 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] ProcessInterrupts_hook" }, { "msg_contents": "On Tue, 19 Jan 2021 at 12:44, Craig Ringer <craig.ringer@enterprisedb.com>\nwrote:\n\n>\n> > We're about halfway there already, see 7e784d1dc. I didn't do the\n>> > other half because it wasn't necessary to the problem, but exposing\n>> > the shutdown state more fully seems reasonable.\n>>\n>\n> Excellent, I'll take a look. Thanks.\n>\n\nThat looks very handy already.\n\nExtending it to be set before SIGTERM too would be handy.\n\nMy suggestion, which I'm happy to post in patch form if you think it's\nreasonable:\n\n* Change QuitSignalReason to ExitSignalReason to cover both SIGTERM (fast)\nand SIGQUIT (immediate)\n\n* Rename PMQUIT_FOR_STOP to PMEXIT_IMMEDIATE_SHUTDOWN\n\n* Add enumeration values PMEXIT_SMART_SHUTDOWN and PMEXIT_FAST_SHUTDOWN\n\n* For a fast shutdown, in pmdie()'s SIGINT case call\nSetExitSignalReason(PMEXIT_FAST_SHUTDOWN), so that when\nPostmasterStateMachine() calls SignalSomeChildren(SIGTERM, ...) in response\nto PM_STOP_BACKENDS, the reason is already available.\n\n* For smart shutdown, in pmdie()'s SIGTERM case call\nSetExitSignalReason(PMEXIT_SMART_SHUTDOWN) and set the latch of every live\nbackend. There isn't any appropriate PROCSIG so unless we want to overload\nPROCSIG_WALSND_INIT_STOPPING (ick), but I think it'd generally be\nsufficient to check GetExitSignalReason() in backend main loops.\n\nThe fast shutdown case seems like a no-brainer extension of your existing\npatch.\n\nI'm not entirely sure about the smart shutdown case. 
I don't want to add a\nProcSignal slot just for this and the info isn't urgent anyway. I think\nthat by checking for postmaster shutdown in the backend main loop we'd be\nable to support eager termination of idle backends on smart shutdown\n(immediately, or after an idle grace period), which is something I've\nwanted for quite some time. It shouldn't be significantly expensive\nespecially in the idle loop.\n\nThoughts?\n\n(Also I want a hook in PostgresMain's idle loop for things like this).\n\n", "msg_date": "Tue, 19 Jan 2021 14:42:31 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] ProcessInterrupts_hook" }, { "msg_contents": "On 1/19/21 1:42 AM, Craig Ringer wrote:\n> On Tue, 19 Jan 2021 at 12:44, Craig Ringer \n> <craig.ringer@enterprisedb.com <mailto:craig.ringer@enterprisedb.com>> \n> wrote:\n> \n> > We're about halfway there already, see 7e784d1dc. I didn't\n> do the\n> > other half because it wasn't necessary to the problem, but\n> exposing\n> > the shutdown state more fully seems reasonable.\n> \n> Excellent, I'll take a look. Thanks.\n> \n> That looks very handy already.\n> \n> Extending it to be set before SIGTERM too would be handy.\n> \n> My suggestion, which I'm happy to post in patch form if you think it's \n> reasonable <snip>\n\nTom, Robert, and thoughts on the proposals in [1]?\n\nCraig, based on the state of this proposal (i.e. 
likely not a candidate \nfor PG14) I think it makes sense to move it to the next CF.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://www.postgresql.org/message-id/CAGRY4nyNfscmQiZBCNT7cBYnQxJLAAVCGz%2BGZAQDAco1Fbb01w%40mail.gmail.com\n\n\n", "msg_date": "Fri, 19 Mar 2021 11:27:50 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] ProcessInterrupts_hook" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> On 1/19/21 1:42 AM, Craig Ringer wrote:\n>> My suggestion, which I'm happy to post in patch form if you think it's \n>> reasonable <snip>\n\n> Tom, Robert, and thoughts on the proposals in [1]?\n> https://www.postgresql.org/message-id/CAGRY4nyNfscmQiZBCNT7cBYnQxJLAAVCGz%2BGZAQDAco1Fbb01w%40mail.gmail.com\n\nNo objection to generalizing the state passed through pmsignal.c.\n\nI'm not very comfortable about the idea of having the postmaster set\nchild processes' latches ... that doesn't sound terribly safe from the\nstandpoint of not allowing the postmaster to mess with shared memory\nstate that could cause it to block or crash. 
If we already do that\nelsewhere, then OK, but I don't think we do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Mar 2021 15:25:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] ProcessInterrupts_hook" }, { "msg_contents": "On Fri, Mar 19, 2021 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> David Steele <david@pgmasters.net> writes:\n> > On 1/19/21 1:42 AM, Craig Ringer wrote:\n> >> My suggestion, which I'm happy to post in patch form if you think it's\n> >> reasonable <snip>\n>\n> > Tom, Robert, and thoughts on the proposals in [1]?\n> > https://www.postgresql.org/message-id/CAGRY4nyNfscmQiZBCNT7cBYnQxJLAAVCGz%2BGZAQDAco1Fbb01w%40mail.gmail.com\n>\n> No objection to generalizing the state passed through pmsignal.c.\n>\n> I'm not very comfortable about the idea of having the postmaster set\n> child processes' latches ... that doesn't sound terribly safe from the\n> standpoint of not allowing the postmaster to mess with shared memory\n> state that could cause it to block or crash. If we already do that\n> elsewhere, then OK, but I don't think we do.\n\nIt should be unnecessary anyway. We changed it a while back to make\nany SIGUSR1 set the latch ....\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Mar 2021 15:43:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] ProcessInterrupts_hook" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Mar 19, 2021 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm not very comfortable about the idea of having the postmaster set\n>> child processes' latches ... that doesn't sound terribly safe from the\n>> standpoint of not allowing the postmaster to mess with shared memory\n>> state that could cause it to block or crash. If we already do that\n>> elsewhere, then OK, but I don't think we do.\n\n> It should be unnecessary anyway. 
We changed it a while back to make\n> any SIGUSR1 set the latch ....\n\nHmm, so the postmaster could send SIGUSR1 without setting any particular\npmsignal reason?  Yeah, I suppose that could work.  Or we could recast\nthis as being a new pmsignal reason.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Mar 2021 15:46:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] ProcessInterrupts_hook" }, { "msg_contents": "On Sat, 20 Mar 2021 at 03:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Mar 19, 2021 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I'm not very comfortable about the idea of having the postmaster set\n> >> child processes' latches ... that doesn't sound terribly safe from the\n> >> standpoint of not allowing the postmaster to mess with shared memory\n> >> state that could cause it to block or crash.  If we already do that\n> >> elsewhere, then OK, but I don't think we do.\n>\n> > It should be unnecessary anyway. We changed it a while back to make\n> > any SIGUSR1 set the latch ....\n>\n> Hmm, so the postmaster could send SIGUSR1 without setting any particular\n> pmsignal reason?  Yeah, I suppose that could work.  Or we could recast\n> this as being a new pmsignal reason.\n>\n\nI'd be fine with either way.\n\nI don't expect to be able to get to working on a concrete patch for this\nany time soon, so I'll be leaving it here unless someone else needs to pick\nit up for their extension work. The in-principle agreement is there for\nfuture work anyway.\n", "msg_date": "Tue, 29 Jun 2021 13:32:26 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] ProcessInterrupts_hook" }, { "msg_contents": "On Tue, Jun 29, 2021 at 01:32:26PM +0800, Craig Ringer wrote:\n> On Sat, 20 Mar 2021 at 03:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > On Fri, Mar 19, 2021 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> I'm not very comfortable about the idea of having the postmaster set\n> > >> child processes' latches ... that doesn't sound terribly safe from the\n> > >> standpoint of not allowing the postmaster to mess with shared memory\n> > >> state that could cause it to block or crash. If we already do that\n> > >> elsewhere, then OK, but I don't think we do.\n> >\n> > > It should be unnecessary anyway. We changed it a while back to make\n> > > any SIGUSR1 set the latch ....\n> >\n> > Hmm, so the postmaster could send SIGUSR1 without setting any particular\n> > pmsignal reason? Yeah, I suppose that could work. 
Or we could recast\n> > this as being a new pmsignal reason.\n> >\n> \n> I'd be fine with either way.\n> \n> I don't expect to be able to get to working on a concrete patch for this\n> any time soon, so I'll be leaving it here unless someone else needs to pick\n> it up for their extension work. The in-principle agreement is there for\n> future work anyway.\n\nHi Craig,\n\nThere is still a CF entry for this. Should we close it as withdrawn? or\nmaybe RwF?\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Fri, 1 Oct 2021 12:24:13 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: [PATCH] ProcessInterrupts_hook" }, { "msg_contents": "On Sat, 2 Oct 2021 at 01:24, Jaime Casanova <jcasanov@systemguards.com.ec>\nwrote:\n\n> On Tue, Jun 29, 2021 at 01:32:26PM +0800, Craig Ringer wrote:\n> > On Sat, 20 Mar 2021 at 03:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > > Robert Haas <robertmhaas@gmail.com> writes:\n> > > > On Fri, Mar 19, 2021 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >> I'm not very comfortable about the idea of having the postmaster set\n> > > >> child processes' latches ... that doesn't sound terribly safe from\n> the\n> > > >> standpoint of not allowing the postmaster to mess with shared memory\n> > > >> state that could cause it to block or crash. If we already do that\n> > > >> elsewhere, then OK, but I don't think we do.\n> > >\n> > > > It should be unnecessary anyway. We changed it a while back to make\n> > > > any SIGUSR1 set the latch ....\n> > >\n> > > Hmm, so the postmaster could send SIGUSR1 without setting any\n> particular\n> > > pmsignal reason? Yeah, I suppose that could work. 
Or we could recast\n> > > this as being a new pmsignal reason.\n> > >\n> >\n> > I'd be fine with either way.\n> >\n> > I don't expect to be able to get to working on a concrete patch for this\n> > any time soon, so I'll be leaving it here unless someone else needs to\n> pick\n> > it up for their extension work. The in-principle agreement is there for\n> > future work anyway.\n>\n> Hi Craig,\n>\n> There is still a CF entry for this. Should we close it as withdrawn? or\n> maybe RwF?\n>\n\nI'm not going to get time for it now, so I think marking it withdrawn is\nreasonable.\n\nI think it's well worth doing and Tom seems to think it's not a crazy idea,\nbut I'm no longer working on the software that needed it, and don't see a\nlot of other people calling for it, so it can wait until someone else needs\nit.\n", "msg_date": "Tue, 12 Oct 2021 09:52:50 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] ProcessInterrupts_hook" } ]
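The mechanism the thread above converges on — any SIGUSR1 delivered to a backend sets its process latch, so the postmaster can wake a child without recording a specific pmsignal reason first — can be sketched outside PostgreSQL in a few lines. This is a toy model on a POSIX platform, not PostgreSQL's latch or pmsignal code; the names `latch_is_set` and `reasons` are invented stand-ins.

```python
import os
import signal

# Toy stand-ins for a backend's process latch and the pmsignal "reason" flags.
latch_is_set = False
reasons = set()


def handle_sigusr1(signum, frame):
    # Mirrors the behavior discussed: the SIGUSR1 handler sets the latch
    # unconditionally, whether or not any reason flag was recorded first.
    global latch_is_set
    latch_is_set = True


signal.signal(signal.SIGUSR1, handle_sigusr1)

# "Postmaster" wakes the "child" without setting any particular reason:
os.kill(os.getpid(), signal.SIGUSR1)

# CPython runs the pending Python-level handler before the next statement,
# so the wakeup is visible here even though `reasons` is still empty.
print("woken:", latch_is_set, "reasons recorded:", len(reasons))
```

Recasting the wakeup as a new pmsignal reason, as Tom suggests, would amount to adding an entry to `reasons` before the kill; either way the latch wakes the process.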
[ { "msg_contents": "Hi hackers,\n\nWhile working with cursors that reference plans with CustomScanStates \nnodes, I encountered a segfault which originates from \nsearch_plan_tree(). The query plan is the result of a simple SELECT \nstatement into which I inject a Custom Scan node at the root to do some \npost-processing before returning rows. This plan is referenced by a \nsecond plan with a Tid Scan which originates from a query of the form \nDELETE FROM foo WHERE CURRENT OF my_cursor;\n\nsearch_plan_tree() assumes that \nCustomScanState::ScanState::ss_currentRelation is never NULL. In my \nunderstanding that only holds for CustomScanState nodes which are at the \nbottom of the plan and actually read from a relation. CustomScanState \nnodes which are not at the bottom don't have ss_currentRelation set. I \nbelieve for such nodes, instead search_plan_tree() should recurse into \nCustomScanState::custom_ps.\n\nI attached a patch. Any thoughts?\n\nBest regards,\nDavid\nSwarm64", "msg_date": "Mon, 18 Jan 2021 11:43:30 +0100", "msg_from": "David Geier <david@swarm64.com>", "msg_from_op": true, "msg_subject": "search_plan_tree(): handling of non-leaf CustomScanState nodes causes\n segfault" }, { "msg_contents": "On Mon, Jan 18, 2021 at 4:13 PM David Geier <david@swarm64.com> wrote:\n>\n> Hi hackers,\n>\n> While working with cursors that reference plans with CustomScanStates\n> nodes, I encountered a segfault which originates from\n> search_plan_tree(). The query plan is the result of a simple SELECT\n> statement into which I inject a Custom Scan node at the root to do some\n> post-processing before returning rows. This plan is referenced by a\n> second plan with a Tid Scan which originates from a query of the form\n> DELETE FROM foo WHERE CURRENT OF my_cursor;\n>\n> search_plan_tree() assumes that\n> CustomScanState::ScanState::ss_currentRelation is never NULL. 
In my\n> understanding that only holds for CustomScanState nodes which are at the\n> bottom of the plan and actually read from a relation. CustomScanState\n> nodes which are not at the bottom don't have ss_currentRelation set. I\n> believe for such nodes, instead search_plan_tree() should recurse into\n> CustomScanState::custom_ps.\n>\n> I attached a patch. Any thoughts?\n\nI don't have any comments about your patch as such, but ForeignScan is\nsimilar to CustomScan. ForeignScan also can leave ss_currentRelation\nNULL if it represents a join between two foreign tables. So either\nForeignScan has the same problem as CustomScan (it's just above the\nCustomScan case in search_plan_tree()) or it's handling it in some\nother way. In the first case we may want to fix that too in the same\nmanner (not necessarily in the same patch) and in the latter case\nCustomScan can handle it the same way.\n\nSaid that, I didn't notice any field in ForeignScan which is parallel\nto custom_ps, so what you are proposing is still needed.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 18 Jan 2021 17:13:28 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: search_plan_tree(): handling of non-leaf CustomScanState nodes\n causes segfault" }, { "msg_contents": "Hi,\n\n+ * Custom scan nodes can be leaf nodes or inner nodes and\ntherefore need special treatment.\n\nThe special treatment applies to inner nodes. The above should be better\nphrased to clarify.\n\nCheers\n\nOn Mon, Jan 18, 2021 at 2:43 AM David Geier <david@swarm64.com> wrote:\n\n> Hi hackers,\n>\n> While working with cursors that reference plans with CustomScanStates\n> nodes, I encountered a segfault which originates from\n> search_plan_tree(). The query plan is the result of a simple SELECT\n> statement into which I inject a Custom Scan node at the root to do some\n> post-processing before returning rows. 
This plan is referenced by a\n> second plan with a Tid Scan which originates from a query of the form\n> DELETE FROM foo WHERE CURRENT OF my_cursor;\n>\n> search_plan_tree() assumes that\n> CustomScanState::ScanState::ss_currentRelation is never NULL. In my\n> understanding that only holds for CustomScanState nodes which are at the\n> bottom of the plan and actually read from a relation. CustomScanState\n> nodes which are not at the bottom don't have ss_currentRelation set. I\n> believe for such nodes, instead search_plan_tree() should recurse into\n> CustomScanState::custom_ps.\n>\n> I attached a patch. Any thoughts?\n>\n> Best regards,\n> David\n> Swarm64\n>\n>\n", "msg_date": "Mon, 18 Jan 2021 10:08:32 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: search_plan_tree(): handling of non-leaf CustomScanState nodes\n causes segfault" }, { "msg_contents": "David Geier <david@swarm64.com> writes:\n> search_plan_tree() assumes that \n> CustomScanState::ScanState::ss_currentRelation is never NULL. In my \n> understanding that only holds for CustomScanState nodes which are at the \n> bottom of the plan and actually read from a relation. CustomScanState \n> nodes which are not at the bottom don't have ss_currentRelation set. I \n> believe for such nodes, instead search_plan_tree() should recurse into \n> CustomScanState::custom_ps.\n\nHm.  I agree that we shouldn't simply assume that ss_currentRelation\nisn't null.  However, we cannot make search_plan_tree() descend\nthrough non-leaf CustomScan nodes, because we don't know what processing\nis involved there.  We need to find a scan that is guaranteed to return\nrows that are one-to-one with the cursor output. 
This is why the function\ndoesn't descend through join or aggregation nodes, and I see no argument\nby which we should assume we know more about what a customscan node will\ndo than we know about those.\n\nSo I'm inclined to think a suitable fix is just\n\n- if (RelationGetRelid(sstate->ss_currentRelation) == table_oid)\n+ if (sstate->ss_currentRelation &&\n+ RelationGetRelid(sstate->ss_currentRelation) == table_oid)\n result = sstate;\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Jan 2021 13:46:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: search_plan_tree(): handling of non-leaf CustomScanState nodes\n causes segfault" }, { "msg_contents": "Hi,\nIt seems sstate->ss_currentRelation being null can only\noccur for T_ForeignScanState and T_CustomScanState.\n\nWhat about the following change ?\n\nCheers\n\ndiff --git a/src/backend/executor/execCurrent.c\nb/src/backend/executor/execCurrent.c\nindex 0852bb9cec..56e31951d1 100644\n--- a/src/backend/executor/execCurrent.c\n+++ b/src/backend/executor/execCurrent.c\n@@ -325,12 +325,21 @@ search_plan_tree(PlanState *node, Oid table_oid,\n case T_IndexOnlyScanState:\n case T_BitmapHeapScanState:\n case T_TidScanState:\n+ {\n+ ScanState *sstate = (ScanState *) node;\n+\n+ if (RelationGetRelid(sstate->ss_currentRelation) ==\ntable_oid)\n+ result = sstate;\n+ break;\n+ }\n+\n case T_ForeignScanState:\n case T_CustomScanState:\n {\n ScanState *sstate = (ScanState *) node;\n\n- if (RelationGetRelid(sstate->ss_currentRelation) ==\ntable_oid)\n+ if (sstate->ss_currentRelation &&\n+ RelationGetRelid(sstate->ss_currentRelation) ==\ntable_oid)\n result = sstate;\n break;\n }\n\nOn Mon, Jan 18, 2021 at 10:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> David Geier <david@swarm64.com> writes:\n> > search_plan_tree() assumes that\n> > CustomScanState::ScanState::ss_currentRelation is never NULL. 
In my\n> > understanding that only holds for CustomScanState nodes which are at the\n> > bottom of the plan and actually read from a relation. CustomScanState\n> > nodes which are not at the bottom don't have ss_currentRelation set. I\n> > believe for such nodes, instead search_plan_tree() should recurse into\n> > CustomScanState::custom_ps.\n>\n> Hm. I agree that we shouldn't simply assume that ss_currentRelation\n> isn't null. However, we cannot make search_plan_tree() descend\n> through non-leaf CustomScan nodes, because we don't know what processing\n> is involved there. We need to find a scan that is guaranteed to return\n> rows that are one-to-one with the cursor output. This is why the function\n> doesn't descend through join or aggregation nodes, and I see no argument\n> by which we should assume we know more about what a customscan node will\n> do than we know about those.\n>\n> So I'm inclined to think a suitable fix is just\n>\n> -               if (RelationGetRelid(sstate->ss_currentRelation) ==\n> table_oid)\n> +               if (sstate->ss_currentRelation &&\n> +                   RelationGetRelid(sstate->ss_currentRelation) ==\n> table_oid)\n>                     result = sstate;\n>\n>                         regards, tom lane\n>\n>\n>\n", "msg_date": "Mon, 18 Jan 2021 11:23:42 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: search_plan_tree(): handling of non-leaf CustomScanState nodes\n causes segfault" }, { "msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> It seems sstate->ss_currentRelation being null can only\n> occur for T_ForeignScanState and T_CustomScanState.\n> What about the following change ?\n\nSeems like more code for no very good reason.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Jan 2021 15:15:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: search_plan_tree(): handling of non-leaf CustomScanState nodes\n causes segfault" }, { "msg_contents": "Hi, Tom:\nI was thinking that, if sstate->ss_currentRelation is null for the other\ncases, that would be a bug.\nAn assertion can be added for the cases ending with T_TidScanState.\nThough, the null sstate->ss_currentRelation would surface immediately\n(apart from assertion). 
So I omitted the assertion in the diff.\n\nCheers\n\nOn Mon, Jan 18, 2021 at 12:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > It seems sstate->ss_currentRelation being null can only\n> > occur for T_ForeignScanState and T_CustomScanState.\n> > What about the following change ?\n>\n> Seems like more code for no very good reason.\n>\n> regards, tom lane\n>\n", "msg_date": "Mon, 18 Jan 2021 13:09:20 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: search_plan_tree(): handling of non-leaf CustomScanState nodes\n causes segfault" }, { "msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> I was thinking that, if sstate->ss_currentRelation is null for the other\n> cases, that would be a bug.\n> An assertion can be added for the cases ending with T_TidScanState.\n\nMaybe, but there are surely a lot of other places that would crash\nin such a case --- places far more often traversed than search_plan_tree.\nI do not see any value in complicating search_plan_tree for that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Jan 2021 16:13:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: search_plan_tree(): handling of non-leaf CustomScanState nodes\n causes segfault" }, { 
"msg_contents": "Hi,\n\nOn 18.01.21 19:46, Tom Lane wrote:\n> David Geier <david@swarm64.com> writes:\n>> search_plan_tree() assumes that\n>> CustomScanState::ScanState::ss_currentRelation is never NULL. In my\n>> understanding that only holds for CustomScanState nodes which are at the\n>> bottom of the plan and actually read from a relation. CustomScanState\n>> nodes which are not at the bottom don't have ss_currentRelation set. I\n>> believe for such nodes, instead search_plan_tree() should recurse into\n>> CustomScanState::custom_ps.\n> Hm. I agree that we shouldn't simply assume that ss_currentRelation\n> isn't null. However, we cannot make search_plan_tree() descend\n> through non-leaf CustomScan nodes, because we don't know what processing\n> is involved there. We need to find a scan that is guaranteed to return\n> rows that are one-to-one with the cursor output. This is why the function\n> doesn't descend through join or aggregation nodes, and I see no argument\n> by which we should assume we know more about what a customscan node will\n> do than we know about those.\nThat makes sense. Thanks for the explanation.\n>\n> So I'm inclined to think a suitable fix is just\n>\n> - if (RelationGetRelid(sstate->ss_currentRelation) == table_oid)\n> + if (sstate->ss_currentRelation &&\n> + RelationGetRelid(sstate->ss_currentRelation) == table_oid)\n> result = sstate;\n>\n> \t\t\tregards, tom lane\n>\n>\nI updated the patch to match your proposal.\n\nBest regards,\nDavid\nSwarm64", "msg_date": "Mon, 18 Jan 2021 22:32:12 +0100", "msg_from": "David Geier <david@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: search_plan_tree(): handling of non-leaf CustomScanState nodes\n causes segfault" }, { "msg_contents": "David Geier <david@swarm64.com> writes:\n> On 18.01.21 19:46, Tom Lane wrote:\n>> Hm. I agree that we shouldn't simply assume that ss_currentRelation\n>> isn't null. 
However, we cannot make search_plan_tree() descend\n>> through non-leaf CustomScan nodes, because we don't know what processing\n>> is involved there. We need to find a scan that is guaranteed to return\n>> rows that are one-to-one with the cursor output. This is why the function\n>> doesn't descend through join or aggregation nodes, and I see no argument\n>> by which we should assume we know more about what a customscan node will\n>> do than we know about those.\n\n> That makes sense. Thanks for the explanation.\n\nOK, cool. I was afraid you'd argue that you really needed your CustomScan\nnode to be transparent in such cases. We could imagine inventing an\nadditional custom-scan-provider callback to embed the necessary knowledge,\nbut I'd rather not add the complexity until someone has a use-case.\n\n> I updated the patch to match your proposal.\n\nWFM, will push in a bit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Jan 2021 17:42:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: search_plan_tree(): handling of non-leaf CustomScanState nodes\n causes segfault" }, { "msg_contents": "Hi,\n\nOn 18.01.21 23:42, Tom Lane wrote:\n> David Geier<david@swarm64.com> writes:\n>> On 18.01.21 19:46, Tom Lane wrote:\n>>> Hm. I agree that we shouldn't simply assume that ss_currentRelation\n>>> isn't null. However, we cannot make search_plan_tree() descend\n>>> through non-leaf CustomScan nodes, because we don't know what processing\n>>> is involved there. We need to find a scan that is guaranteed to return\n>>> rows that are one-to-one with the cursor output. This is why the function\n>>> doesn't descend through join or aggregation nodes, and I see no argument\n>>> by which we should assume we know more about what a customscan node will\n>>> do than we know about those.\n>> That makes sense. Thanks for the explanation.\n> OK, cool. 
I was afraid you'd argue that you really needed your CustomScan\n> node to be transparent in such cases. We could imagine inventing an\n> additional custom-scan-provider callback to embed the necessary knowledge,\n> but I'd rather not add the complexity until someone has a use-case.\n\nI was thinking about that. Generally, having such possibility would be \nvery useful. Unfortunately, I believe that in my specific case even \nhaving such functionality wouldn't allow for the query to work with \nCURRENT OF, because my CSP behaves similarly to a materialize node.\n\nMy understanding is only such plans are supported which consist of nodes \nthat guarantee that the tuple returned by the plan is the unmodified \ntuple a scan leaf node is currently positioned on?\n\nStill, if there's interest I would be happy to draft a patch. Instead of \na separate CSP callback, we could also provide an additional flag like \nCUSTOMPATH_SUPPORT_CURRENT_OF. The advantage of the callback would be \nthat we could delay the decision until execution time where potentially \nmore information is available.\n>> I updated the patch to match your proposal.\n> WFM, will push in a bit.\n>\n> \t\t\tregards, tom lane\nBest regards,\nDavid\nSwarm64\n\n\n", "msg_date": "Tue, 19 Jan 2021 08:41:20 +0100", "msg_from": "David Geier <david@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: search_plan_tree(): handling of non-leaf CustomScanState nodes\n causes segfault" }, { "msg_contents": "David Geier <david@swarm64.com> writes:\n> On 18.01.21 23:42, Tom Lane wrote:\n>> OK, cool. I was afraid you'd argue that you really needed your CustomScan\n>> node to be transparent in such cases. We could imagine inventing an\n>> additional custom-scan-provider callback to embed the necessary knowledge,\n>> but I'd rather not add the complexity until someone has a use-case.\n\n> I was thinking about that. Generally, having such possibility would be \n> very useful. 
Unfortunately, I believe that in my specific case even \n> having such functionality wouldn't allow for the query to work with \n> CURRENT OF, because my CSP behaves similarly to a materialize node.\n> My understanding is only such plans are supported which consist of nodes \n> that guarantee that the tuple returned by the plan is the unmodified \n> tuple a scan leaf node is currently positioned on?\n\nDoesn't have to be *unmodified* --- a projection is fine, for example.\nBut we have to be sure that the current output tuple of the plan tree\nis based on the current output tuple of the bottom-level table scan\nnode. As an example of the hazards here, it's currently safe for\nsearch_plan_tree to descend through a Limit node, but it did not use to\nbe, because the old implementation of Limit was such that it could return\na different tuple from the one the underlying scan node thinks it is\npositioned on.\n\nAs another example, descending through Append is OK because only one\nof the child scans will be positioned-on-a-tuple at all; the rest\nwill be at EOF or not yet started, so they can't produce a match\nto whatever tuple ID the WHERE CURRENT OF is asking about.\n\nNow that I look at this, I strongly wonder whether whoever added\nMergeAppend support here understood what they were doing. That\nlooks broken, because child nodes will typically be positioned on\ntuples, whether or not the current top-level output came from them.\nSo I fear we could get a false-positive confirmation that some\ntuple matches WHERE CURRENT OF.\n\nAnyway, it seems clearly possible that some nonleaf CustomScans\nwould operate in a manner that would allow descending through them\nwhile others wouldn't. 
But I don't really want to write the docs\nexplaining what a callback for this should do ;-)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 10:19:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: search_plan_tree(): handling of non-leaf CustomScanState nodes\n causes segfault" }, { "msg_contents": "I wrote:\n> Now that I look at this, I strongly wonder whether whoever added\n> MergeAppend support here understood what they were doing. That\n> looks broken, because child nodes will typically be positioned on\n> tuples, whether or not the current top-level output came from them.\n> So I fear we could get a false-positive confirmation that some\n> tuple matches WHERE CURRENT OF.\n\nUrgh, indeed it's buggy. With the attached test script I get\n\n...\nBEGIN\nDECLARE CURSOR\n f1 | f2 \n----+-----\n 1 | one\n(1 row)\n\nUPDATE 1\nUPDATE 1\nUPDATE 1\nCOMMIT\n f1 | f2 \n----+-------------\n 1 | one updated\n(1 row)\n\n f1 | f2 \n----+-------------\n 2 | two updated\n(1 row)\n\n f1 | f2 \n----+---------------\n 3 | three updated\n(1 row)\n\nwhere clearly only the row with f1=1 should have updated\n(and if you leave off ORDER BY, so as to get a Merge not\nMergeAppend plan, indeed only that row updates).\n\nI shall go fix this, and try to improve the evidently-inadequate\ncomments in search_plan_tree.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 19 Jan 2021 11:53:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: search_plan_tree(): handling of non-leaf CustomScanState nodes\n causes segfault" } ]
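Tom's reproducer script was attached to his message above rather than inlined, so only its psql output appears in the text. The following is a hypothetical reconstruction guessed from that output: the table name t, the columns f1/f2, and the list-partition layout are all assumptions, with the list values interleaved so that the ORDER BY cursor plan has to use a MergeAppend over per-partition index scans rather than an ordered Append:

```sql
-- Sketch of a reproducer for the MergeAppend WHERE CURRENT OF bug;
-- names and partitioning layout are guesses, not the attached script.
create table t (f1 int, f2 text) partition by list (f1);
create table t_odd  partition of t for values in (1, 3);
create table t_even partition of t for values in (2);
create index on t (f1);
insert into t values (1, 'one'), (2, 'two'), (3, 'three');

begin;
declare c cursor for select * from t order by f1;
fetch from c;  -- cursor is now on (1, 'one')
-- Before the fix, WHERE CURRENT OF could match whichever tuple each
-- child scan of the MergeAppend happened to be positioned on, so
-- repeated updates could touch rows other than f1 = 1:
update t set f2 = f2 || ' updated' where current of c;
update t set f2 = f2 || ' updated' where current of c;
update t set f2 = f2 || ' updated' where current of c;
commit;
select * from t order by f1;
```

Per the output shown above, only the f1 = 1 row should come back "updated"; with the bug, all three rows did. After the fix described in the discussion, such a cursor would presumably be rejected for WHERE CURRENT OF rather than producing false matches.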
[ { "msg_contents": "While discussing the topic of foreign key performance off-list with\nRobert and Corey (also came up briefly on the list recently [1], [2]),\na few ideas were thrown around to simplify our current system of RI\nchecks to enforce foreign keys with the aim of reducing some of its\noverheads. The two main aspects of how we do these checks that\nseemingly cause the most overhead are:\n\n* Using row-level triggers that are fired during the modification of\nthe referencing and the referenced relations to perform them\n\n* Using plain SQL queries issued over SPI\n\nThere is a discussion nearby titled \"More efficient RI checks - take\n2\" [2] to address this problem from the viewpoint that it is using\nrow-level triggers that causes the most overhead, although there are\nsome posts mentioning that SQL-over-SPI is not without blame here. I\ndecided to focus on the latter aspect and tried reimplementing some\nchecks such that SPI can be skipped altogether.\n\nI started with the check that's performed when inserting into or\nupdating the referencing table to confirm that the new row points to a\nvalid row in the referenced relation. The corresponding SQL is this:\n\nSELECT 1 FROM pk_rel x WHERE x.pkey = $1 FOR KEY SHARE OF x\n\n$1 is the value of the foreign key of the new row. If the query\nreturns a row, all good. Thanks to SPI, or its use of plan caching,\nthe query is re-planned only a handful of times before making a\ngeneric plan that is then saved and reused, which looks like this:\n\n QUERY PLAN\n--------------------------------------\n LockRows\n -> Index Scan using pk_pkey on pk x\n Index Cond: (a = $1)\n(3 rows)\n\nSo in most cases, the trigger's function would only execute the plan\nthat's already there, at least in a given session. 
That's good but\nwhat we realized would be even better is if we didn't have to\n\"execute\" a full-fledged \"plan\" for this, that is, to simply find out\nwhether a row containing the key we're looking for exists in the\nreferenced relation and if found lock it. Directly scanning the index\nand locking it directly with table_tuple_lock() like ExecLockRows()\ndoes gives us exactly that behavior, which seems simple enough to be\ndone in a not-so-long local function in ri_trigger.c. I gave that a\ntry and came up with the attached. It also takes care of the case\nwhere the referenced relation is partitioned in which case its\nappropriate leaf partition's index is scanned.\n\nThe patch results in ~2x improvement in the performance of inserts and\nupdates on referencing tables:\n\ncreate table p (a numeric primary key);\ninsert into p select generate_series(1, 1000000);\ncreate table f (a bigint references p);\n\n-- unpatched\ninsert into f select generate_series(1, 2000000, 2);\nINSERT 0 1000000\nTime: 6340.733 ms (00:06.341)\n\nupdate f set a = a + 1;\nUPDATE 1000000\nTime: 7490.906 ms (00:07.491)\n\n-- patched\ninsert into f select generate_series(1, 2000000, 2);\nINSERT 0 1000000\nTime: 3340.808 ms (00:03.341)\n\nupdate f set a = a + 1;\nUPDATE 1000000\nTime: 4178.171 ms (00:04.178)\n\nThe improvement is even more dramatic when the referenced table (that\nwe're no longer querying over SPI) is partitioned. 
Here are the\nnumbers when the PK relation has 1000 hash partitions.\n\nUnpatched:\n\ninsert into f select generate_series(1, 2000000, 2);\nINSERT 0 1000000\nTime: 35898.783 ms (00:35.899)\n\nupdate f set a = a + 1;\nUPDATE 1000000\nTime: 37736.294 ms (00:37.736)\n\nPatched:\n\ninsert into f select generate_series(1, 2000000, 2);\nINSERT 0 1000000\nTime: 5633.377 ms (00:05.633)\n\nupdate f set a = a + 1;\nUPDATE 1000000\nTime: 6345.029 ms (00:06.345)\n\nThat's over ~5x improvement!\n\nWhile the above case seemed straightforward enough for skipping SPI,\nit seems a bit hard to do the same for other cases where we query the\n*referencing* relation during an operation on the referenced table\n(for example, checking if the row being deleted is still referenced),\nbecause the plan in those cases is not predictably an index scan.\nAlso, the filters in those queries are more than likely to not match\nthe partition key of a partitioned referencing relation, so all\npartitions will have to scanned. I have left those cases as future\nwork.\n\nThe patch seems simple enough to consider for inclusion in v14 unless\nof course we stumble into some dealbreaker(s). 
I will add this to\nMarch CF.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CADkLM%3DcTt_8Fg1Jtij5j%2BQEBOxz9Cuu4DiMDYOwdtktDAKzuLw%40mail.gmail.com\n\n[2] https://www.postgresql.org/message-id/1813.1586363881%40antos", "msg_date": "Mon, 18 Jan 2021 21:39:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "simplifying foreign key/RI checks" }, { "msg_contents": "Hi,\nI was looking at this statement:\n\ninsert into f select generate_series(1, 2000000, 2);\n\nSince certain generated values (the second half) are not in table p,\nwouldn't insertion for those values fail ?\nI tried a scaled down version (1000th) of your example:\n\nyugabyte=# insert into f select generate_series(1, 2000, 2);\nERROR: insert or update on table \"f\" violates foreign key constraint\n\"f_a_fkey\"\nDETAIL: Key (a)=(1001) is not present in table \"p\".\n\nFor v1-0002-Avoid-using-SPI-for-some-RI-checks.patch :\n\n+ * Collect partition key values from the unique key.\n\nAt the end of the nested loop, should there be an assertion\nthat partkey->partnatts partition key values have been found ?\nThis can be done by using a counter (initialized to 0) which is incremented\nwhen a match is found by the inner loop.\n\nCheers\n\nOn Mon, Jan 18, 2021 at 4:40 AM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> While discussing the topic of foreign key performance off-list with\n> Robert and Corey (also came up briefly on the list recently [1], [2]),\n> a few ideas were thrown around to simplify our current system of RI\n> checks to enforce foreign keys with the aim of reducing some of its\n> overheads. 
The two main aspects of how we do these checks that\n> seemingly cause the most overhead are:\n>\n> * Using row-level triggers that are fired during the modification of\n> the referencing and the referenced relations to perform them\n>\n> * Using plain SQL queries issued over SPI\n>\n> There is a discussion nearby titled \"More efficient RI checks - take\n> 2\" [2] to address this problem from the viewpoint that it is using\n> row-level triggers that causes the most overhead, although there are\n> some posts mentioning that SQL-over-SPI is not without blame here. I\n> decided to focus on the latter aspect and tried reimplementing some\n> checks such that SPI can be skipped altogether.\n>\n> I started with the check that's performed when inserting into or\n> updating the referencing table to confirm that the new row points to a\n> valid row in the referenced relation. The corresponding SQL is this:\n>\n> SELECT 1 FROM pk_rel x WHERE x.pkey = $1 FOR KEY SHARE OF x\n>\n> $1 is the value of the foreign key of the new row. If the query\n> returns a row, all good. Thanks to SPI, or its use of plan caching,\n> the query is re-planned only a handful of times before making a\n> generic plan that is then saved and reused, which looks like this:\n>\n> QUERY PLAN\n> --------------------------------------\n> LockRows\n> -> Index Scan using pk_pkey on pk x\n> Index Cond: (a = $1)\n> (3 rows)\n>\n> So in most cases, the trigger's function would only execute the plan\n> that's already there, at least in a given session. That's good but\n> what we realized would be even better is if we didn't have to\n> \"execute\" a full-fledged \"plan\" for this, that is, to simply find out\n> whether a row containing the key we're looking for exists in the\n> referenced relation and if found lock it. 
Directly scanning the index\n> and locking it directly with table_tuple_lock() like ExecLockRows()\n> does gives us exactly that behavior, which seems simple enough to be\n> done in a not-so-long local function in ri_trigger.c. I gave that a\n> try and came up with the attached. It also takes care of the case\n> where the referenced relation is partitioned in which case its\n> appropriate leaf partition's index is scanned.\n>\n> The patch results in ~2x improvement in the performance of inserts and\n> updates on referencing tables:\n>\n> create table p (a numeric primary key);\n> insert into p select generate_series(1, 1000000);\n> create table f (a bigint references p);\n>\n> -- unpatched\n> insert into f select generate_series(1, 2000000, 2);\n> INSERT 0 1000000\n> Time: 6340.733 ms (00:06.341)\n>\n> update f set a = a + 1;\n> UPDATE 1000000\n> Time: 7490.906 ms (00:07.491)\n>\n> -- patched\n> insert into f select generate_series(1, 2000000, 2);\n> INSERT 0 1000000\n> Time: 3340.808 ms (00:03.341)\n>\n> update f set a = a + 1;\n> UPDATE 1000000\n> Time: 4178.171 ms (00:04.178)\n>\n> The improvement is even more dramatic when the referenced table (that\n> we're no longer querying over SPI) is partitioned. 
Here are the\n> numbers when the PK relation has 1000 hash partitions.\n>\n> Unpatched:\n>\n> insert into f select generate_series(1, 2000000, 2);\n> INSERT 0 1000000\n> Time: 35898.783 ms (00:35.899)\n>\n> update f set a = a + 1;\n> UPDATE 1000000\n> Time: 37736.294 ms (00:37.736)\n>\n> Patched:\n>\n> insert into f select generate_series(1, 2000000, 2);\n> INSERT 0 1000000\n> Time: 5633.377 ms (00:05.633)\n>\n> update f set a = a + 1;\n> UPDATE 1000000\n> Time: 6345.029 ms (00:06.345)\n>\n> That's over ~5x improvement!\n>\n> While the above case seemed straightforward enough for skipping SPI,\n> it seems a bit hard to do the same for other cases where we query the\n> *referencing* relation during an operation on the referenced table\n> (for example, checking if the row being deleted is still referenced),\n> because the plan in those cases is not predictably an index scan.\n> Also, the filters in those queries are more than likely to not match\n> the partition key of a partitioned referencing relation, so all\n> partitions will have to scanned. I have left those cases as future\n> work.\n>\n> The patch seems simple enough to consider for inclusion in v14 unless\n> of course we stumble into some dealbreaker(s). 
I will add this to\n> March CF.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>\n> [1]\n> https://www.postgresql.org/message-id/CADkLM%3DcTt_8Fg1Jtij5j%2BQEBOxz9Cuu4DiMDYOwdtktDAKzuLw%40mail.gmail.com\n>\n> [2] https://www.postgresql.org/message-id/1813.1586363881%40antos\n>", "msg_date": "Mon, 18 Jan 2021 09:48:47 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "po 18. 1. 2021 v 13:40 odesílatel Amit Langote <amitlangote09@gmail.com>\nnapsal:\n\n> While discussing the topic of foreign key performance off-list with\n> Robert and Corey (also came up briefly on the list recently [1], [2]),\n> a few ideas were thrown around to simplify our current system of RI\n> checks to enforce foreign keys with the aim of reducing some of its\n> overheads. 
The corresponding SQL is this:\n>\n> SELECT 1 FROM pk_rel x WHERE x.pkey = $1 FOR KEY SHARE OF x\n>\n> $1 is the value of the foreign key of the new row. If the query\n> returns a row, all good. Thanks to SPI, or its use of plan caching,\n> the query is re-planned only a handful of times before making a\n> generic plan that is then saved and reused, which looks like this:\n>\n> QUERY PLAN\n> --------------------------------------\n> LockRows\n> -> Index Scan using pk_pkey on pk x\n> Index Cond: (a = $1)\n> (3 rows)\n>\n>\n>\n\nWhat is performance when the referenced table is small? - a lot of\ncodebooks are small between 1000 to 10K rows.", "msg_date": "Mon, 18 Jan 2021 19:00:38 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Tue, Jan 19, 2021 at 3:01 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> po 18. 1. 2021 v 13:40 odesílatel Amit Langote <amitlangote09@gmail.com> napsal:\n>> I started with the check that's performed when inserting into or\n>> updating the referencing table to confirm that the new row points to a\n>> valid row in the referenced relation.  The corresponding SQL is this:\n>>\n>> SELECT 1 FROM pk_rel x WHERE x.pkey = $1 FOR KEY SHARE OF x\n>>\n>> $1 is the value of the foreign key of the new row.  If the query\n>> returns a row, all good.  Thanks to SPI, or its use of plan caching,\n>> the query is re-planned only a handful of times before making a\n>> generic plan that is then saved and reused, which looks like this:\n>>\n>>               QUERY PLAN\n>> --------------------------------------\n>>  LockRows\n>>    ->  Index Scan using pk_pkey on pk x\n>>          Index Cond: (a = $1)\n>> (3 rows)\n>\n>\n> What is performance when the referenced table is small? 
- a lot of codebooks are small between 1000 to 10K rows.\n\nI see the same ~2x improvement.\n\ncreate table p (a numeric primary key);\ninsert into p select generate_series(1, 1000);\ncreate table f (a bigint references p);\n\nUnpatched:\n\ninsert into f select i%1000+1 from generate_series(1, 1000000) i;\nINSERT 0 1000000\nTime: 5461.377 ms (00:05.461)\n\n\nPatched:\n\ninsert into f select i%1000+1 from generate_series(1, 1000000) i;\nINSERT 0 1000000\nTime: 2357.440 ms (00:02.357)\n\nThat's expected because the overhead of using SPI to check the PK\ntable, which the patch gets rid of, is the same no matter the size of\nthe index to be scanned.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Jan 2021 11:08:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Tue, Jan 19, 2021 at 2:47 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> I was looking at this statement:\n>\n> insert into f select generate_series(1, 2000000, 2);\n>\n> Since certain generated values (the second half) are not in table p, wouldn't insertion for those values fail ?\n> I tried a scaled down version (1000th) of your example:\n>\n> yugabyte=# insert into f select generate_series(1, 2000, 2);\n> ERROR: insert or update on table \"f\" violates foreign key constraint \"f_a_fkey\"\n> DETAIL: Key (a)=(1001) is not present in table \"p\".\n\nSorry, a wrong copy-paste by me. 
Try this:\n\ncreate table p (a numeric primary key);\ninsert into p select generate_series(1, 2000000);\ncreate table f (a bigint references p);\n\n-- Unpatched\ninsert into f select generate_series(1, 2000000, 2);\nINSERT 0 1000000\nTime: 6527.652 ms (00:06.528)\n\nupdate f set a = a + 1;\nUPDATE 1000000\nTime: 8108.310 ms (00:08.108)\n\n-- Patched:\ninsert into f select generate_series(1, 2000000, 2);\nINSERT 0 1000000\nTime: 3312.193 ms (00:03.312)\n\nupdate f set a = a + 1;\nUPDATE 1000000\nTime: 4292.807 ms (00:04.293)\n\n> For v1-0002-Avoid-using-SPI-for-some-RI-checks.patch :\n>\n> + * Collect partition key values from the unique key.\n>\n> At the end of the nested loop, should there be an assertion that partkey->partnatts partition key values have been found ?\n> This can be done by using a counter (initialized to 0) which is incremented when a match is found by the inner loop.\n\nI've updated the patch to add the Assert. Thanks for taking a look.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 19 Jan 2021 11:45:29 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Thanks for the quick response.\n\n+ if (mapped_partkey_attnums[i] == pk_attnums[j])\n+ {\n+ partkey_vals[i] = pk_vals[j];\n+ partkey_isnull[i] = pk_nulls[j] == 'n' ? 
true : false;\n+ i++;\n+ break;\n\nThe way counter (i) is incremented is out of my expectation.\nIn the rare case, where some i doesn't have corresponding pk_attnums[j],\nwouldn't there be a dead loop ?\n\nI think the goal of adding the assertion should be not loop infinitely even\nif the invariant is not satisfied.\n\nI guess a counter other than i would be better for this purpose.\n\nCheers\n\nOn Mon, Jan 18, 2021 at 6:45 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Tue, Jan 19, 2021 at 2:47 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > Hi,\n> > I was looking at this statement:\n> >\n> > insert into f select generate_series(1, 2000000, 2);\n> >\n> > Since certain generated values (the second half) are not in table p,\n> wouldn't insertion for those values fail ?\n> > I tried a scaled down version (1000th) of your example:\n> >\n> > yugabyte=# insert into f select generate_series(1, 2000, 2);\n> > ERROR: insert or update on table \"f\" violates foreign key constraint\n> \"f_a_fkey\"\n> > DETAIL: Key (a)=(1001) is not present in table \"p\".\n>\n> Sorry, a wrong copy-paste by me. 
Try this:\n>\n> create table p (a numeric primary key);\n> insert into p select generate_series(1, 2000000);\n> create table f (a bigint references p);\n>\n> -- Unpatched\n> insert into f select generate_series(1, 2000000, 2);\n> INSERT 0 1000000\n> Time: 6527.652 ms (00:06.528)\n>\n> update f set a = a + 1;\n> UPDATE 1000000\n> Time: 8108.310 ms (00:08.108)\n>\n> -- Patched:\n> insert into f select generate_series(1, 2000000, 2);\n> INSERT 0 1000000\n> Time: 3312.193 ms (00:03.312)\n>\n> update f set a = a + 1;\n> UPDATE 1000000\n> Time: 4292.807 ms (00:04.293)\n>\n> > For v1-0002-Avoid-using-SPI-for-some-RI-checks.patch :\n> >\n> > +        * Collect partition key values from the unique key.\n> >\n> > At the end of the nested loop, should there be an assertion that\n> partkey->partnatts partition key values have been found ?\n> > This can be done by using a counter (initialized to 0) which is\n> incremented when a match is found by the inner loop.\n>\n> I've updated the patch to add the Assert. Thanks for taking a look.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Mon, 18 Jan 2021 19:01:47 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "\nOn Tue, 19 Jan 2021 at 10:45, Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Jan 19, 2021 at 2:47 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>>\n>> Hi,\n>> I was looking at this statement:\n>>\n>> insert into f select generate_series(1, 2000000, 2);\n>>\n>> Since certain generated values (the second half) are not in table p, wouldn't insertion for those values fail ?\n>> I tried a scaled down version (1000th) of your example:\n>>\n>> yugabyte=# insert into f select generate_series(1, 2000, 2);\n>> ERROR:  insert or update on table \"f\" violates foreign key constraint \"f_a_fkey\"\n>> DETAIL:  Key (a)=(1001) is not present in table \"p\".\n>\n> Sorry, a wrong copy-paste by me. 
Try this:\n>\n> create table p (a numeric primary key);\n> insert into p select generate_series(1, 2000000);\n> create table f (a bigint references p);\n>\n> -- Unpatched\n> insert into f select generate_series(1, 2000000, 2);\n> INSERT 0 1000000\n> Time: 6527.652 ms (00:06.528)\n>\n> update f set a = a + 1;\n> UPDATE 1000000\n> Time: 8108.310 ms (00:08.108)\n>\n> -- Patched:\n> insert into f select generate_series(1, 2000000, 2);\n> INSERT 0 1000000\n> Time: 3312.193 ms (00:03.312)\n>\n> update f set a = a + 1;\n> UPDATE 1000000\n> Time: 4292.807 ms (00:04.293)\n>\n>> For v1-0002-Avoid-using-SPI-for-some-RI-checks.patch :\n>>\n>> + * Collect partition key values from the unique key.\n>>\n>> At the end of the nested loop, should there be an assertion that partkey->partnatts partition key values have been found ?\n>> This can be done by using a counter (initialized to 0) which is incremented when a match is found by the inner loop.\n>\n> I've updated the patch to add the Assert. Thanks for taking a look.\n\nAfter apply the v2 patches, here are some warnings:\n\nIn file included from /home/japin/Codes/postgresql/Debug/../src/include/postgres.h:47:0,\n from /home/japin/Codes/postgresql/Debug/../src/backend/utils/adt/ri_triggers.c:24:\n/home/japin/Codes/postgresql/Debug/../src/backend/utils/adt/ri_triggers.c: In function ‘ri_PrimaryKeyExists’:\n/home/japin/Codes/postgresql/Debug/../src/include/utils/elog.h:134:5: warning: this statement may fall through [-Wimplicit-fallthrough=]\n do { \\\n ^\n/home/japin/Codes/postgresql/Debug/../src/include/utils/elog.h:156:2: note: in expansion of macro ‘ereport_domain’\n ereport_domain(elevel, TEXTDOMAIN, __VA_ARGS__)\n ^~~~~~~~~~~~~~\n/home/japin/Codes/postgresql/Debug/../src/include/utils/elog.h:229:2: note: in expansion of macro ‘ereport’\n ereport(elevel, errmsg_internal(__VA_ARGS__))\n ^~~~~~~\n/home/japin/Codes/postgresql/Debug/../src/backend/utils/adt/ri_triggers.c:417:5: note: in expansion of macro ‘elog’\n elog(ERROR, 
\"unexpected table_tuple_lock status: %u\", res);\n ^~~~\n/home/japin/Codes/postgresql/Debug/../src/backend/utils/adt/ri_triggers.c:419:4: note: here\n default:\n ^~~~~~~\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Tue, 19 Jan 2021 11:11:50 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "út 19. 1. 2021 v 3:08 odesílatel Amit Langote <amitlangote09@gmail.com>\nnapsal:\n\n> On Tue, Jan 19, 2021 at 3:01 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > po 18. 1. 2021 v 13:40 odesílatel Amit Langote <amitlangote09@gmail.com>\n> napsal:\n> >> I started with the check that's performed when inserting into or\n> >> updating the referencing table to confirm that the new row points to a\n> >> valid row in the referenced relation. The corresponding SQL is this:\n> >>\n> >> SELECT 1 FROM pk_rel x WHERE x.pkey = $1 FOR KEY SHARE OF x\n> >>\n> >> $1 is the value of the foreign key of the new row. If the query\n> >> returns a row, all good. Thanks to SPI, or its use of plan caching,\n> >> the query is re-planned only a handful of times before making a\n> >> generic plan that is then saved and reused, which looks like this:\n> >>\n> >> QUERY PLAN\n> >> --------------------------------------\n> >> LockRows\n> >> -> Index Scan using pk_pkey on pk x\n> >> Index Cond: (a = $1)\n> >> (3 rows)\n> >\n> >\n> > What is performance when the referenced table is small? 
- a lot of\n> codebooks are small between 1000 to 10K rows.\n>\n> I see the same ~2x improvement.\n>\n> create table p (a numeric primary key);\n> insert into p select generate_series(1, 1000);\n> create table f (a bigint references p);\n>\n> Unpatched:\n>\n> insert into f select i%1000+1 from generate_series(1, 1000000) i;\n> INSERT 0 1000000\n> Time: 5461.377 ms (00:05.461)\n>\n>\n> Patched:\n>\n> insert into f select i%1000+1 from generate_series(1, 1000000) i;\n> INSERT 0 1000000\n> Time: 2357.440 ms (00:02.357)\n>\n> That's expected because the overhead of using SPI to check the PK\n> table, which the patch gets rid of, is the same no matter the size of\n> the index to be scanned.\n>\n\nIt looks very well.\n\nRegards\n\nPavel\n\n\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>\n", "msg_date": "Tue, 19 Jan 2021 05:17:16 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" },
{ "msg_contents": ">\n>\n> In file included from\n> /home/japin/Codes/postgresql/Debug/../src/include/postgres.h:47:0,\n> from\n> /home/japin/Codes/postgresql/Debug/../src/backend/utils/adt/ri_triggers.c:24:\n> /home/japin/Codes/postgresql/Debug/../src/backend/utils/adt/ri_triggers.c:\n> In function ‘ri_PrimaryKeyExists’:\n> /home/japin/Codes/postgresql/Debug/../src/include/utils/elog.h:134:5:\n> warning: this statement may fall through [-Wimplicit-fallthrough=]\n> do { \\\n> ^\n> /home/japin/Codes/postgresql/Debug/../src/include/utils/elog.h:156:2:\n> note: in expansion of macro ‘ereport_domain’\n> ereport_domain(elevel, TEXTDOMAIN, __VA_ARGS__)\n> ^~~~~~~~~~~~~~\n> /home/japin/Codes/postgresql/Debug/../src/include/utils/elog.h:229:2:\n> note: in expansion of macro ‘ereport’\n> ereport(elevel, errmsg_internal(__VA_ARGS__))\n> ^~~~~~~\n> /home/japin/Codes/postgresql/Debug/../src/backend/utils/adt/ri_triggers.c:417:5:\n> note: in expansion of macro ‘elog’\n> elog(ERROR, \"unexpected table_tuple_lock status: %u\", res);\n> ^~~~\n> 
/home/japin/Codes/postgresql/Debug/../src/backend/utils/adt/ri_triggers.c:419:4:\n> note: here\n> default:\n> ^~~~~~~\n>\n> --\n> Regrads,\n> Japin Li.\n> ChengDu WenWu Information Technology Co.,Ltd.\n>\n\nI also get this warning. 
Adding a \"break;\" at line 418 resolves the warning.", "msg_date": "Tue, 19 Jan 2021 00:56:02 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Tue, Jan 19, 2021 at 2:56 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n>> In file included from /home/japin/Codes/postgresql/Debug/../src/include/postgres.h:47:0,\n>> from /home/japin/Codes/postgresql/Debug/../src/backend/utils/adt/ri_triggers.c:24:\n>> /home/japin/Codes/postgresql/Debug/../src/backend/utils/adt/ri_triggers.c: In function ‘ri_PrimaryKeyExists’:\n>> /home/japin/Codes/postgresql/Debug/../src/include/utils/elog.h:134:5: warning: this statement may fall through [-Wimplicit-fallthrough=]\n>> do { \\\n>> ^\n>> /home/japin/Codes/postgresql/Debug/../src/include/utils/elog.h:156:2: note: in expansion of macro ‘ereport_domain’\n>> ereport_domain(elevel, TEXTDOMAIN, __VA_ARGS__)\n>> ^~~~~~~~~~~~~~\n>> /home/japin/Codes/postgresql/Debug/../src/include/utils/elog.h:229:2: note: in expansion of macro ‘ereport’\n>> ereport(elevel, errmsg_internal(__VA_ARGS__))\n>> ^~~~~~~\n>> /home/japin/Codes/postgresql/Debug/../src/backend/utils/adt/ri_triggers.c:417:5: note: in expansion of macro ‘elog’\n>> elog(ERROR, \"unexpected table_tuple_lock status: %u\", res);\n>> ^~~~\n>> /home/japin/Codes/postgresql/Debug/../src/backend/utils/adt/ri_triggers.c:419:4: note: here\n>> default:\n>> ^~~~~~~\n\nThanks, will fix it.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Jan 2021 15:26:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Mon, Jan 18, 2021 at 9:45 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Tue, Jan 19, 2021 at 2:47 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > Hi,\n> > I was looking at this statement:\n> >\n> > insert into f select 
generate_series(1, 2000000, 2);\n> >\n> > Since certain generated values (the second half) are not in table p,\n> wouldn't insertion for those values fail ?\n> > I tried a scaled down version (1000th) of your example:\n> >\n> > yugabyte=# insert into f select generate_series(1, 2000, 2);\n> > ERROR: insert or update on table \"f\" violates foreign key constraint\n> \"f_a_fkey\"\n> > DETAIL: Key (a)=(1001) is not present in table \"p\".\n>\n> Sorry, a wrong copy-paste by me. Try this:\n>\n> create table p (a numeric primary key);\n> insert into p select generate_series(1, 2000000);\n> create table f (a bigint references p);\n>\n> -- Unpatched\n> insert into f select generate_series(1, 2000000, 2);\n> INSERT 0 1000000\n> Time: 6527.652 ms (00:06.528)\n>\n> update f set a = a + 1;\n> UPDATE 1000000\n> Time: 8108.310 ms (00:08.108)\n>\n> -- Patched:\n> insert into f select generate_series(1, 2000000, 2);\n> INSERT 0 1000000\n> Time: 3312.193 ms (00:03.312)\n>\n> update f set a = a + 1;\n> UPDATE 1000000\n> Time: 4292.807 ms (00:04.293)\n>\n> > For v1-0002-Avoid-using-SPI-for-some-RI-checks.patch :\n> >\n> > + * Collect partition key values from the unique key.\n> >\n> > At the end of the nested loop, should there be an assertion that\n> partkey->partnatts partition key values have been found ?\n> > This can be done by using a counter (initialized to 0) which is\n> incremented when a match is found by the inner loop.\n>\n> I've updated the patch to add the Assert. Thanks for taking a look.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\n\nv2 patch applies and passes make check and make check-world. Perhaps, given\nthe missing break at line 418 without any tests failing, we could add\nanother regression test if we're into 100% code path coverage. 
As it is, I\nthink the compiler warning was a sufficient alert.\n\nThe code is easy to read, and the comments touch on the major points of\nwhat complexities arise from partitioned tables.\n\nA somewhat pedantic complaint I have brought up off-list is that this patch\ncontinues the pattern of the variable and function names making the\nassumption that the foreign key is referencing the primary key of the\nreferenced table. Foreign key constraints need only reference a unique\nindex, it doesn't have to be the primary key. Granted, that unique index is\nbehaving exactly as a primary key would, so conceptually it is very\nsimilar, but keeping with the existing naming (pk_rel, pk_type, etc) can\nlead a developer to think that it would be just as correct to find the\nreferenced relation and get the primary key index from there, which would\nnot always be correct. This patch correctly grabs the index from the\nconstraint itself, so no problem there.\n\nI like that this patch changes the absolute minimum of the code in order to\nget a very significant performance benefit. It does so in a way that should\nreduce resource pressure found in other places [1]. This will in turn\nreduce the performance penalty of \"doing the right thing\" in terms of\ndefining enforced foreign keys. 
It seems to get a clearer performance boost\nthan was achieved with previous efforts at statement level triggers.\n\nThis patch completely sidesteps the DELETE case, which has more insidious\nperformance implications, but is also far less common, and whose solution\nwill likely be very different.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKkQ508Z6r5e3jdqhfPWSzSajLpHo3OYYOAmfeSAuPTo5VGfgw@mail.gmail.com\n\n
", "msg_date": "Tue, 19 Jan 2021 01:46:28 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Tue, Jan 19, 2021 at 3:46 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n> v2 patch applies and passes make check and make check-world. Perhaps, given\n> the missing break at line 418 without any tests failing, we could add\n> another regression test if we're into 100% code path coverage. As it is, I\n> think the compiler warning was a sufficient alert.\n\nThanks for the review. 
I will look into checking the coverage.\n\n> The code is easy to read, and the comments touch on the major points of what complexities arise from partitioned tables.\n>\n> A somewhat pedantic complaint I have brought up off-list is that this patch continues the pattern of the variable and function names making the assumption that the foreign key is referencing the primary key of the referenced table. Foreign key constraints need only reference a unique index, it doesn't have to be the primary key. Granted, that unique index is behaving exactly as a primary key would, so conceptually it is very similar, but keeping with the existing naming (pk_rel, pk_type, etc) can lead a developer to think that it would be just as correct to find the referenced relation and get the primary key index from there, which would not always be correct. This patch correctly grabs the index from the constraint itself, so no problem there.\n\nI decided not to deviate from pk_ terminology so that the new code\ndoesn't look too different from the other code in the file. Although,\nI guess we can at least call the main function\nri_ReferencedKeyExists() instead of ri_PrimaryKeyExists(), so I've\nchanged that.\n\n> I like that this patch changes the absolute minimum of the code in order to get a very significant performance benefit. It does so in a way that should reduce resource pressure found in other places [1]. This will in turn reduce the performance penalty of \"doing the right thing\" in terms of defining enforced foreign keys. It seems to get a clearer performance boost than was achieved with previous efforts at statement level triggers.\n>\n> [1] https://www.postgresql.org/message-id/CAKkQ508Z6r5e3jdqhfPWSzSajLpHo3OYYOAmfeSAuPTo5VGfgw@mail.gmail.com\n\nThanks. I hadn't noticed [1] before today, but after looking it over,\nit seems that what is being proposed there can still be of use. 
As\nlong as SPI is used in ri_trigger.c, it makes sense to consider any\ntweaks addressing its negative impact, especially if they are not very\ninvasive. There's this patch too from the last month:\nhttps://commitfest.postgresql.org/32/2930/\n\n> This patch completely sidesteps the DELETE case, which has more insidious performance implications, but is also far less common, and whose solution will likely be very different.\n\nYeah, we should continue looking into the ways to make referenced-side\nRI checks be less bloated.\n\nI've attached the updated patch.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 19 Jan 2021 16:44:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Tue, Jan 19, 2021 at 12:00 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> + if (mapped_partkey_attnums[i] == pk_attnums[j])\n> + {\n> + partkey_vals[i] = pk_vals[j];\n> + partkey_isnull[i] = pk_nulls[j] == 'n' ? true : false;\n> + i++;\n> + break;\n>\n> The way counter (i) is incremented is out of my expectation.\n> In the rare case, where some i doesn't have corresponding pk_attnums[j], wouldn't there be a dead loop ?\n>\n> I think the goal of adding the assertion should be not loop infinitely even if the invariant is not satisfied.\n>\n> I guess a counter other than i would be better for this purpose.\n\nI have done that in v3. Thanks.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Jan 2021 16:46:30 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": ">\n> I decided not to deviate from pk_ terminology so that the new code\n> doesn't look too different from the other code in the file. 
Although,\n> I guess we can at least call the main function\n> ri_ReferencedKeyExists() instead of ri_PrimaryKeyExists(), so I've\n> changed that.\n>\n\nI agree with leaving the existing terminology where it is for this patch.\nChanging the function name is probably enough to alert the reader that the\nthings that are called pks may not be precisely that.\n\nI decided not to deviate from pk_ terminology so that the new code\ndoesn't look too different from the other code in the file.  Although,\nI guess we can at least call the main function\nri_ReferencedKeyExists() instead of ri_PrimaryKeyExists(), so I've\nchanged that.I agree with leaving the existing terminology where it is for this patch. Changing the function name is probably enough to alert the reader that the things that are called pks may not be precisely that.", "msg_date": "Tue, 19 Jan 2021 12:55:10 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": ">\n>\n>\n> I decided not to deviate from pk_ terminology so that the new code\n> doesn't look too different from the other code in the file. Although,\n> I guess we can at least call the main function\n> ri_ReferencedKeyExists() instead of ri_PrimaryKeyExists(), so I've\n> changed that.\n>\n\nI think that's a nice compromise, it makes the reader aware of the concept.\n\n\n>\n> I've attached the updated patch.\n>\n\nMissing \"break\" added. Check.\nComment updated. Check.\nFunction renamed. Check.\nAttribute mapping matching test (and assertion) added. Check.\nPatch applies to an as-of-today master, passes make check and check world.\nNo additional regression tests required, as no new functionality is\nintroduced.\nNo docs required, as there is nothing user-facing.\n\nQuestions:\n1. There's a palloc for mapped_partkey_attnums, which is never freed, is\nthe prevailing memory context short lived enough that we don't care?\n2. 
Same question for the AtrrMap map, should there be a free_attrmap().\n\n\nI decided not to deviate from pk_ terminology so that the new code\ndoesn't look too different from the other code in the file.  Although,\nI guess we can at least call the main function\nri_ReferencedKeyExists() instead of ri_PrimaryKeyExists(), so I've\nchanged that.I think that's a nice compromise, it makes the reader aware of the concept. \n\nI've attached the updated patch.Missing \"break\" added. Check.Comment updated. Check.Function renamed. Check.Attribute mapping matching test (and assertion) added. Check.Patch applies to an as-of-today master, passes make check and check world.No additional regression tests required, as no new functionality is introduced.No docs required, as there is nothing user-facing.Questions:1. There's a palloc for mapped_partkey_attnums, which is never freed, is the prevailing memory context short lived enough that we don't care?2. Same question for the AtrrMap map, should there be a free_attrmap().", "msg_date": "Fri, 22 Jan 2021 01:22:07 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Fri, Jan 22, 2021 at 3:22 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n>> I decided not to deviate from pk_ terminology so that the new code\n>> doesn't look too different from the other code in the file. Although,\n>> I guess we can at least call the main function\n>> ri_ReferencedKeyExists() instead of ri_PrimaryKeyExists(), so I've\n>> changed that.\n>\n> I think that's a nice compromise, it makes the reader aware of the concept.\n>>\n>> I've attached the updated patch.\n>\n> Missing \"break\" added. Check.\n> Comment updated. Check.\n> Function renamed. Check.\n> Attribute mapping matching test (and assertion) added. 
Check.\n> Patch applies to an as-of-today master, passes make check and check world.\n> No additional regression tests required, as no new functionality is introduced.\n> No docs required, as there is nothing user-facing.\n\nThanks for the review.\n\n> Questions:\n> 1. There's a palloc for mapped_partkey_attnums, which is never freed, is the prevailing memory context short lived enough that we don't care?\n> 2. Same question for the AtrrMap map, should there be a free_attrmap().\n\nI hadn't checked, but yes, the prevailing context is\nAfterTriggerTupleContext, a short-lived one that is reset for every\ntrigger event tuple. I'm still inclined to explicitly free those\nobjects, so changed like that. While at it, I also changed\nmapped_partkey_attnums to root_partattrs for readability.\n\nAttached v4.\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 23 Jan 2021 16:10:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Hi,\n\n+ for (i = 0; i < riinfo->nkeys; i++)\n+ {\n+ Oid eq_opr = eq_oprs[i];\n+ Oid typeid = RIAttType(fk_rel, riinfo->fk_attnums[i]);\n+ RI_CompareHashEntry *entry = ri_HashCompareOp(eq_opr, typeid);\n+\n+ if (pk_nulls[i] != 'n' &&\nOidIsValid(entry->cast_func_finfo.fn_oid))\n\nIt seems the pk_nulls[i] != 'n' check can be lifted ahead of the assignment\nto the three local variables. That way, ri_HashCompareOp wouldn't be called\nwhen pk_nulls[i] == 'n'.\n\n+ case TM_Updated:\n+ if (IsolationUsesXactSnapshot())\n...\n+ case TM_Deleted:\n+ if (IsolationUsesXactSnapshot())\n\nIt seems the handling for TM_Updated and TM_Deleted is the same. 
The cases\nfor these two values can be put next to each other (saving one block of\ncode).\n\nCheers\n\nOn Fri, Jan 22, 2021 at 11:10 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Fri, Jan 22, 2021 at 3:22 PM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n> >> I decided not to deviate from pk_ terminology so that the new code\n> >> doesn't look too different from the other code in the file. Although,\n> >> I guess we can at least call the main function\n> >> ri_ReferencedKeyExists() instead of ri_PrimaryKeyExists(), so I've\n> >> changed that.\n> >\n> > I think that's a nice compromise, it makes the reader aware of the\n> concept.\n> >>\n> >> I've attached the updated patch.\n> >\n> > Missing \"break\" added. Check.\n> > Comment updated. Check.\n> > Function renamed. Check.\n> > Attribute mapping matching test (and assertion) added. Check.\n> > Patch applies to an as-of-today master, passes make check and check\n> world.\n> > No additional regression tests required, as no new functionality is\n> introduced.\n> > No docs required, as there is nothing user-facing.\n>\n> Thanks for the review.\n>\n> > Questions:\n> > 1. There's a palloc for mapped_partkey_attnums, which is never freed, is\n> the prevailing memory context short lived enough that we don't care?\n> > 2. Same question for the AtrrMap map, should there be a free_attrmap().\n>\n> I hadn't checked, but yes, the prevailing context is\n> AfterTriggerTupleContext, a short-lived one that is reset for every\n> trigger event tuple. I'm still inclined to explicitly free those\n> objects, so changed like that. 
While at it, I also changed\n> mapped_partkey_attnums to root_partattrs for readability.\n>\n> Attached v4.\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>\n\nHi,+       for (i = 0; i < riinfo->nkeys; i++)+       {+           Oid     eq_opr = eq_oprs[i];+           Oid     typeid = RIAttType(fk_rel, riinfo->fk_attnums[i]);+           RI_CompareHashEntry *entry = ri_HashCompareOp(eq_opr, typeid);++           if (pk_nulls[i] != 'n' && OidIsValid(entry->cast_func_finfo.fn_oid))It seems the pk_nulls[i] != 'n' check can be lifted ahead of the assignment to the three local variables. That way, ri_HashCompareOp wouldn't be called when pk_nulls[i] == 'n'.+           case TM_Updated:+               if (IsolationUsesXactSnapshot())...+           case TM_Deleted:+               if (IsolationUsesXactSnapshot())It seems the handling for TM_Updated and TM_Deleted is the same. The cases for these two values can be put next to each other (saving one block of code).CheersOn Fri, Jan 22, 2021 at 11:10 PM Amit Langote <amitlangote09@gmail.com> wrote:On Fri, Jan 22, 2021 at 3:22 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n>> I decided not to deviate from pk_ terminology so that the new code\n>> doesn't look too different from the other code in the file.  Although,\n>> I guess we can at least call the main function\n>> ri_ReferencedKeyExists() instead of ri_PrimaryKeyExists(), so I've\n>> changed that.\n>\n> I think that's a nice compromise, it makes the reader aware of the concept.\n>>\n>> I've attached the updated patch.\n>\n> Missing \"break\" added. Check.\n> Comment updated. Check.\n> Function renamed. Check.\n> Attribute mapping matching test (and assertion) added. Check.\n> Patch applies to an as-of-today master, passes make check and check world.\n> No additional regression tests required, as no new functionality is introduced.\n> No docs required, as there is nothing user-facing.\n\nThanks for the review.\n\n> Questions:\n> 1. 
There's a palloc for mapped_partkey_attnums, which is never freed, is the prevailing memory context short lived enough that we don't care?\n> 2. Same question for the AtrrMap map, should there be a free_attrmap().\n\nI hadn't checked, but yes, the prevailing context is\nAfterTriggerTupleContext, a short-lived one that is reset for every\ntrigger event tuple.  I'm still inclined to explicitly free those\nobjects, so changed like that.  While at it, I also changed\nmapped_partkey_attnums to root_partattrs for readability.\n\nAttached v4.\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 23 Jan 2021 09:53:59 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Sat, Jan 23, 2021 at 12:52 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n>\n> + for (i = 0; i < riinfo->nkeys; i++)\n> + {\n> + Oid eq_opr = eq_oprs[i];\n> + Oid typeid = RIAttType(fk_rel, riinfo->fk_attnums[i]);\n> + RI_CompareHashEntry *entry = ri_HashCompareOp(eq_opr, typeid);\n> +\n> + if (pk_nulls[i] != 'n' &&\n> OidIsValid(entry->cast_func_finfo.fn_oid))\n>\n> It seems the pk_nulls[i] != 'n' check can be lifted ahead of the\n> assignment to the three local variables. That way, ri_HashCompareOp\n> wouldn't be called when pk_nulls[i] == 'n'.\n>\n> + case TM_Updated:\n> + if (IsolationUsesXactSnapshot())\n> ...\n> + case TM_Deleted:\n> + if (IsolationUsesXactSnapshot())\n>\n> It seems the handling for TM_Updated and TM_Deleted is the same. 
The cases\n> for these two values can be put next to each other (saving one block of\n> code).\n>\n> Cheers\n>\n\nI'll pause on reviewing v4 until you've addressed the suggestions above.\n\nOn Sat, Jan 23, 2021 at 12:52 PM Zhihong Yu <zyu@yugabyte.com> wrote:Hi,+       for (i = 0; i < riinfo->nkeys; i++)+       {+           Oid     eq_opr = eq_oprs[i];+           Oid     typeid = RIAttType(fk_rel, riinfo->fk_attnums[i]);+           RI_CompareHashEntry *entry = ri_HashCompareOp(eq_opr, typeid);++           if (pk_nulls[i] != 'n' && OidIsValid(entry->cast_func_finfo.fn_oid))It seems the pk_nulls[i] != 'n' check can be lifted ahead of the assignment to the three local variables. That way, ri_HashCompareOp wouldn't be called when pk_nulls[i] == 'n'.+           case TM_Updated:+               if (IsolationUsesXactSnapshot())...+           case TM_Deleted:+               if (IsolationUsesXactSnapshot())It seems the handling for TM_Updated and TM_Deleted is the same. The cases for these two values can be put next to each other (saving one block of code).CheersI'll pause on reviewing v4 until you've addressed the suggestions above.", "msg_date": "Sat, 23 Jan 2021 21:26:24 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Sun, Jan 24, 2021 at 11:26 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n> On Sat, Jan 23, 2021 at 12:52 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>>\n>> Hi,\n\nThanks for the review.\n\n>> + for (i = 0; i < riinfo->nkeys; i++)\n>> + {\n>> + Oid eq_opr = eq_oprs[i];\n>> + Oid typeid = RIAttType(fk_rel, riinfo->fk_attnums[i]);\n>> + RI_CompareHashEntry *entry = ri_HashCompareOp(eq_opr, typeid);\n>> +\n>> + if (pk_nulls[i] != 'n' && OidIsValid(entry->cast_func_finfo.fn_oid))\n>>\n>> It seems the pk_nulls[i] != 'n' check can be lifted ahead of the assignment to the three local variables. 
That way, ri_HashCompareOp wouldn't be called when pk_nulls[i] == 'n'.\n\nGood idea, so done. Although, there can't be nulls right now.\n\n>> + case TM_Updated:\n>> + if (IsolationUsesXactSnapshot())\n>> ...\n>> + case TM_Deleted:\n>> + if (IsolationUsesXactSnapshot())\n>>\n>> It seems the handling for TM_Updated and TM_Deleted is the same. The cases for these two values can be put next to each other (saving one block of code).\n\nAh, yes. The TM_Updated case used to be handled a bit differently in\nearlier unposted versions of the patch, though at some point I\nconcluded that the special handling was unnecessary, but didn't\nrealize what you just pointed out. Fixed.\n\n> I'll pause on reviewing v4 until you've addressed the suggestions above.\n\nHere's v5.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Sun, 24 Jan 2021 20:51:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Sun, Jan 24, 2021 at 6:51 AM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Sun, Jan 24, 2021 at 11:26 AM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n> > On Sat, Jan 23, 2021 at 12:52 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >>\n> >> Hi,\n>\n> Thanks for the review.\n>\n> >> + for (i = 0; i < riinfo->nkeys; i++)\n> >> + {\n> >> + Oid eq_opr = eq_oprs[i];\n> >> + Oid typeid = RIAttType(fk_rel, riinfo->fk_attnums[i]);\n> >> + RI_CompareHashEntry *entry = ri_HashCompareOp(eq_opr,\n> typeid);\n> >> +\n> >> + if (pk_nulls[i] != 'n' &&\n> OidIsValid(entry->cast_func_finfo.fn_oid))\n> >>\n> >> It seems the pk_nulls[i] != 'n' check can be lifted ahead of the\n> assignment to the three local variables. That way, ri_HashCompareOp\n> wouldn't be called when pk_nulls[i] == 'n'.\n>\n> Good idea, so done. 
Although, there can't be nulls right now.\n>\n> >> + case TM_Updated:\n> >> + if (IsolationUsesXactSnapshot())\n> ...\n> >> + case TM_Deleted:\n> >> + if (IsolationUsesXactSnapshot())\n> >>\n> >> It seems the handling for TM_Updated and TM_Deleted is the same. The\n> cases for these two values can be put next to each other (saving one block\n> of code).\n>\n> Ah, yes. The TM_Updated case used to be handled a bit differently in\n> earlier unposted versions of the patch, though at some point I\n> concluded that the special handling was unnecessary, but didn't\n> realize what you just pointed out. Fixed.\n>\n> > I'll pause on reviewing v4 until you've addressed the suggestions above.\n>\n> Here's v5.\n>\n\nv5 patches apply to master.\nSuggested If/then optimization is implemented.\nSuggested case merging is implemented.\nPasses make check and make check-world yet again.\nJust to confirm, we *don't* free the RI_CompareHashEntry because it points\nto an entry in a hash table which is TopMemoryContext aka lifetime of the\nsession, correct?\n\nAnybody else want to look this patch over before I mark it Ready For\nCommitter?", "msg_date": "Sun, 24 Jan 2021 19:24:00 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Hi, Amit-san,\n\nNice patch. 
I have confirmed that this solves the problem in [1] with\nINSERT/UPDATE.\n\nHEAD + patch\n name | bytes | pg_size_pretty\n------------------+-------+----------------\n CachedPlanQuery | 10280 | 10 kB\n CachedPlanSource | 14616 | 14 kB\n CachedPlan | 13168 | 13 kB ★ 710MB -> 13kB\n(3 rows)\n\n> > This patch completely sidesteps the DELETE case, which has more insidious performance implications, but is also far less common, and whose solution will likely be very different.\n>\n> Yeah, we should continue looking into the ways to make referenced-side\n> RI checks be less bloated.\n\nHowever, as already mentioned, the problem of memory bloat on DELETE remains.\nThis can be solved by the patch in [1], but I think it is too much to apply\nthis patch only for DELETE. What do you think?\n\n[1] https://www.postgresql.org/message-id/flat/cab4b85d-9292-967d-adf2-be0d803c3e23%40nttcom.co.jp_1\n\n-- \nKeisuke Kuroda\nNTT Software Innovation Center\nkeisuke.kuroda.3862@gmail.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 18:06:39 +0900", "msg_from": "Keisuke Kuroda <keisuke.kuroda.3862@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Mon, Jan 25, 2021 at 9:24 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n> On Sun, Jan 24, 2021 at 6:51 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> Here's v5.\n>\n> v5 patches apply to master.\n> Suggested If/then optimization is implemented.\n> Suggested case merging is implemented.\n> Passes make check and make check-world yet again.\n> Just to confirm, we don't free the RI_CompareHashEntry because it points to an entry in a hash table which is TopMemoryContext aka lifetime of the session, correct?\n\nRight.\n\n> Anybody else want to look this patch over before I mark it Ready For Committer?\n\nWould be nice to have others look it over. 
Thanks.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 18:19:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Kuroda-san,\n\nOn Mon, Jan 25, 2021 at 6:06 PM Keisuke Kuroda\n<keisuke.kuroda.3862@gmail.com> wrote:\n> Hi, Amit-san,\n>\n> Nice patch. I have confirmed that this solves the problem in [1] with\n> INSERT/UPDATE.\n\nThanks for testing.\n\n> HEAD + patch\n> name | bytes | pg_size_pretty\n> ------------------+-------+----------------\n> CachedPlanQuery | 10280 | 10 kB\n> CachedPlanSource | 14616 | 14 kB\n> CachedPlan | 13168 | 13 kB ★ 710MB -> 13kB\n> (3 rows)\n\nIf you only tested insert/update on the referencing table, I would've\nexpected to see nothing in the result of that query, because the patch\neliminates all use of SPI in that case. I suspect the CachedPlan*\nmemory contexts you are seeing belong to some early activity in the\nsession. So if you try the insert/update in a freshly started\nsession, you would see 0 rows in the result of that query.\n\n> > > This patch completely sidesteps the DELETE case, which has more insidious performance implications, but is also far less common, and whose solution will likely be very different.\n> >\n> > Yeah, we should continue looking into the ways to make referenced-side\n> > RI checks be less bloated.\n>\n> However, as already mentioned, the problem of memory bloat on DELETE remains.\n> This can be solved by the patch in [1], but I think it is too much to apply\n> this patch only for DELETE. What do you think?\n>\n> [1] https://www.postgresql.org/message-id/flat/cab4b85d-9292-967d-adf2-be0d803c3e23%40nttcom.co.jp_1\n\nHmm, the patch tries to solve a general problem that SPI plans are not\nbeing shared among partitions whereas they should be. So I don't\nthink that it's necessarily specific to DELETE. 
Until we have a\nsolution like the patch on this thread for DELETE, it seems fine to\nconsider the other patch as a stopgap solution.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 19:01:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Mon, Jan 25, 2021 at 7:01 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Jan 25, 2021 at 6:06 PM Keisuke Kuroda\n> <keisuke.kuroda.3862@gmail.com> wrote:\n> > However, as already mentioned, the problem of memory bloat on DELETE remains.\n> > This can be solved by the patch in [1], but I think it is too much to apply\n> > this patch only for DELETE. What do you think?\n> >\n> > [1] https://www.postgresql.org/message-id/flat/cab4b85d-9292-967d-adf2-be0d803c3e23%40nttcom.co.jp_1\n>\n> Hmm, the patch tries to solve a general problem that SPI plans are not\n> being shared among partitions whereas they should be. So I don't\n> think that it's necessarily specific to DELETE. Until we have a\n> solution like the patch on this thread for DELETE, it seems fine to\n> consider the other patch as a stopgap solution.\n\nForgot to mention one thing. 
Alvaro, in his last email on that\nthread, characterized that patch as fixing a bug, although I may have\nmisread that.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 20:04:41 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Hi Amit-san,\n\nOn 2021/01/25 18:19, Amit Langote wrote:\n> On Mon, Jan 25, 2021 at 9:24 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n>> On Sun, Jan 24, 2021 at 6:51 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>>> Here's v5.\n>>\n>> v5 patches apply to master.\n>> Suggested If/then optimization is implemented.\n>> Suggested case merging is implemented.\n>> Passes make check and make check-world yet again.\n>> Just to confirm, we don't free the RI_CompareHashEntry because it points to an entry in a hash table which is TopMemoryContext aka lifetime of the session, correct?\n> \n> Right.\n> \n>> Anybody else want to look this patch over before I mark it Ready For Committer?\n> \n> Would be nice to have others look it over. Thanks.\n\n\nThanks for creating the patch!\n\nI tried to review the patch. 
Here is my comment.\n\n* According to this thread [1], it might be better to replace elog() with\n ereport() in the patch.\n\n[1]: https://www.postgresql.org/message-id/flat/92d6f545-5102-65d8-3c87-489f71ea0a37%40enterprisedb.com\n\nThanks,\nTatsuro Yamada\n\n\n\n", "msg_date": "Wed, 27 Jan 2021 08:51:40 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Yamada-san,\n\nOn Wed, Jan 27, 2021 at 8:51 AM Tatsuro Yamada\n<tatsuro.yamada.tf@nttcom.co.jp> wrote:\n> On 2021/01/25 18:19, Amit Langote wrote:\n> > On Mon, Jan 25, 2021 at 9:24 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n> >> Anybody else want to look this patch over before I mark it Ready For Committer?\n> >\n> > Would be nice to have others look it over. Thanks.\n>\n> Thanks for creating the patch!\n>\n> I tried to review the patch. Here is my comment.\n\nThanks for the comment.\n\n> * According to this thread [1], it might be better to replace elog() with\n> ereport() in the patch.\n>\n> [1]: https://www.postgresql.org/message-id/flat/92d6f545-5102-65d8-3c87-489f71ea0a37%40enterprisedb.com\n\nCould you please tell which elog() of the following added by the patch\nyou are concerned about?\n\n+ case TM_Invisible:\n+ elog(ERROR, \"attempted to lock invisible tuple\");\n+ break;\n+\n+ case TM_SelfModified:\n+ case TM_BeingModified:\n+ case TM_WouldBlock:\n+ elog(ERROR, \"unexpected table_tuple_lock status: %u\", res);\n+ break;\n\n+ default:\n+ elog(ERROR, \"unrecognized table_tuple_lock status: %u\", res);\n\nAll of these are meant as debugging elog()s for cases that won't\nnormally occur. 
IIUC, the discussion\nat the linked thread excludes\nthose from consideration.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 27 Jan 2021 15:10:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Hi Amit-san,\n\n> + case TM_Invisible:\n> + elog(ERROR, \"attempted to lock invisible tuple\");\n> + break;\n> +\n> + case TM_SelfModified:\n> + case TM_BeingModified:\n> + case TM_WouldBlock:\n> + elog(ERROR, \"unexpected table_tuple_lock status: %u\", res);\n> + break;\n> \n> + default:\n> + elog(ERROR, \"unrecognized table_tuple_lock status: %u\", res);\n> \n> All of these are meant as debugging elog()s for cases that won't\n> normally occur. IIUC, the discussion at the linked thread excludes\n> those from consideration.\n\nThanks for your explanation.\nAh, I reread the thread, and I now realized that user visible log messages\nare the target to replace. I understood that those elog()s are for cases that won't\nnormally occur. Sorry for the noise.\n\nRegards,\nTatsuro Yamada", "msg_date": "Wed, 27 Jan 2021 16:07:01 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Hi Amit-san,\n\nThanks for the answer!\n\n> If you only tested insert/update on the referencing table, I would've\n> expected to see nothing in the result of that query, because the patch\n> eliminates all use of SPI in that case. I suspect the CachedPlan*\n> memory contexts you are seeing belong to some early activity in the\n> session. 
So if you try the insert/update in a freshly started\n> session, you would see 0 rows in the result of that query.\n\nThat's right.\nCREATE PARTITION TABLE included in the test script(rep.sql) was using SPI.\nIn a new session, I confirmed that CachedPlan is not generated when only\nexecute INSERT.\n\n# only execute INSERT\n\npostgres=# INSERT INTO ps SELECT generate_series(1,4999);\nINSERT 0 4999\npostgres=#\npostgres=# INSERT INTO pr SELECT i, i from generate_series(1,4999)i;\nINSERT 0 4999\n\npostgres=# SELECT name, sum(used_bytes) as bytes,\npg_size_pretty(sum(used_bytes)) FROM pg_backend_memory_contexts\nWHERE name LIKE 'Cached%' GROUP BY name;\n\n name | bytes | pg_size_pretty\n------+-------+----------------\n(0 rows) ★ No CachedPlan\n\n> Hmm, the patch tries to solve a general problem that SPI plans are not\n> being shared among partitions whereas they should be. So I don't\n> think that it's necessarily specific to DELETE. Until we have a\n> solution like the patch on this thread for DELETE, it seems fine to\n> consider the other patch as a stopgap solution.\n\nI see.\nSo this is a solution to the problem of using SPI plans in partitions,\nnot just DELETE.\nI agree with you, I think this is a solution to the current problem.\n\nBest Regards,\n\n\n\n-- \nKeisuke Kuroda\nNTT Software Innovation Center\nkeisuke.kuroda.3862@gmail.com\n\n\n", "msg_date": "Wed, 27 Jan 2021 16:58:44 +0900", "msg_from": "Keisuke Kuroda <keisuke.kuroda.3862@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "At Sun, 24 Jan 2021 20:51:39 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> Here's v5.\n\nAt Mon, 25 Jan 2021 18:19:56 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> > Anybody else want to look this patch over before I mark it Ready For Committer?\n> \n> Would be nice to have others look it over. 
Thanks.\n\nThis is a nice improvement.\n\n0001 just looks fine.\n\n0002:\n\n /* RI query type codes */\n-/* these queries are executed against the PK (referenced) table: */\n+/*\n+ * 1 and 2 are no longer used, because PK (referenced) table is looked up\n+ * directly using ri_ReferencedKeyExists().\n #define RI_PLAN_CHECK_LOOKUPPK\t\t\t1\n #define RI_PLAN_CHECK_LOOKUPPK_FROM_PK\t2\n #define RI_PLAN_LAST_ON_PK\t\t\t\tRI_PLAN_CHECK_LOOKUPPK_FROM_PK\n\nHowever, this patch does.\n\n+\tif (!ri_ReferencedKeyExists(pk_rel, fk_rel, newslot, riinfo))\n+\t\tri_ReportViolation(riinfo,\n+\t\t\t\t\t\t pk_rel, fk_rel,\n+\t\t\t\t\t\t newslot,\n+\t\t\t\t\t\t NULL,\n+\t\t\t\t\t\t RI_PLAN_CHECK_LOOKUPPK, false);\n\nIt seems to me 1 (RI_PLAN_CHECK_LOOKUPPK) is still alive. (Yeah, I\nknow that doesn't mean the usefulness of the macro but the mechanism\nthe macro suggests, but it is confusing.) On the other hand,\nRI_PLAN_CHECK_LOOKUPPK_FROM_PK and RI_PLAN_LAST_ON_PK seem to be no\nlonger used. (Couldn't we remove them?)\n\n(about the latter, we can rewrite the only use of it \"if\n(qkey->constr_queryno <= RI_PLAN_LAST_ON_PK)\" not to use the macro.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 27 Jan 2021 17:32:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Wed, Jan 27, 2021 at 5:32 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Sun, 24 Jan 2021 20:51:39 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > Here's v5.\n>\n> At Mon, 25 Jan 2021 18:19:56 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > > Anybody else want to look this patch over before I mark it Ready For Committer?\n> >\n> > Would be nice to have others look it over. 
Thanks.\n>\n> This nice improvement.\n>\n> 0001 just looks fine.\n>\n> 0002:\n>\n> /* RI query type codes */\n> -/* these queries are executed against the PK (referenced) table: */\n> +/*\n> + * 1 and 2 are no longer used, because PK (referenced) table is looked up\n> + * directly using ri_ReferencedKeyExists().\n> #define RI_PLAN_CHECK_LOOKUPPK 1\n> #define RI_PLAN_CHECK_LOOKUPPK_FROM_PK 2\n> #define RI_PLAN_LAST_ON_PK RI_PLAN_CHECK_LOOKUPPK_FROM_PK\n>\n> However, this patch does.\n>\n> + if (!ri_ReferencedKeyExists(pk_rel, fk_rel, newslot, riinfo))\n> + ri_ReportViolation(riinfo,\n> + pk_rel, fk_rel,\n> + newslot,\n> + NULL,\n> + RI_PLAN_CHECK_LOOKUPPK, false);\n>\n> It seems to me 1 (RI_PLAN_CHECK_LOOKUPPK) is still alive. (Yeah, I\n> know that doesn't mean the usefulness of the macro but the mechanism\n> the macro suggests, but it is confusing.) On the other hand,\n> RI_PLAN_CHECK_LOOKUPPK_FROM_PK and RI_PLAN_LAST_ON_PK seem to be no\n> longer used. (Couldn't we remove them?)\n\nYeah, better to just remove those _PK macros and say this module no\nlonger runs any queries on the PK table.\n\nHow about the attached?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 27 Jan 2021 22:02:35 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": ">\n> > It seems to me 1 (RI_PLAN_CHECK_LOOKUPPK) is still alive. (Yeah, I\n> > know that doesn't mean the usefulness of the macro but the mechanism\n> > the macro suggests, but it is confusing.) On the other hand,\n> > RI_PLAN_CHECK_LOOKUPPK_FROM_PK and RI_PLAN_LAST_ON_PK seem to be no\n> > longer used. 
(Couldn't we remove them?)\n>\n> Yeah, better to just remove those _PK macros and say this module no\n> longer runs any queries on the PK table.\n>\n> How about the attached?\n>\n>\nSorry for the delay.\nI see that the changes were made as described.\nPasses make check and make check-world yet again.\nI'm marking this Ready For Committer unless someone objects.", "msg_date": "Mon, 1 Mar 2021 01:13:47 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Mon, Mar 1, 2021 at 3:14 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n>> > It seems to me 1 (RI_PLAN_CHECK_LOOKUPPK) is still alive. (Yeah, I\n>> > know that doesn't mean the usefulness of the macro but the mechanism\n>> > the macro suggests, but it is confusing.) On the other hand,\n>> > RI_PLAN_CHECK_LOOKUPPK_FROM_PK and RI_PLAN_LAST_ON_PK seem to be no\n>> > longer used. 
(Couldn't we remove them?)\n>>\n>> Yeah, better to just remove those _PK macros and say this module no\n>> longer runs any queries on the PK table.\n>>\n>> How about the attached?\n>>\n>\n> Sorry for the delay.\n> I see that the changes were made as described.\n> Passes make check and make check-world yet again.\n> I'm marking this Ready For Committer unless someone objects.\n\nThank you Corey for the review.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Mar 2021 11:18:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Hi amit,\r\n\r\n(sorry about not cc the hacker list)\r\nI have an issue about command id here.\r\nIt's probably not directly related to your patch, so I am sorry if it bothers you.\r\n\r\n+\t/*\r\n+\t * Start the scan. To make the changes of the current command visible to\r\n+\t * the scan and for subsequent locking of the tuple (if any) found,\r\n+\t * increment the command counter.\r\n+\t */\r\n+\tCommandCounterIncrement();\r\n\r\nFor insert on fk relation, is it necessary to create new command id every time ?\r\nI think it is only necessary when it modifies the referenced table.\r\nfor example: 1) has modifyingcte\r\n 2) has modifying function(trigger/domain...)\r\n\r\nAll of the above seems not supported in parallel mode(parallel unsafe).\r\nSo I was wondering if we can avoid the CommandCounterIncrement in parallel mode.\r\n\r\nBest regards,\r\nhouzj\r\n", "msg_date": "Tue, 2 Mar 2021 07:51:57 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: simplifying foreign key/RI checks" }, { "msg_contents": "I took a quick look at this. I guess I'm disturbed by the idea\nthat we'd totally replace the implementation technology for only one\nvariant of foreign key checks. 
That means that there'll be a lot\nof minor details that don't act the same depending on context. One\npoint I was just reminded of by [1] is that the SPI approach enforces\npermissions checks on the table access, which I do not see being done\nanywhere in your patch. Now, maybe it's fine not to have such checks,\non the grounds that the existence of the RI constraint is sufficient\npermission (the creator had to have REFERENCES permission to make it).\nBut I'm not sure about that. Should we add SELECT permissions checks\nto this code path to make it less different?\n\nIn the same vein, the existing code actually runs the query as the\ntable owner (cf. SetUserIdAndSecContext in ri_PerformCheck), another\nnicety you haven't bothered with. Maybe that is invisible for a\npure SELECT query but I'm not sure I would bet on it. At the very\nleast you're betting that the index-related operators you invoke\naren't going to care, and that nobody is going to try to use this\ndifference to create a security exploit via a trojan-horse index.\n\nShall we mention RLS restrictions? If we don't worry about that,\nI think REFERENCES privilege becomes a full bypass of RLS, at\nleast for unique-key columns.\n\nI wonder also what happens if the referenced table isn't a plain\nheap with a plain btree index. Maybe you're accessing it at the\nright level of abstraction so things will just work with some\nother access methods, but I'm not sure about that. (Anybody\nwant to try this with a partitioned table some of whose partitions\nare foreign tables?)\n\nLastly, ri_PerformCheck is pretty careful about not only which\nsnapshot it uses, but which *pair* of snapshots it uses, because\nsometimes it needs to worry about data changes since the start\nof the transaction. You've ignored all of that complexity AFAICS.\nThat's okay (I think) for RI_FKey_check which was passing\ndetectNewRows = false, but for sure it's not okay for\nri_Check_Pk_Match. 
(I kind of thought we had isolation tests\nthat would catch that, but apparently not.)\n\nSo, this is a cute idea, and the speedup is pretty impressive,\nbut I don't think it's anywhere near committable. I also wonder\nwhether we really want ri_triggers.c having its own copy of\nlow-level stuff like the tuple-locking code you copied. Seems\nlike a likely maintenance hazard, so maybe some more refactoring\nis needed.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/16911-ca792f6bbe244754%40postgresql.org\n\n\n", "msg_date": "Wed, 03 Mar 2021 15:15:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Thu, Mar 4, 2021 at 5:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I took a quick look at this.\n\nThanks a lot for the review.\n\n> I guess I'm disturbed by the idea\n> that we'd totally replace the implementation technology for only one\n> variant of foreign key checks. That means that there'll be a lot\n> of minor details that don't act the same depending on context. One\n> point I was just reminded of by [1] is that the SPI approach enforces\n> permissions checks on the table access, which I do not see being done\n> anywhere in your patch. Now, maybe it's fine not to have such checks,\n> on the grounds that the existence of the RI constraint is sufficient\n> permission (the creator had to have REFERENCES permission to make it).\n> But I'm not sure about that. Should we add SELECT permissions checks\n> to this code path to make it less different?\n>\n> In the same vein, the existing code actually runs the query as the\n> table owner (cf. SetUserIdAndSecContext in ri_PerformCheck), another\n> nicety you haven't bothered with. Maybe that is invisible for a\n> pure SELECT query but I'm not sure I would bet on it. 
At the very\n> least you're betting that the index-related operators you invoke\n> aren't going to care, and that nobody is going to try to use this\n> difference to create a security exploit via a trojan-horse index.\n\nHow about we do at the top of ri_ReferencedKeyExists() what\nri_PerformCheck() always does before executing a query, which is this:\n\n /* Switch to proper UID to perform check as */\n GetUserIdAndSecContext(&save_userid, &save_sec_context);\n SetUserIdAndSecContext(RelationGetForm(query_rel)->relowner,\n save_sec_context | SECURITY_LOCAL_USERID_CHANGE |\n SECURITY_NOFORCE_RLS);\n\nAnd then also check the permissions of the switched user on the scan\ntarget relation's schema (ACL_USAGE) and the relation itself\n(ACL_SELECT).\n\nIOW, this:\n\n+ Oid save_userid;\n+ int save_sec_context;\n+ AclResult aclresult;\n+\n+ /* Switch to proper UID to perform check as */\n+ GetUserIdAndSecContext(&save_userid, &save_sec_context);\n+ SetUserIdAndSecContext(RelationGetForm(pk_rel)->relowner,\n+ save_sec_context | SECURITY_LOCAL_USERID_CHANGE |\n+ SECURITY_NOFORCE_RLS);\n+\n+ /* Check namespace permissions. */\n+ aclresult = pg_namespace_aclcheck(RelationGetNamespace(pk_rel),\n+ GetUserId(), ACL_USAGE);\n+ if (aclresult != ACLCHECK_OK)\n+ aclcheck_error(aclresult, OBJECT_SCHEMA,\n+ get_namespace_name(RelationGetNamespace(pk_rel)));\n+ /* Check the user has SELECT permissions on the referenced relation. */\n+ aclresult = pg_class_aclcheck(RelationGetRelid(pk_rel), GetUserId(),\n+ ACL_SELECT);\n+ if (aclresult != ACLCHECK_OK)\n+ aclcheck_error(aclresult, OBJECT_TABLE,\n+ RelationGetRelationName(pk_rel));\n\n /*\n * Extract the unique key from the provided slot and choose the equality\n@@ -414,6 +436,9 @@ ri_ReferencedKeyExists(Relation pk_rel, Relation fk_rel,\n index_endscan(scan);\n ExecDropSingleTupleTableSlot(outslot);\n\n+ /* Restore UID and security context */\n+ SetUserIdAndSecContext(save_userid, save_sec_context);\n+\n /* Don't release lock until commit. 
*/\n index_close(idxrel, NoLock);\n\n> Shall we mention RLS restrictions? If we don't worry about that,\n> I think REFERENCES privilege becomes a full bypass of RLS, at\n> least for unique-key columns.\n\nSeeing what check_enable_rls() does when running under the security\ncontext set by ri_PerformCheck(), it indeed seems that RLS is bypassed\nwhen executing these RI queries. The following comment in\ncheck_enable_rls() seems to say so:\n\n * InNoForceRLSOperation indicates that we should not apply RLS even\n * if the table has FORCE RLS set - IF the current user is the owner.\n * This is specifically to ensure that referential integrity checks\n * are able to still run correctly.\n\n> I wonder also what happens if the referenced table isn't a plain\n> heap with a plain btree index. Maybe you're accessing it at the\n> right level of abstraction so things will just work with some\n> other access methods, but I'm not sure about that.\n\nI believe that I've made ri_ReferencedKeyExists() use the appropriate\nAPIs to scan the index, lock the returned table tuple, etc., but do\nyou think we might be better served by introducing a new set of APIs\nfor this use case?\n\n> (Anybody\n> want to try this with a partitioned table some of whose partitions\n> are foreign tables?)\n\nPartitioned tables with foreign table partitions cannot be referenced\nin a foreign key, so cannot appear in this function. That's because\nunique constraints are not allowed when there are foreign table\npartitions.\n\n> Lastly, ri_PerformCheck is pretty careful about not only which\n> snapshot it uses, but which *pair* of snapshots it uses, because\n> sometimes it needs to worry about data changes since the start\n> of the transaction. You've ignored all of that complexity AFAICS.\n> That's okay (I think) for RI_FKey_check which was passing\n> detectNewRows = false, but for sure it's not okay for\n> ri_Check_Pk_Match. 
(I kind of thought we had isolation tests\n> that would catch that, but apparently not.)\n\nOkay, let me closely check the ri_Check_Pk_Match() case and see if\nthere's any live bug.\n\n> So, this is a cute idea, and the speedup is pretty impressive,\n> but I don't think it's anywhere near committable. I also wonder\n> whether we really want ri_triggers.c having its own copy of\n> low-level stuff like the tuple-locking code you copied. Seems\n> like a likely maintenance hazard, so maybe some more refactoring\n> is needed.\n\nOkay, I will see if there's a way to avoid copying too much code.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Mar 2021 23:41:32 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Mon, Mar 8, 2021 at 11:41 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Mar 4, 2021 at 5:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Lastly, ri_PerformCheck is pretty careful about not only which\n> > snapshot it uses, but which *pair* of snapshots it uses, because\n> > sometimes it needs to worry about data changes since the start\n> > of the transaction. You've ignored all of that complexity AFAICS.\n> > That's okay (I think) for RI_FKey_check which was passing\n> > detectNewRows = false, but for sure it's not okay for\n> > ri_Check_Pk_Match. (I kind of thought we had isolation tests\n> > that would catch that, but apparently not.)\n>\n> Okay, let me closely check the ri_Check_Pk_Match() case and see if\n> there's any live bug.\n\nI checked, and AFAICS, the query invoked by ri_Check_Pk_Match() (that\nis, without the patch) does not use the \"crosscheck\" snapshot at any\npoint during its execution. 
That snapshot is only used in the\ntable_update() and table_delete() routines, which are not involved in\nthe execution of ri_Check_Pk_Match()'s query.\n\nI dug through git history and -hackers archives to understand the\norigins of RI code's use of a crosscheck snapshot and came across this\ndiscussion:\n\nhttps://www.postgresql.org/message-id/20031001150510.U45145%40megazone.bigpanda.com\n\nIf I am reading the discussion and the details in subsequent commit\n55d85f42a891a correctly, the crosscheck snapshot is only to be used to\nensure, under serializable isolation, that any attempts by the RI\nquery of updating/deleting rows that are not visible to the\ntransaction snapshot cause a serialization error. Use of the same\nfacilities in ri_Check_Pk_Match() was merely done as future-proofing,\nwith no particular use case to address, then and perhaps even now.\n\nIf that is indeed the case, it does not seem particularly incorrect\nfor ri_ReferencedKeyExists() added by the patch to not bother with\nsetting up a crosscheck snapshot, even when called from\nri_Check_Pk_Match(). Am I missing something?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Mar 2021 22:37:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Mon, Mar 8, 2021 at 11:41 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Mar 4, 2021 at 5:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I guess I'm disturbed by the idea\n> > that we'd totally replace the implementation technology for only one\n> > variant of foreign key checks. That means that there'll be a lot\n> > of minor details that don't act the same depending on context. One\n> > point I was just reminded of by [1] is that the SPI approach enforces\n> > permissions checks on the table access, which I do not see being done\n> > anywhere in your patch. 
Now, maybe it's fine not to have such checks,\n> > on the grounds that the existence of the RI constraint is sufficient\n> > permission (the creator had to have REFERENCES permission to make it).\n> > But I'm not sure about that. Should we add SELECT permissions checks\n> > to this code path to make it less different?\n> >\n> > In the same vein, the existing code actually runs the query as the\n> > table owner (cf. SetUserIdAndSecContext in ri_PerformCheck), another\n> > nicety you haven't bothered with. Maybe that is invisible for a\n> > pure SELECT query but I'm not sure I would bet on it. At the very\n> > least you're betting that the index-related operators you invoke\n> > aren't going to care, and that nobody is going to try to use this\n> > difference to create a security exploit via a trojan-horse index.\n>\n> How about we do at the top of ri_ReferencedKeyExists() what\n> ri_PerformCheck() always does before executing a query, which is this:\n>\n> /* Switch to proper UID to perform check as */\n> GetUserIdAndSecContext(&save_userid, &save_sec_context);\n> SetUserIdAndSecContext(RelationGetForm(query_rel)->relowner,\n> save_sec_context | SECURITY_LOCAL_USERID_CHANGE |\n> SECURITY_NOFORCE_RLS);\n>\n> And then also check the permissions of the switched user on the scan\n> target relation's schema (ACL_USAGE) and the relation itself\n> (ACL_SELECT).\n>\n> IOW, this:\n>\n> + Oid save_userid;\n> + int save_sec_context;\n> + AclResult aclresult;\n> +\n> + /* Switch to proper UID to perform check as */\n> + GetUserIdAndSecContext(&save_userid, &save_sec_context);\n> + SetUserIdAndSecContext(RelationGetForm(pk_rel)->relowner,\n> + save_sec_context | SECURITY_LOCAL_USERID_CHANGE |\n> + SECURITY_NOFORCE_RLS);\n> +\n> + /* Check namespace permissions. 
*/\n> + aclresult = pg_namespace_aclcheck(RelationGetNamespace(pk_rel),\n> + GetUserId(), ACL_USAGE);\n> + if (aclresult != ACLCHECK_OK)\n> + aclcheck_error(aclresult, OBJECT_SCHEMA,\n> + get_namespace_name(RelationGetNamespace(pk_rel)));\n> + /* Check the user has SELECT permissions on the referenced relation. */\n> + aclresult = pg_class_aclcheck(RelationGetRelid(pk_rel), GetUserId(),\n> + ACL_SELECT);\n> + if (aclresult != ACLCHECK_OK)\n> + aclcheck_error(aclresult, OBJECT_TABLE,\n> + RelationGetRelationName(pk_rel));\n>\n> /*\n> * Extract the unique key from the provided slot and choose the equality\n> @@ -414,6 +436,9 @@ ri_ReferencedKeyExists(Relation pk_rel, Relation fk_rel,\n> index_endscan(scan);\n> ExecDropSingleTupleTableSlot(outslot);\n>\n> + /* Restore UID and security context */\n> + SetUserIdAndSecContext(save_userid, save_sec_context);\n> +\n> /* Don't release lock until commit. */\n> index_close(idxrel, NoLock);\n\nI've included these changes in the updated patch.\n\n> > Shall we mention RLS restrictions? If we don't worry about that,\n> > I think REFERENCES privilege becomes a full bypass of RLS, at\n> > least for unique-key columns.\n>\n> Seeing what check_enable_rls() does when running under the security\n> context set by ri_PerformCheck(), it indeed seems that RLS is bypassed\n> when executing these RI queries. The following comment in\n> check_enable_rls() seems to say so:\n>\n> * InNoForceRLSOperation indicates that we should not apply RLS even\n> * if the table has FORCE RLS set - IF the current user is the owner.\n> * This is specifically to ensure that referential integrity checks\n> * are able to still run correctly.\n\nI've added a comment to note that the new way of \"selecting\" the\nreferenced tuple effectively bypasses RLS, as is the case when\nselecting via SPI.\n\n> > I wonder also what happens if the referenced table isn't a plain\n> > heap with a plain btree index. 
Maybe you're accessing it at the\n> > right level of abstraction so things will just work with some\n> > other access methods, but I'm not sure about that.\n>\n> I believe that I've made ri_ReferencedKeyExists() use the appropriate\n> APIs to scan the index, lock the returned table tuple, etc., but do\n> you think we might be better served by introducing a new set of APIs\n> for this use case?\n\nI concur that by using the interfaces defined in genam.h and\ntableam.h, patch accounts for cases involving other access methods.\n\nThat said, I had overlooked one bit in the new code that is specific\nto btree AM, which is the hard-coding of BTEqualStrategyNumber in the\nfollowing:\n\n /* Initialize the scankey. */\n ScanKeyInit(&skey[i],\n pkattno,\n BTEqualStrategyNumber,\n regop,\n pk_vals[i]);\n\nIn the updated patch, I've added code to look up the index-specific\nstrategy number to pass here.\n\n> > Lastly, ri_PerformCheck is pretty careful about not only which\n> > snapshot it uses, but which *pair* of snapshots it uses, because\n> > sometimes it needs to worry about data changes since the start\n> > of the transaction. You've ignored all of that complexity AFAICS.\n> > That's okay (I think) for RI_FKey_check which was passing\n> > detectNewRows = false, but for sure it's not okay for\n> > ri_Check_Pk_Match. (I kind of thought we had isolation tests\n> > that would catch that, but apparently not.)\n>\n> Okay, let me closely check the ri_Check_Pk_Match() case and see if\n> there's any live bug.\n\nAs mentioned in my earlier reply, there doesn't seem to be a need for\nri_Check_Pk_Match() to set the crosscheck snapshot as it is basically\nunused.\n\n> > So, this is a cute idea, and the speedup is pretty impressive,\n> > but I don't think it's anywhere near committable. I also wonder\n> > whether we really want ri_triggers.c having its own copy of\n> > low-level stuff like the tuple-locking code you copied. 
Seems\n> > like a likely maintenance hazard, so maybe some more refactoring\n> > is needed.\n>\n> Okay, I will see if there's a way to avoid copying too much code.\n\nI thought sharing the tuple-locking code with ExecLockRows(), which\nseemed closest in semantics to what the new code is doing, might not\nbe such a bad idea, but not sure I came up with a great interface for\nthe shared function. Actually, there are other places having their\nown copies of tuple-locking logic, but they deal with the locking\nresult in their own unique ways, so I didn't get excited about finding\na way to make the new function accommodate their needs. I also admit\nthat I may have totally misunderstood what refactoring you were\nreferring to in your comment.\n\nUpdated patches attached. Sorry about the delay.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 20 Mar 2021 22:21:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Sat, Mar 20, 2021 at 10:21 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Updated patches attached. Sorry about the delay.\n\nRebased over the recent DETACH PARTITION CONCURRENTLY work.\nApparently, ri_ReferencedKeyExists() was using the wrong snapshot.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 2 Apr 2021 21:46:18 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Hi,\n\n+ skip = !ExecLockTableTuple(erm->relation, &tid, markSlot,\n+ estate->es_snapshot,\nestate->es_output_cid,\n+ lockmode, erm->waitPolicy, &epq_needed);\n+ if (skip)\n\nIt seems the variable skip is only used above. 
The variable is not needed -\nif statement can directly check the return value.\n\n+ * Locks tuple with given TID with given lockmode following given\nwait\n\ngiven appears three times in the above sentence. Maybe the following is bit\neasier to read:\n\nLocks tuple with the specified TID, lockmode following given wait policy\n\n+ * Checks whether a tuple containing the same unique key as extracted from\nthe\n+ * tuple provided in 'slot' exists in 'pk_rel'.\n\nI think 'same' is not needed here since the remaining part of the sentence\nhas adequately identified the key.\n\n+ if (leaf_pk_rel == NULL)\n+ goto done;\n\nIt would be better to avoid goto by including the cleanup statements in the\nif block and return.\n\n+ if (index_getnext_slot(scan, ForwardScanDirection, outslot))\n+ found = true;\n+\n+ /* Found tuple, try to lock it in key share mode. */\n+ if (found)\n\nSince found is only assigned in one place, the two if statements can be\ncombined into one.\n\nCheers\n\nOn Fri, Apr 2, 2021 at 5:46 AM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> On Sat, Mar 20, 2021 at 10:21 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > Updated patches attached. Sorry about the delay.\n>\n> Rebased over the recent DETACH PARTITION CONCURRENTLY work.\n> Apparently, ri_ReferencedKeyExists() was using the wrong snapshot.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>
", "msg_date": "Fri, 2 Apr 2021 07:58:36 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On 2021-Apr-02, Amit Langote wrote:\n\n> On Sat, Mar 20, 2021 at 10:21 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Updated patches attached. Sorry about the delay.\n> \n> Rebased over the recent DETACH PARTITION CONCURRENTLY work.\n> Apparently, ri_ReferencedKeyExists() was using the wrong snapshot.\n\nHmm, I wonder if that stuff should be using a PartitionDirectory? (I\ndidn't actually understand what your code is doing, so please forgive if\nthis is a silly question.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"After a quick R of TFM, all I can say is HOLY CR** THAT IS COOL! 
PostgreSQL was\namazing when I first started using it at 7.2, and I'm continually astounded by\nlearning new features and techniques made available by the continuing work of\nthe development team.\"\nBerend Tober, http://archives.postgresql.org/pgsql-hackers/2007-08/msg01009.php\n\n\n", "msg_date": "Fri, 2 Apr 2021 12:01:02 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Hi Alvaro,\n\nOn Sat, Apr 3, 2021 at 12:01 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Apr-02, Amit Langote wrote:\n>\n> > On Sat, Mar 20, 2021 at 10:21 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Updated patches attached. Sorry about the delay.\n> >\n> > Rebased over the recent DETACH PARTITION CONCURRENTLY work.\n> > Apparently, ri_ReferencedKeyExists() was using the wrong snapshot.\n>\n> Hmm, I wonder if that stuff should be using a PartitionDirectory? (I\n> didn't actually understand what your code is doing, so please forgive if\n> this is a silly question.)\n\nNo problem, I wondered about that too when rebasing.\n\nMy instinct *was* that maybe there's no need for it, because\nfind_leaf_pk_rel()'s use of a PartitionDesc is limited enough in\nduration, and in the kind of things it calls, that there's no need\nto worry about it getting invalidated while in use. 
But I may be\nwrong about that, because get_partition_for_tuple() can call arbitrary\nuser-defined functions, which may result in invalidation messages\nbeing processed and an unguarded PartitionDesc getting wiped out under\nus.\n\nSo, I've added PartitionDirectory protection in find_leaf_pk_rel() in\nthe attached updated version.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Sun, 4 Apr 2021 17:19:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Fri, Apr 2, 2021 at 11:55 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n>\n> + skip = !ExecLockTableTuple(erm->relation, &tid, markSlot,\n> + estate->es_snapshot, estate->es_output_cid,\n> + lockmode, erm->waitPolicy, &epq_needed);\n> + if (skip)\n>\n> It seems the variable skip is only used above. The variable is not needed - if statement can directly check the return value.\n>\n> + * Locks tuple with given TID with given lockmode following given wait\n>\n> given appears three times in the above sentence. Maybe the following is bit easier to read:\n>\n> Locks tuple with the specified TID, lockmode following given wait policy\n>\n> + * Checks whether a tuple containing the same unique key as extracted from the\n> + * tuple provided in 'slot' exists in 'pk_rel'.\n>\n> I think 'same' is not needed here since the remaining part of the sentence has adequately identified the key.\n>\n> + if (leaf_pk_rel == NULL)\n> + goto done;\n>\n> It would be better to avoid goto by including the cleanup statements in the if block and return.\n>\n> + if (index_getnext_slot(scan, ForwardScanDirection, outslot))\n> + found = true;\n> +\n> + /* Found tuple, try to lock it in key share mode. */\n> + if (found)\n>\n> Since found is only assigned in one place, the two if statements can be combined into one.\n\nThanks for taking a look. 
I agree with most of your suggestions and\nhave incorporated them in the v8 just posted.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 4 Apr 2021 17:20:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Sun, Apr 4, 2021 at 1:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, Apr 2, 2021 at 11:55 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > Hi,\n> >\n> > + skip = !ExecLockTableTuple(erm->relation, &tid, markSlot,\n> > + estate->es_snapshot, estate->es_output_cid,\n> > + lockmode, erm->waitPolicy, &epq_needed);\n> > + if (skip)\n> >\n> > It seems the variable skip is only used above. The variable is not needed - if statement can directly check the return value.\n> >\n> > + * Locks tuple with given TID with given lockmode following given wait\n> >\n> > given appears three times in the above sentence. Maybe the following is bit easier to read:\n> >\n> > Locks tuple with the specified TID, lockmode following given wait policy\n> >\n> > + * Checks whether a tuple containing the same unique key as extracted from the\n> > + * tuple provided in 'slot' exists in 'pk_rel'.\n> >\n> > I think 'same' is not needed here since the remaining part of the sentence has adequately identified the key.\n> >\n> > + if (leaf_pk_rel == NULL)\n> > + goto done;\n> >\n> > It would be better to avoid goto by including the cleanup statements in the if block and return.\n> >\n> > + if (index_getnext_slot(scan, ForwardScanDirection, outslot))\n> > + found = true;\n> > +\n> > + /* Found tuple, try to lock it in key share mode. */\n> > + if (found)\n> >\n> > Since found is only assigned in one place, the two if statements can be combined into one.\n>\n> Thanks for taking a look. 
I agree with most of your suggestions and\n> have incorporated them in the v8 just posted.\n\nThe 2nd patch does not apply on Head, please post a rebased version:\nerror: patch failed: src/backend/utils/adt/ri_triggers.c:337\nerror: src/backend/utils/adt/ri_triggers.c: patch does not apply\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 5 Jul 2021 22:26:16 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Tue, Jul 6, 2021 at 1:56 AM vignesh C <vignesh21@gmail.com> wrote:\n> The 2nd patch does not apply on Head, please post a rebased version:\n> error: patch failed: src/backend/utils/adt/ri_triggers.c:337\n> error: src/backend/utils/adt/ri_triggers.c: patch does not apply\n\nThanks for the heads up.\n\nRebased patches attached.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 6 Jul 2021 10:48:38 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": ">\n> Rebased patches attached.\n\n\nI'm reviewing the changes since v6, which was my last review.\n\nMaking ExecLockTableTuple() it's own function makes sense.\nSnapshots are now accounted for.\nThe changes that account for n-level partitioning makes sense as well.\n\nPasses make check-world.\nNot user facing, so no user documentation required.\nMarking as ready for committer again.", "msg_date": "Mon, 30 Aug 2021 00:36:12 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 
simplifying foreign key/RI checks" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Rebased patches attached.\n\nI've spent some more time digging into the snapshot-management angle.\nI think you are right that the crosscheck_snapshot isn't really an\nissue because the executor pays no attention to it for SELECT, but\nthat doesn't mean that there's no problem, because the test_snapshot\nbehavior is different too. By my reading of it, the intention of the\nexisting code is to insist that when IsolationUsesXactSnapshot()\nis true and we *weren't* saying detectNewRows, the query should be\nrestricted to only see rows visible to the transaction snapshot.\nWhich I think is proper: an RR transaction shouldn't be allowed to\ninsert referencing rows that depend on a referenced row it can't see.\nOn the other hand, it's reasonable for ri_Check_Pk_Match to use\ndetectNewRows=true, because in that case what we're doing is allowing\nan RR transaction to depend on the continued existence of a PK value\nthat was deleted and replaced since the start of its transaction.\n\nIt appears to me that commit 71f4c8c6f (DETACH PARTITION CONCURRENTLY)\nbroke the semantics here, because now things work differently with a\npartitioned PK table than with a plain table, thanks to not bothering\nto distinguish questions of how to handle partition detachment from\nquestions of visibility of individual data tuples. 
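To make that distinction concrete, here is a toy model (made-up names;
not PostgreSQL code) of the two questions, each judged by the snapshot
it calls for:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the two distinct questions (made-up names; not
 * PostgreSQL code).  An xid below the snapshot's xmax is visible to
 * the snapshot; 0 means "never happened".
 */
typedef struct
{
	unsigned int xmax;
} Snapshot;

static bool
xid_visible(const Snapshot *s, unsigned int xid)
{
	return xid != 0 && xid < s->xmax;
}

/*
 * Question 1: may this PK row satisfy the FK check?  Under REPEATABLE
 * READ this must be judged by the transaction snapshot, so a row
 * inserted after that snapshot was taken is not accepted.
 */
static bool
pk_row_satisfies_fk(const Snapshot *txn_snap, unsigned int row_xmin)
{
	return xid_visible(txn_snap, row_xmin);
}

/*
 * Question 2: is this partition still attached?  A concurrent DETACH
 * should be honored promptly, i.e. judged by a fresh snapshot.
 */
static bool
partition_attached(const Snapshot *fresh_snap, unsigned int detach_xid)
{
	return !xid_visible(fresh_snap, detach_xid);
}
```

The change at issue effectively answers both questions with the same
fresh snapshot, which gives up the transaction-snapshot behavior for
row visibility.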
We evidently\nhaven't got test coverage for this :-(, which is perhaps not so\nsurprising because all this behavior long predates the isolationtester\ninfrastructure that would've allowed us to test it mechanically.\n\nAnyway, I think that (1) we should write some more test cases around\nthis behavior, (2) you need to establish the snapshot to use in two\ndifferent ways for the RI_FKey_check and ri_Check_Pk_Match cases,\nand (3) something's going to have to be done to repair the behavior\nin v14 (unless we want to back-patch this into v14, which seems a\nbit scary).\n\nIt looks like you've addressed the other complaints I raised back in\nMarch, so that's forward progress anyway. I do still find myself a\nbit dissatisfied with the code factorization, because it seems like\nfind_leaf_pk_rel() doesn't belong here but rather in some partitioning\nmodule. OTOH, if that means exposing RI_ConstraintInfo to the world,\nthat wouldn't be nice either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Nov 2021 18:19:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On 2021-Nov-11, Tom Lane wrote:\n\n> It appears to me that commit 71f4c8c6f (DETACH PARTITION CONCURRENTLY)\n> broke the semantics here, because now things work differently with a\n> partitioned PK table than with a plain table, thanks to not bothering\n> to distinguish questions of how to handle partition detachment from\n> questions of visibility of individual data tuples. 
We evidently\n> haven't got test coverage for this :-(, which is perhaps not so\n> surprising because all this behavior long predates the isolationtester\n> infrastructure that would've allowed us to test it mechanically.\n> \n> Anyway, I think that (1) we should write some more test cases around\n> this behavior, (2) you need to establish the snapshot to use in two\n> different ways for the RI_FKey_check and ri_Check_Pk_Match cases,\n> and (3) something's going to have to be done to repair the behavior\n> in v14 (unless we want to back-patch this into v14, which seems a\n> bit scary).\n\nI think we (I) should definitely pursue fixing whatever was broken by\nDETACH CONCURRENTLY, back to pg14, independently of this patch ... but\nI would appreciate some insight into what the problem is.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n\n\n", "msg_date": "Thu, 11 Nov 2021 20:38:50 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I think we (I) should definitely pursue fixing whatever was broken by\n> DETACH CONCURRENTLY, back to pg14, independently of this patch ... 
but\n> I would appreciate some insight into what the problem is.\n\nHere's what I'm on about:\n\nregression=# create table pk (f1 int primary key);\nCREATE TABLE\nregression=# insert into pk values(1);\nINSERT 0 1\nregression=# create table fk (f1 int references pk);\nCREATE TABLE\nregression=# begin isolation level repeatable read ;\nBEGIN\nregression=*# select * from pk; -- to establish xact snapshot\n f1 \n----\n 1\n(1 row)\n\nnow, in another session, do:\n\nregression=# insert into pk values(2);\nINSERT 0 1\n\nback at the RR transaction, we can't see that:\n\nregression=*# select * from pk; -- still no row 2\n f1 \n----\n 1\n(1 row)\n\nso we get:\n\nregression=*# insert into fk values(1);\nINSERT 0 1\nregression=*# insert into fk values(2);\nERROR: insert or update on table \"fk\" violates foreign key constraint \"fk_f1_fkey\"\nDETAIL: Key (f1)=(2) is not present in table \"pk\".\n\nIMO that behavior is correct. If you use READ COMMITTED, then\nSELECT can see row 2 as soon as it's committed, and so can the\nFK check, and again that's correct.\n\nIn v13, the behavior is the same if \"pk\" is a partitioned table instead\nof a plain one. 
In HEAD, it's not:\n\nregression=# drop table pk, fk;\nDROP TABLE\nregression=# create table pk (f1 int primary key) partition by list(f1);\nCREATE TABLE\nregression=# create table pk1 partition of pk for values in (1,2);\nCREATE TABLE\nregression=# insert into pk values(1);\nINSERT 0 1\nregression=# create table fk (f1 int references pk);\nCREATE TABLE\nregression=# begin isolation level repeatable read ;\nBEGIN\nregression=*# select * from pk; -- to establish xact snapshot\n f1 \n----\n 1\n(1 row)\n\n--- now insert row 2 in another session\n\nregression=*# select * from pk; -- still no row 2\n f1 \n----\n 1\n(1 row)\n\nregression=*# insert into fk values(1);\nINSERT 0 1\nregression=*# insert into fk values(2);\nINSERT 0 1\nregression=*#\n\nSo I say that's busted, and the cause is this hunk from 71f4c8c6f:\n\n@@ -392,11 +392,15 @@ RI_FKey_check(TriggerData *trigdata)\n \n /*\n * Now check that foreign key exists in PK table\n+ *\n+ * XXX detectNewRows must be true when a partitioned table is on the\n+ * referenced side. 
The reason is that our snapshot must be fresh\n+ * in order for the hack in find_inheritance_children() to work.\n */\n ri_PerformCheck(riinfo, &qkey, qplan,\n fk_rel, pk_rel,\n NULL, newslot,\n- false,\n+ pk_rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE,\n SPI_OK_SELECT);\n \n if (SPI_finish() != SPI_OK_FINISH)\n\nI think you need some signalling mechanism that's less global than\nActiveSnapshot to tell the partition-lookup machinery what to do\nin this context.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Nov 2021 19:17:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "I wrote:\n> Anyway, I think that (1) we should write some more test cases around\n> this behavior, (2) you need to establish the snapshot to use in two\n> different ways for the RI_FKey_check and ri_Check_Pk_Match cases,\n> and (3) something's going to have to be done to repair the behavior\n> in v14 (unless we want to back-patch this into v14, which seems a\n> bit scary).\n\nI wrote that thinking that point (2), ie fix the choice of snapshots for\nthese RI queries, would solve the brokenness in partitioned tables,\nso that (3) would potentially only require hacking up v14.\n\nHowever after thinking more I realize that (2) will break the desired\nbehavior for concurrent partition detaches, because that's being driven\noff ActiveSnapshot. 
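Sketched as a toy model (made-up names; not PostgreSQL code), the bind
is that one shared active snapshot cannot satisfy both requirements at
once:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model (made-up names; not PostgreSQL code): both the FK-check
 * row-visibility question and the partition-detach question are
 * answered with the single shared "active" snapshot, as in the
 * current coding.  An xid below xmax is visible to the snapshot.
 */
typedef struct
{
	unsigned int xmax;
} Snapshot;

static bool
row_satisfies_fk(const Snapshot *active, unsigned int row_xmin)
{
	return row_xmin < active->xmax;
}

static bool
detach_is_seen(const Snapshot *active, unsigned int detach_xid)
{
	return detach_xid < active->xmax;
}
```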
So we really need a solution that decouples the\npartition detachment logic from ActiveSnapshot, in both branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Nov 2021 20:58:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Fri, Nov 12, 2021 at 8:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Rebased patches attached.\n>\n> I've spent some more time digging into the snapshot-management angle.\n\nThanks for looking at this.\n\n> I think you are right that the crosscheck_snapshot isn't really an\n> issue because the executor pays no attention to it for SELECT, but\n> that doesn't mean that there's no problem, because the test_snapshot\n> behavior is different too. By my reading of it, the intention of the\n> existing code is to insist that when IsolationUsesXactSnapshot()\n> is true and we *weren't* saying detectNewRows, the query should be\n> restricted to only see rows visible to the transaction snapshot.\n> Which I think is proper: an RR transaction shouldn't be allowed to\n> insert referencing rows that depend on a referenced row it can't see.\n> On the other hand, it's reasonable for ri_Check_Pk_Match to use\n> detectNewRows=true, because in that case what we're doing is allowing\n> an RR transaction to depend on the continued existence of a PK value\n> that was deleted and replaced since the start of its transaction.\n>\n> It appears to me that commit 71f4c8c6f (DETACH PARTITION CONCURRENTLY)\n> broke the semantics here, because now things work differently with a\n> partitioned PK table than with a plain table, thanks to not bothering\n> to distinguish questions of how to handle partition detachment from\n> questions of visibility of individual data tuples. 
We evidently\n> haven't got test coverage for this :-(, which is perhaps not so\n> surprising because all this behavior long predates the isolationtester\n> infrastructure that would've allowed us to test it mechanically.\n>\n> Anyway, I think that (1) we should write some more test cases around\n> this behavior, (2) you need to establish the snapshot to use in two\n> different ways for the RI_FKey_check and ri_Check_Pk_Match cases,\n> and (3) something's going to have to be done to repair the behavior\n> in v14 (unless we want to back-patch this into v14, which seems a\n> bit scary).\n\nOkay, I'll look into getting 1 and 2 done for this patch and I guess\nwork with Alvaro on 3.\n\n> It looks like you've addressed the other complaints I raised back in\n> March, so that's forward progress anyway. I do still find myself a\n> bit dissatisfied with the code factorization, because it seems like\n> find_leaf_pk_rel() doesn't belong here but rather in some partitioning\n> module. OTOH, if that means exposing RI_ConstraintInfo to the world,\n> that wouldn't be nice either.\n\nHm yeah, fair point about the undesirability of putting partitioning\ndetails into ri_triggers.c, so will look into refactoring to avoid\nthat.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Nov 2021 12:18:27 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Fri, Nov 12, 2021 at 10:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Anyway, I think that (1) we should write some more test cases around\n> > this behavior, (2) you need to establish the snapshot to use in two\n> > different ways for the RI_FKey_check and ri_Check_Pk_Match cases,\n> > and (3) something's going to have to be done to repair the behavior\n> > in v14 (unless we want to back-patch this into v14, which seems a\n> > bit scary).\n>\n> I wrote that thinking that point (2), ie 
fix the choice of snapshots for\n> these RI queries, would solve the brokenness in partitioned tables,\n> so that (3) would potentially only require hacking up v14.\n>\n> However after thinking more I realize that (2) will break the desired\n> behavior for concurrent partition detaches, because that's being driven\n> off ActiveSnapshot. So we really need a solution that decouples the\n> partition detachment logic from ActiveSnapshot, in both branches.\n\nISTM that the latest snapshot would still have to be passed to the\nfind_inheritance_children_extended() *somehow* by ri_trigger.c. IIUC\nthe problem with using the ActiveSnapshot mechanism to do that is that\nit causes the SPI query to see even user table rows that it shouldn't\nbe able to, so that is why you say it is too global a mechanism for\nthis hack.\n\nWhatever mechanism we will use would still need to involve setting a\nglobal Snapshot variable though, right?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Nov 2021 23:21:30 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Whatever mechanism we will use would still need to involve setting a\n> global Snapshot variable though, right?\n\nIn v14 we'll certainly still be passing the snapshot(s) to SPI, which will\neventually make the snapshot active. With your patch, since we're just\nhanding the snapshot to the scan mechanism, it seems at least\ntheoretically possible that we'd not have to do PushActiveSnapshot on it.\nNot doing so might be a bad idea however; if there is any user-defined\ncode getting called, it might have expectations about ActiveSnapshot being\nrelevant. On the whole I'd be inclined to say that we still want the\nRI test_snapshot to be the ActiveSnapshot while performing the test. 
\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Nov 2021 09:31:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, Nov 12, 2021 at 8:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Anyway, I think that (1) we should write some more test cases around\n>> this behavior, (2) you need to establish the snapshot to use in two\n>> different ways for the RI_FKey_check and ri_Check_Pk_Match cases,\n>> and (3) something's going to have to be done to repair the behavior\n>> in v14 (unless we want to back-patch this into v14, which seems a\n>> bit scary).\n\n> Okay, I'll look into getting 1 and 2 done for this patch and I guess\n> work with Alvaro on 3.\n\nActually, it seems that DETACH PARTITION is broken for concurrent\nserializable/repeatable-read transactions quite independently of\nwhether they attempt to make any FK checks [1]. If we do what\nI speculated about there, namely wait out all such xacts before\ndetaching, it might be possible to fix (3) just by reverting the\nproblematic change in ri_triggers.c. I'm thinking the wait would\nrender it unnecessary to get FK checks to do anything weird about\npartition lookup. 
But I might well be missing something.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/1849918.1636748862%40sss.pgh.pa.us\n\n\n", "msg_date": "Fri, 12 Nov 2021 15:43:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Sat, Nov 13, 2021 at 5:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Fri, Nov 12, 2021 at 8:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Anyway, I think that (1) we should write some more test cases around\n> >> this behavior, (2) you need to establish the snapshot to use in two\n> >> different ways for the RI_FKey_check and ri_Check_Pk_Match cases,\n> >> and (3) something's going to have to be done to repair the behavior\n> >> in v14 (unless we want to back-patch this into v14, which seems a\n> >> bit scary).\n>\n> > Okay, I'll look into getting 1 and 2 done for this patch and I guess\n> > work with Alvaro on 3.\n>\n> Actually, it seems that DETACH PARTITION is broken for concurrent\n> serializable/repeatable-read transactions quite independently of\n> whether they attempt to make any FK checks [1]. If we do what\n> I speculated about there, namely wait out all such xacts before\n> detaching, it might be possible to fix (3) just by reverting the\n> problematic change in ri_triggers.c. I'm thinking the wait would\n> render it unnecessary to get FK checks to do anything weird about\n> partition lookup. But I might well be missing something.\n\nI wasn't able to make much inroads into how we might be able to get\nrid of the DETACH-related partition descriptor hacks, the item (3),\nthough I made some progress on items (1) and (2).\n\nFor (1), the attached 0001 patch adds a new isolation suite\nfk-snapshot.spec to exercise snapshot behaviors in the cases where we\nno longer go through SPI. 
It helped find some problems with the\nsnapshot handling in the earlier versions of the patch, mainly with\npartitioned PK tables. It also contains a test along the lines of the\nexample you showed upthread, which shows that the partition descriptor\nhack requiring ActiveSnapshot to be set results in wrong results.\nPatch includes the buggy output for that test case and marked as such\nin a comment above the test.\n\nIn updated 0002, I fixed things such that the snapshot-setting\nrequired by the partition descriptor hack is independent of\nsnapshot-setting of the RI query such that it no longer causes the PK\nindex scan to return rows that the RI query mustn't see. That fixes\nthe visibility bug illustrated in your example, and as mentioned, also\nexercised in the new test suite.\n\nI also moved find_leaf_pk_rel() into execPartition.c with a new name\nand a new set of parameters.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 19 Nov 2021 22:56:59 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": ">\n>\n>\n> I wasn't able to make much inroads into how we might be able to get\n> rid of the DETACH-related partition descriptor hacks, the item (3),\n> though I made some progress on items (1) and (2).\n>\n> For (1), the attached 0001 patch adds a new isolation suite\n> fk-snapshot.spec to exercise snapshot behaviors in the cases where we\n> no longer go through SPI. It helped find some problems with the\n> snapshot handling in the earlier versions of the patch, mainly with\n> partitioned PK tables. 
It also contains a test along the lines of the\n> example you showed upthread, which shows that the partition descriptor\n> hack requiring ActiveSnapshot to be set results in wrong results.\n> Patch includes the buggy output for that test case and marked as such\n> in a comment above the test.\n>\n> In updated 0002, I fixed things such that the snapshot-setting\n> required by the partition descriptor hack is independent of\n> snapshot-setting of the RI query such that it no longer causes the PK\n> index scan to return rows that the RI query mustn't see. That fixes\n> the visibility bug illustrated in your example, and as mentioned, also\n> exercised in the new test suite.\n>\n> I also moved find_leaf_pk_rel() into execPartition.c with a new name\n> and a new set of parameters.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\n\nSorry for the delay. This patch no longer applies, it has some conflict\nwith d6f96ed94e73052f99a2e545ed17a8b2fdc1fb8a\n\n\n", "msg_date": "Sun, 19 Dec 2021 23:59:59 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Mon, Dec 20, 2021 at 2:00 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n> Sorry for the delay. This patch no longer applies, it has some conflict with d6f96ed94e73052f99a2e545ed17a8b2fdc1fb8a\n\nThanks Corey for the heads up. Rebased with some cosmetic adjustments.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 20 Dec 2021 15:20:28 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Sun, Dec 19, 2021 at 10:20 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Mon, Dec 20, 2021 at 2:00 PM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n> > Sorry for the delay. This patch no longer applies, it has some conflict\n> with d6f96ed94e73052f99a2e545ed17a8b2fdc1fb8a\n>\n> Thanks Corey for the heads up. 
Rebased with some cosmetic adjustments.\n>\n> Hi,\n\n+ Assert(partidx < 0 || partidx < partdesc->nparts);\n+ partoid = partdesc->oids[partidx];\n\nIf partidx < 0, do we still need to fill out partoid and is_leaf ? It seems\nwe can return early based on (should call table_close(rel) first):\n\n+ /* No partition found. */\n+ if (partidx < 0)\n+ return NULL;\n\nCheers\n\n", "msg_date": "Mon, 20 Dec 2021 01:21:10 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Mon, Dec 20, 2021 at 6:19 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Sun, Dec 19, 2021 at 10:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>\n>> On Mon, Dec 20, 2021 at 2:00 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n>> > Sorry for the delay. This patch no longer applies, it has some conflict with d6f96ed94e73052f99a2e545ed17a8b2fdc1fb8a\n>>\n>> Thanks Corey for the heads up. Rebased with some cosmetic adjustments.\n>>\n> Hi,\n>\n> + Assert(partidx < 0 || partidx < partdesc->nparts);\n> + partoid = partdesc->oids[partidx];\n>\n> If partidx < 0, do we still need to fill out partoid and is_leaf ? It seems we can return early based on (should call table_close(rel) first):\n>\n> + /* No partition found. 
*/\n> + return NULL;\n\nGood catch, thanks. Patch updated.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 20 Dec 2021 22:17:16 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": ">\n>\n>\n> Good catch, thanks. Patch updated.\n>\n>\n>\nApplies clean. Passes check-world.", "msg_date": "Mon, 20 Dec 2021 21:21:05 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Mon, Dec 20, 2021 at 5:17 AM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Mon, Dec 20, 2021 at 6:19 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > On Sun, Dec 19, 2021 at 10:20 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> >>\n> >> On Mon, Dec 20, 2021 at 2:00 PM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n> >> > Sorry for the delay. This patch no longer applies, it has some\n> conflict with d6f96ed94e73052f99a2e545ed17a8b2fdc1fb8a\n> >>\n> >> Thanks Corey for the heads up. Rebased with some cosmetic adjustments.\n> >>\n> > Hi,\n> >\n> > + Assert(partidx < 0 || partidx < partdesc->nparts);\n> > + partoid = partdesc->oids[partidx];\n> >\n> > If partidx < 0, do we still need to fill out partoid and is_leaf ? It\n> seems we can return early based on (should call table_close(rel) first):\n> >\n> > + /* No partition found. 
Patch updated.\n>\n> Hi,\n\n+ int lockflags = 0;\n+ TM_Result test;\n+\n+ lockflags = TUPLE_LOCK_FLAG_LOCK_UPDATE_IN_PROGRESS;\n\nThe above assignment can be meged with the line where variable lockflags is\ndeclared.\n\n+ GetUserIdAndSecContext(&save_userid, &save_sec_context);\n\nsave_userid -> saved_userid\nsave_sec_context -> saved_sec_context\n\n+ * the transaction-snapshot mode. If we didn't push one already, do\n\ndidn't push -> haven't pushed\n\nFor ri_PerformCheck():\n\n+ bool source_is_pk = true;\n\nIt seems the value of source_is_pk doesn't change - the value true can be\nplugged into ri_ExtractValues() calls directly.\n\nCheers\n\n", "msg_date": "Tue, 21 Dec 2021 00:56:06 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "Thanks for the review.\n\nOn Tue, Dec 21, 2021 at 5:54 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> Hi,\n>\n> + int lockflags = 0;\n> + TM_Result test;\n> +\n> + lockflags = TUPLE_LOCK_FLAG_LOCK_UPDATE_IN_PROGRESS;\n>\n> The above assignment can be meged with the line where variable lockflags is declared.\n\nSure.\n\n> + GetUserIdAndSecContext(&save_userid, &save_sec_context);\n>\n> save_userid -> saved_userid\n> save_sec_context -> saved_sec_context\n\nI agree that's better though I guess I had kept the names as they were\nin other functions.\n\nFixed nevertheless.\n\n> + * the transaction-snapshot mode. 
If we didn't push one already, do\n>\n> didn't push -> haven't pushed\n\nDone.\n\n> For ri_PerformCheck():\n>\n> + bool source_is_pk = true;\n>\n> It seems the value of source_is_pk doesn't change - the value true can be plugged into ri_ExtractValues() calls directly.\n\nOK, done.\n\nv13 is attached.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 18 Jan 2022 15:30:52 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Tue, Jan 18, 2022 at 3:30 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> v13 is attached.\n\nI noticed that the recent 641f3dffcdf's changes to\nget_constraint_index() made it basically unusable for this patch's\npurposes.\n\nReading in the thread that led to 641f3dffcdf why\nget_constraint_index() was changed the way it was, I invented in the\nattached updated patch a get_fkey_constraint_index() that is local to\nri_triggers.c for use by the new ri_ReferencedKeyExists(), replacing\nget_constraint_index() that no longer gives it the index it's looking\nfor.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 14 Mar 2022 17:33:26 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Mon, Mar 14, 2022 at 1:33 AM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Tue, Jan 18, 2022 at 3:30 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > v13 is attached.\n>\n> I noticed that the recent 641f3dffcdf's changes to\n> get_constraint_index() made it basically unusable for this patch's\n> purposes.\n>\n> Reading in the thread that led to 641f3dffcdf why\n> get_constraint_index() was changed the way it was, I invented in the\n> attached updated patch a get_fkey_constraint_index() that is local to\n> ri_triggers.c for use by the new ri_ReferencedKeyExists(), replacing\n> 
get_constraint_index() that no longer gives it the index it's looking\n> for.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\nHi,\n+ partkey_isnull[j] = (key_nulls[k] == 'n' ? true :\nfalse);\n\nThe above can be shortened as:\n\n partkey_isnull[j] = key_nulls[k] == 'n';\n\n+ * May neeed to cast each of the individual values of the foreign\nkey\n\nneeed -> need\n\nCheers\n\n", "msg_date": "Mon, 14 Mar 2022 02:32:25 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Mon, Mar 14, 2022 at 6:28 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Mon, Mar 14, 2022 at 1:33 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Tue, Jan 18, 2022 at 3:30 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> > v13 is attached.\n>>\n>> I noticed that the recent 641f3dffcdf's changes to\n>> get_constraint_index() made it basically unusable for this patch's\n>> purposes.\n>>\n>> Reading in the thread that led to 641f3dffcdf why\n>> get_constraint_index() was changed the way it was, I invented in the\n>> attached updated patch a get_fkey_constraint_index() that is local to\n>> ri_triggers.c for use by the new ri_ReferencedKeyExists(), replacing\n>> get_constraint_index() that no longer gives it the index it's looking\n>> for.\n>>\n>\n> Hi,\n> + partkey_isnull[j] = (key_nulls[k] == 'n' ? true : false);\n>\n> The above can be shortened as:\n>\n> partkey_isnull[j] = key_nulls[k] == 'n';\n>\n> + * May neeed to cast each of the individual values of the foreign key\n>\n> neeed -> need\n\nBoth fixed, thanks.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Mar 2022 13:01:06 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "There were rebase conflicts with the recently committed\nexecPartition.c/h changes. 
While fixing them, I thought maybe\nfind_leaf_part_for_key() doesn't quite match in style with its\nneighbors in execPartition.h, so changed it to\nExecGetLeafPartitionForKey().\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 7 Apr 2022 10:05:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Thu, Apr 7, 2022 at 10:05 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> There were rebase conflicts with the recently committed\n> execPartition.c/h changes. While fixing them, I thought maybe\n> find_leaf_part_for_key() doesn't quite match in style with its\n> neighbors in execPartition.h, so changed it to\n> ExecGetLeafPartitionForKey().\n\nThis one has been marked Returned with Feedback in the CF app, which\nmakes sense given the discussion on -committers [1].\n\nAgree with the feedback given that it would be better to address *all*\nRI trigger check/action functions in the project of sidestepping SPI\nwhen doing those checks/actions, not only RI_FKey_check_ins / upd() as\nthe current patch does. 
I guess that will require thinking a little\nbit harder about how to modularize the new implementation so that the\nvarious trigger functions don't end up with their own bespoke\ncheck/action implementations.\n\nI'll think about that, also consider what Corey proposed in [2], and\ntry to reformulate this for v16.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/flat/E1ncXX2-000mFt-Pe%40gemulon.postgresql.org\n[2] https://www.postgresql.org/message-id/flat/CADkLM%3DeZJddpx6RDop-oCrQ%2BJ9R-wfbf6MoLxUUGjbpwTkoUXQ%40mail.gmail.com\n\n\n", "msg_date": "Mon, 11 Apr 2022 16:47:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" }, { "msg_contents": "On Mon, Apr 11, 2022 at 4:47 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> This one has been marked Returned with Feedback in the CF app, which\n> makes sense given the discussion on -committers [1].\n>\n> Agree with the feedback given that it would be better to address *all*\n> RI trigger check/action functions in the project of sidestepping SPI\n> when doing those checks/actions, not only RI_FKey_check_ins / upd() as\n> the current patch does. I guess that will require thinking a little\n> bit harder about how to modularize the new implementation so that the\n> various trigger functions don't end up with their own bespoke\n> check/action implementations.\n>\n> I'll think about that, also consider what Corey proposed in [2], and\n> try to reformulate this for v16.\n\nI've been thinking about this and wondering if the SPI overhead is too\nbig in the other cases (cases where it is the FK table that is to be\nscanned) that it makes sense to replace the actual planner (invoked\nvia SPI) by a hard-coded mini-planner for the task of figuring out the\nbest way to scan the FK table for a given PK row affected by the main\nquery. 
Planner's involvement seems necessary in those cases, because\nthe choice of how to scan the FK table is not as clear cut as how to\nscan the PK table.\n\nISTM, the SPI overhead consists mainly of performing GetCachedPlan()\nand executor setup/shutdown, which can seem substantial when compared\nto the core task of scanning the PK/FK table, and does add up over\nmany rows affected by the main query, as seen by the over 2x speedup\nfor the PK table case gained by shaving it off with the proposed patch\n[1]. In the other cases, the mini-planner will need some cycles of\nits own, even though maybe not as many as by the use of SPI, so the\nspeedup might be less impressive.\n\nOther than coming up with an acceptable implementation for the\nmini-planner (maybe we have an example in plan_cluster_use_sort() to\nape), one more challenge is to figure out a way to implement the\nCASCADE/SET trigger routines. For those, we might need to introduce\nrestricted forms of ExecUpdate(), ExecDelete() that can be called\ndirectly, that is, without a full-fledged plan. Not having to worry\nabout those things does seem like a benefit of just continuing to use\nthe SPI in those cases.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1]\ndrop table pk, fk;\ncreate table pk (a int primary key);\ncreate table fk (a int references pk);\ninsert into pk select generate_series(1, 1000000);\ninsert into fk select i%1000000+1 from generate_series(1, 10000000) i;\n\nTime for the last statement:\n\nHEAD: 67566.845 ms (01:07.567)\n\nPatched: 26759.627 ms (00:26.760)\n\n\n", "msg_date": "Mon, 2 May 2022 20:50:10 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: simplifying foreign key/RI checks" } ]
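The timings in footnote [1] of the closing message above can be turned into a rough per-check figure. A back-of-the-envelope sketch in plain Python, using only the numbers quoted in that message (treating the whole HEAD-vs-patched difference as avoided SPI overhead is an approximation, since the INSERT time also includes the heap insertion itself):

```python
# Figures quoted in [1] above: inserting 10 million rows into "fk",
# each row triggering one RI check against "pk".
head_ms = 67566.845     # INSERT time with SPI-based RI checks (HEAD)
patched_ms = 26759.627  # INSERT time with direct index-scan RI checks
n_checks = 10_000_000   # rows inserted into "fk"

speedup = head_ms / patched_ms
saved_us_per_check = (head_ms - patched_ms) * 1000.0 / n_checks

print(f"speedup: {speedup:.2f}x")
print(f"overhead avoided per RI check: ~{saved_us_per_check:.2f} us")
```

That works out to roughly a 2.5x speedup, i.e. about 4 microseconds avoided per checked row, which is consistent with the GetCachedPlan() and executor setup/shutdown costs the message attributes to going through SPI.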
[ { "msg_contents": "Hi,\n\nWhen I created a table consisting of 400 VARCHAR columns and tried\nto INSERT a record which rows were all the same size, there were\ncases where I got an error due to exceeding the size limit per\nrow.\n\n =# -- create a table consisting of 400 VARCHAR columns\n =# CREATE TABLE t1 (c1 VARCHAR(100),\n c2 VARCHAR(100),\n ...\n c400 VARCHAR(100));\n\n =# -- insert one record which rows are all 20 bytes\n =# INSERT INTO t1 VALUES (repeat('a', 20),\n repeat('a', 20),\n ...\n repeat('a', 20));\n ERROR: row is too big: size 8424, maximum size 8160\n\nWhat is interesting is that it failed only when the size of each\ncolumn was 20~23 bytes, as shown below.\n\n size of each column | result\n -------------------------------\n 18 bytes | success\n 19 bytes | success\n 20 bytes | failure\n 21 bytes | failure\n 22 bytes | failure\n 23 bytes | failure\n 24 bytes | success\n 25 bytes | success\n\n\nWhen the size of each column was 19 bytes or less, it succeeds\nbecause the row size is within a page size.\nWhen the size of each column was 24 bytes or more, it also\nsucceeds because columns are TOASTed and the row size is reduced\nto less than one page size.\nOTOH, when it's more than 19 bytes and less than 24 bytes,\ncolumns aren't TOASTed because it doesn't meet the condition of\nthe following if statement.\n\n --src/backend/access/table/toast_helper.c\n\n toast_tuple_find_biggest_attribute(ToastTupleContext *ttc,\n bool for_compression, bool check_main)\n ...(snip)...\n int32 biggest_size = MAXALIGN(TOAST_POINTER_SIZE);\n ...(snip)...\n if (ttc->ttc_attr[i].tai_size > biggest_size) // <- here\n {\n biggest_attno = i;\n biggest_size = ttc->ttc_attr[i].tai_size;\n }\n\n\nSince TOAST_POINTER_SIZE is 18 bytes but\nMAXALIGN(TOAST_POINTER_SIZE) is 24 bytes, columns are not TOASTed\nuntil its size becomes larger than 24 bytes.\n\nI confirmed these sizes in my environment but AFAIU they would be\nthe same size in any environment.\n\nSo, as a result of 
adjusting the alignment, 20~23 bytes seems to\nfail.\n\nI wonder if it might be better not to adjust the alignment here\nas an attached patch because it succeeded in inserting 20~23\nbytes records.\nOr is there reasons to add the alignment here?\n\nI understand that TOAST is not effective for small data and it's\nnot recommended to create a table containing hundreds of columns,\nbut I think cases that can be successful should be successful.\n\nAny thoughts?\n\n\nRegards,\n\n--\nAtsushi Torikoshi", "msg_date": "Mon, 18 Jan 2021 23:23:09 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "TOAST condition for column size" }, { "msg_contents": "On Mon, Jan 18, 2021 at 7:53 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>\n> Hi,\n\n> I confirmed these sizes in my environment but AFAIU they would be\n> the same size in any environment.\n>\n> So, as a result of adjusting the alignment, 20~23 bytes seems to\n> fail.\n>\n> I wonder if it might be better not to adjust the alignment here\n> as an attached patch because it succeeded in inserting 20~23\n> bytes records.\n> Or is there reasons to add the alignment here?\n>\n\nBecause no benefit is to be expected by compressing it. The size will\nbe mostly the same. Also, even if we somehow try to fit this data via\ntoast, I think reading speed will be slower because for all such\ncolumns an extra fetch from toast would be required. Another thing is\nyou or others can still face the same problem with 17-byte column\ndata. I don't think this is the right way to fix it. 
I don't have many good\nideas but I think you can try by (a) increasing block size during\nconfigure, (b) reduce the number of columns, (c) create char columns\nof somewhat bigger size say greater than 24 bytes to accommodate your\ncase.\n\nI know none of these are good workarounds but at this moment I can't\nthink of better alternatives.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 19 Jan 2021 16:02:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TOAST condition for column size" }, { "msg_contents": "On Mon, Jan 18, 2021 at 7:53 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> When I created a table consisting of 400 VARCHAR columns and tried\n> to INSERT a record which rows were all the same size, there were\n> cases where I got an error due to exceeding the size limit per\n> row.\n>\n> =# -- create a table consisting of 400 VARCHAR columns\n> =# CREATE TABLE t1 (c1 VARCHAR(100),\n> c2 VARCHAR(100),\n> ...\n> c400 VARCHAR(100));\n>\n> =# -- insert one record which rows are all 20 bytes\n> =# INSERT INTO t1 VALUES (repeat('a', 20),\n> repeat('a', 20),\n> ...\n> repeat('a', 20));\n> ERROR: row is too big: size 8424, maximum size 8160\n>\n> What is interesting is that it failed only when the size of each\n> column was 20~23 bytes, as shown below.\n>\n> size of each column | result\n> -------------------------------\n> 18 bytes | success\n> 19 bytes | success\n> 20 bytes | failure\n> 21 bytes | failure\n> 22 bytes | failure\n> 23 bytes | failure\n> 24 bytes | success\n> 25 bytes | success\n>\n>\n> When the size of each column was 19 bytes or less, it succeeds\n> because the row size is within a page size.\n> When the size of each column was 24 bytes or more, it also\n> succeeds because columns are TOASTed and the row size is reduced\n> to less than one page size.\n> OTOH, when it's more than 19 bytes and less than 24 bytes,\n> columns aren't TOASTed because it doesn't meet the 
condition of\n> the following if statement.\n>\n> --src/backend/access/table/toast_helper.c\n>\n> toast_tuple_find_biggest_attribute(ToastTupleContext *ttc,\n> bool for_compression, bool check_main)\n> ...(snip)...\n> int32 biggest_size = MAXALIGN(TOAST_POINTER_SIZE);\n> ...(snip)...\n> if (ttc->ttc_attr[i].tai_size > biggest_size) // <- here\n> {\n> biggest_attno = i;\n> biggest_size = ttc->ttc_attr[i].tai_size;\n> }\n>\n>\n> Since TOAST_POINTER_SIZE is 18 bytes but\n> MAXALIGN(TOAST_POINTER_SIZE) is 24 bytes, columns are not TOASTed\n> until its size becomes larger than 24 bytes.\n>\n> I confirmed these sizes in my environment but AFAIU they would be\n> the same size in any environment.\n>\n> So, as a result of adjusting the alignment, 20~23 bytes seems to\n> fail.\n>\n> I wonder if it might be better not to adjust the alignment here\n> as an attached patch because it succeeded in inserting 20~23\n> bytes records.\n> Or is there reasons to add the alignment here?\n>\n> I understand that TOAST is not effective for small data and it's\n> not recommended to create a table containing hundreds of columns,\n> but I think cases that can be successful should be successful.\n>\n> Any thoughts?\n\nHow this can be correct? because while forming the tuple you might\nneed the alignment. 
So basically while computing the size we are not\nconsidering alignment and later while actually forming the tuple you\nmight have to align it so seems like it can create corruption while\nforming the tuple.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Jan 2021 17:17:50 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TOAST condition for column size" }, { "msg_contents": "On Tue, Jan 19, 2021 at 5:18 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Jan 18, 2021 at 7:53 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> >\n> > Hi,\n> >\n> > When I created a table consisting of 400 VARCHAR columns and tried\n> > to INSERT a record which rows were all the same size, there were\n> > cases where I got an error due to exceeding the size limit per\n> > row.\n> >\n> > =# -- create a table consisting of 400 VARCHAR columns\n> > =# CREATE TABLE t1 (c1 VARCHAR(100),\n> > c2 VARCHAR(100),\n> > ...\n> > c400 VARCHAR(100));\n> >\n> > =# -- insert one record which rows are all 20 bytes\n> > =# INSERT INTO t1 VALUES (repeat('a', 20),\n> > repeat('a', 20),\n> > ...\n> > repeat('a', 20));\n> > ERROR: row is too big: size 8424, maximum size 8160\n> >\n> > What is interesting is that it failed only when the size of each\n> > column was 20~23 bytes, as shown below.\n> >\n> > size of each column | result\n> > -------------------------------\n> > 18 bytes | success\n> > 19 bytes | success\n> > 20 bytes | failure\n> > 21 bytes | failure\n> > 22 bytes | failure\n> > 23 bytes | failure\n> > 24 bytes | success\n> > 25 bytes | success\n> >\n> >\n> > When the size of each column was 19 bytes or less, it succeeds\n> > because the row size is within a page size.\n> > When the size of each column was 24 bytes or more, it also\n> > succeeds because columns are TOASTed and the row size is reduced\n> > to less than one page size.\n> > OTOH, when it's more than 19 bytes and less 
than 24 bytes,\n> > columns aren't TOASTed because it doesn't meet the condition of\n> > the following if statement.\n> >\n> > --src/backend/access/table/toast_helper.c\n> >\n> > toast_tuple_find_biggest_attribute(ToastTupleContext *ttc,\n> > bool for_compression, bool check_main)\n> > ...(snip)...\n> > int32 biggest_size = MAXALIGN(TOAST_POINTER_SIZE);\n> > ...(snip)...\n> > if (ttc->ttc_attr[i].tai_size > biggest_size) // <- here\n> > {\n> > biggest_attno = i;\n> > biggest_size = ttc->ttc_attr[i].tai_size;\n> > }\n> >\n> >\n> > Since TOAST_POINTER_SIZE is 18 bytes but\n> > MAXALIGN(TOAST_POINTER_SIZE) is 24 bytes, columns are not TOASTed\n> > until its size becomes larger than 24 bytes.\n> >\n> > I confirmed these sizes in my environment but AFAIU they would be\n> > the same size in any environment.\n> >\n> > So, as a result of adjusting the alignment, 20~23 bytes seems to\n> > fail.\n> >\n> > I wonder if it might be better not to adjust the alignment here\n> > as an attached patch because it succeeded in inserting 20~23\n> > bytes records.\n> > Or is there reasons to add the alignment here?\n> >\n> > I understand that TOAST is not effective for small data and it's\n> > not recommended to create a table containing hundreds of columns,\n> > but I think cases that can be successful should be successful.\n> >\n> > Any thoughts?\n>\n> How this can be correct? 
because while forming the tuple you might\n> need the alignment.\n>\n\nWon't it be safe because we don't align individual attrs of type\nvarchar where length is less than equal to 127?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 19 Jan 2021 18:28:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TOAST condition for column size" }, { "msg_contents": "On Tue, 19 Jan 2021 at 6:28 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Jan 19, 2021 at 5:18 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Jan 18, 2021 at 7:53 PM torikoshia <torikoshia@oss.nttdata.com>\n> wrote:\n> > >\n> > > Hi,\n> > >\n> > > When I created a table consisting of 400 VARCHAR columns and tried\n> > > to INSERT a record which rows were all the same size, there were\n> > > cases where I got an error due to exceeding the size limit per\n> > > row.\n> > >\n> > > =# -- create a table consisting of 400 VARCHAR columns\n> > > =# CREATE TABLE t1 (c1 VARCHAR(100),\n> > > c2 VARCHAR(100),\n> > > ...\n> > > c400 VARCHAR(100));\n> > >\n> > > =# -- insert one record which rows are all 20 bytes\n> > > =# INSERT INTO t1 VALUES (repeat('a', 20),\n> > > repeat('a', 20),\n> > > ...\n> > > repeat('a', 20));\n> > > ERROR: row is too big: size 8424, maximum size 8160\n> > >\n> > > What is interesting is that it failed only when the size of each\n> > > column was 20~23 bytes, as shown below.\n> > >\n> > > size of each column | result\n> > > -------------------------------\n> > > 18 bytes | success\n> > > 19 bytes | success\n> > > 20 bytes | failure\n> > > 21 bytes | failure\n> > > 22 bytes | failure\n> > > 23 bytes | failure\n> > > 24 bytes | success\n> > > 25 bytes | success\n> > >\n> > >\n> > > When the size of each column was 19 bytes or less, it succeeds\n> > > because the row size is within a page size.\n> > > When the size of each column was 24 bytes or more, it also\n> > > succeeds because columns are TOASTed 
and the row size is reduced\n> > > to less than one page size.\n> > > OTOH, when it's more than 19 bytes and less than 24 bytes,\n> > > columns aren't TOASTed because it doesn't meet the condition of\n> > > the following if statement.\n> > >\n> > > --src/backend/access/table/toast_helper.c\n> > >\n> > > toast_tuple_find_biggest_attribute(ToastTupleContext *ttc,\n> > > bool for_compression, bool check_main)\n> > > ...(snip)...\n> > > int32 biggest_size = MAXALIGN(TOAST_POINTER_SIZE);\n> > > ...(snip)...\n> > > if (ttc->ttc_attr[i].tai_size > biggest_size) // <- here\n> > > {\n> > > biggest_attno = i;\n> > > biggest_size = ttc->ttc_attr[i].tai_size;\n> > > }\n> > >\n> > >\n> > > Since TOAST_POINTER_SIZE is 18 bytes but\n> > > MAXALIGN(TOAST_POINTER_SIZE) is 24 bytes, columns are not TOASTed\n> > > until its size becomes larger than 24 bytes.\n> > >\n> > > I confirmed these sizes in my environment but AFAIU they would be\n> > > the same size in any environment.\n> > >\n> > > So, as a result of adjusting the alignment, 20~23 bytes seems to\n> > > fail.\n> > >\n> > > I wonder if it might be better not to adjust the alignment here\n> > > as an attached patch because it succeeded in inserting 20~23\n> > > bytes records.\n> > > Or is there reasons to add the alignment here?\n> > >\n> > > I understand that TOAST is not effective for small data and it's\n> > > not recommended to create a table containing hundreds of columns,\n> > > but I think cases that can be successful should be successful.\n> > >\n> > > Any thoughts?\n> >\n> > How this can be correct? 
because while forming the tuple you might\n> need the alignment.\n> >\n>\n> Won't it be safe because we don't align individual attrs of type\n> varchar where length is less than equal to 127?\n\n\nYeah right, I just missed that point.\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n", "msg_date": "Tue, 19 Jan 2021 19:26:34 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TOAST condition for column size" }, { "msg_contents": "Dilip Kumar <dilipbalaut@gmail.com> writes:\n> On Tue, 19 Jan 2021 at 6:28 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> Won't it be safe because we don't align individual attrs of type\n>> varchar where length is less than equal to 127?\n\n> Yeah right, I just missed that point.\n\nYeah, the minimum on biggest_size has nothing to do with alignment\ndecisions.  It's just a filter to decide whether it's worth trying\nto toast anything.\n\nHaving said that, I'm pretty skeptical of this patch: I think its\nmost likely real-world effect is going to be to waste cycles (and\ncreate TOAST-table bloat) on the way to failing anyway.  I do not\nthink that toasting a 20-byte field down to 18 bytes is likely to be\na productive thing to do in typical situations.  The given example\nlooks like a cherry-picked edge case rather than a useful case to\nworry about.\n\nIOW, if I were asked to review whether the current minimum is\nwell-chosen, I'd be wondering if we should increase it not\ndecrease it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 10:40:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TOAST condition for column size" }, { "msg_contents": "On 2021-01-19 19:32, Amit Kapila wrote:\n> On Mon, Jan 18, 2021 at 7:53 PM torikoshia\n> Because no benefit is to be expected by compressing it. The size will\n> be mostly the same. 
Also, even if we somehow try to fit this data via\n> toast, I think reading speed will be slower because for all such\n> columns an extra fetch from toast would be required. Another thing is\n> you or others can still face the same problem with 17-byte column\n> data. I don't this is the right way to fix it. I don't have many good\n> ideas but I think you can try by (a) increasing block size during\n> configure, (b) reduce the number of columns, (c) create char columns\n> of somewhat bigger size say greater than 24 bytes to accommodate your\n> case.\n> \n> I know none of these are good workarounds but at this moment I can't\n> think of better alternatives.\n\nThanks for your explanation and workarounds!\n\n\n\nOn 2021-01-20 00:40, Tom Lane wrote:\n> Dilip Kumar <dilipbalaut@gmail.com> writes:\n>> On Tue, 19 Jan 2021 at 6:28 PM, Amit Kapila <amit.kapila16@gmail.com> \n>> wrote:\n>>> Won't it be safe because we don't align individual attrs of type\n>>> varchar where length is less than equal to 127?\n> \n>> Yeah right, I just missed that point.\n> \n> Yeah, the minimum on biggest_size has nothing to do with alignment\n> decisions. It's just a filter to decide whether it's worth trying\n> to toast anything.\n> Having said that, I'm pretty skeptical of this patch: I think its\n> most likely real-world effect is going to be to waste cycles (and\n> create TOAST-table bloat) on the way to failing anyway. I do not\n> think that toasting a 20-byte field down to 18 bytes is likely to be\n> a productive thing to do in typical situations. 
The given example\n> looks like a cherry-picked edge case rather than a useful case to\n> worry about.\n\nI agree with you, it seems to only work when there are many columns with\n19 ~ 23 bytes of data, and that's not a normal case.\nI'm not sure, but a rare exception might be some geographic data.\nThat's the situation in which I heard this problem happened.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\n\n\n", "msg_date": "Wed, 20 Jan 2021 21:18:35 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: TOAST condition for column size" } ]
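The size window debated in this thread can be checked with simple arithmetic. The sketch below is a back-of-the-envelope model, not code from the PostgreSQL source: it assumes 8-byte MAXALIGN (typical 64-bit builds), a 1-byte unaligned short-varlena header for values under 127 bytes, a 23-byte tuple header with no null bitmap, and the 8160-byte row limit quoted in the error message — all figures taken from the discussion above. It reproduces why 400 columns of 20–23 bytes both overflow the page and stay at or below the `MAXALIGN(TOAST_POINTER_SIZE)` = 24 filter in `toast_tuple_find_biggest_attribute`.

```python
# Back-of-the-envelope model of the failure window from this thread.
# Constants are assumptions taken from the discussion, not from the source.

def maxalign(n: int) -> int:
    """Round n up to an 8-byte boundary, like MAXALIGN on 64-bit builds."""
    return (n + 7) & ~7

TOAST_POINTER_SIZE = 18
TOAST_THRESHOLD = maxalign(TOAST_POINTER_SIZE)  # 24: biggest_size lower bound
MAX_ROW_SIZE = 8160                             # limit from the error message

def row_size(ncols: int, payload: int) -> int:
    # Short varlena values (< 127 bytes) carry a 1-byte header and are
    # not individually aligned, so each column costs payload + 1 bytes.
    tuple_header = maxalign(23)                 # no null bitmap in this test
    return tuple_header + ncols * (payload + 1)

def insert_outcome(ncols: int, payload: int) -> str:
    attr_size = payload + 1                     # on-disk size of one column
    if row_size(ncols, payload) <= MAX_ROW_SIZE:
        return "success"                        # row fits as-is
    if attr_size > TOAST_THRESHOLD:
        return "success (toasted)"              # toaster will consider it
    return "failure"                            # too big, yet not toastable

for payload in range(18, 26):
    print(payload, insert_outcome(400, payload))
```

Run against 400 columns, this reproduces the success/failure table from the first message of the thread, including the reported row size of 8424 bytes for 20-byte columns.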
[ { "msg_contents": "Hi hackers,\n\nAs suggested by Masao, I am starting a new thread to follow up about \nstandby recovery conflicts.\n\nThe initial patch proposed in [1] has been split in 3 parts:\n\n- Add block information in error context of WAL REDO apply: committed \n(9d0bd95fa90a7243047a74e29f265296a9fc556d)\n- Add information when the startup process is waiting for recovery \nconflicts: committed (0650ff23038bc3eb8d8fd851744db837d921e285)\n- Add information when the cancellation occurs:  subject of this new thread\n\nAs you can see, the initial idea was also to dump information about the \nblocking backends (should they reach the cancellation stage).\n\nMain idea is to provide information like:\n\n2020-06-15 06:48:54.778 UTC [7037] LOG: about to interrupt pid: 7037, \nbackend_type: client backend, state: active, wait_event_type: Timeout, \nwait_event: PgSleep, query_start: 2020-06-15 06:48:13.008427+00\n\nSome examples, on how this could be useful:\n\n     - For example the query being canceled usually runs in 1 second, \nseeing that it started 1 minute ago (when canceled) could indicate plan \nchange.\n     - For example a lot of queries have been canceled and all of them \nwere waiting on “DataFileRead”: that could indicate bad IO response time \nat that moment.\n     - Seeing the state as “idle in transaction” could potentially \nindicate an unexpected application behavior (say the application is \nusing Begin; SET TRANSACTION ISOLATION LEVEL REPEATABLE READ; then \nselect and then stay in an idle in transaction state that could lead to \nrecovery conflict)\n\nMain purpose is to dump information just before the cancellation occurs \nto get some clue on what was going on and get some data to work on (to \navoid future conflict and cancellation).\n\nIf you think this information can be useful then I can submit a patch in \nthis area.\n\nBertrand\n\n[1]: 
\nhttps://www.postgresql.org/message-id/9a60178c-a853-1440-2cdc-c3af916cff59%40amazon.com\n\n\n\n\n\n\n\n", "msg_date": "Mon, 18 Jan 2021 16:19:31 +0100", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Standby recovery conflicts: add information when the cancellation\n occurs" } ]
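The log line proposed above lends itself to exactly the kind of post-hoc analysis the examples describe. The sketch below is a hypothetical consumer of that output — the line format and field names are taken from the example in the proposal, which is itself only a suggestion and not an existing PostgreSQL log format. It aggregates cancellation log lines by `wait_event`, e.g. to spot that many cancelled queries were all waiting on "DataFileRead" (suggesting bad I/O response time at that moment).

```python
import re
from collections import Counter

# Pattern for the proposed cancellation log line. The format is the one
# sketched in the proposal above (an assumption, not a real log format):
#   about to interrupt pid: 7037, backend_type: client backend,
#   state: active, wait_event_type: Timeout, wait_event: PgSleep,
#   query_start: 2020-06-15 06:48:13.008427+00
LINE_RE = re.compile(
    r"about to interrupt pid: (?P<pid>\d+), "
    r"backend_type: (?P<backend_type>[^,]+), "
    r"state: (?P<state>[^,]+), "
    r"wait_event_type: (?P<wait_event_type>[^,]+), "
    r"wait_event: (?P<wait_event>[^,]+), "
    r"query_start: (?P<query_start>.+)")

def wait_event_histogram(lines):
    """Count which wait events the cancelled backends were stuck on."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            counts[m.group("wait_event")] += 1
    return counts
```

Feeding a day's worth of such lines into `wait_event_histogram` would surface a dominant wait event across recovery-conflict cancellations, which is the diagnostic signal the proposal is after.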
[ { "msg_contents": "Hello, hackers.\n\n[ABSTRACT]\n\nExecution of queries to hot standby is one of the most popular ways to\nscale application workload. Most of the modern Postgres installations\nhave two standby nodes for high-availability support. So, utilization\nof replica's CPU seems to be a reasonable idea.\nAt the same time, some queries (index scans) could be much slower on\nhot standby rather than on the primary one. It happens because the\nLP_DEAD index hint bits mechanics is ignored in index scans during\nrecovery. It is done for reasons, of course [1]:\n\n * We do this because the xmin on the primary node could easily be\n * later than the xmin on the standby node, so that what the primary\n * thinks is killed is supposed to be visible on standby. So for correct\n * MVCC for queries during recovery we must ignore these hints and check\n * all tuples.\n\nAlso, according to [2] and cases like [3], it seems to be a good idea\nto support \"ignore_killed_tuples\" on standby.\n\nThe goal of this patch is to provide full support for index hint bits\non hot standby. The mechanism should be based on well-tested\nfunctionality and not cause a lot of recovery conflicts.\n\nThis thread is the continuation (and party copy-paste) of the old\nprevious one [4].\n\n[PROBLEM]\n\nThe standby itself can set and read hint bits during recovery. Such\nbits are even correct according to standby visibility rules. But the\nproblem here - is full-page-write WAL records coming from the primary.\nSuch WAL records could bring invalid (according to standby xmin) hint\nbits.\n\nSo, if we could be sure the scan doesn’t see any invalid hint bit from\nprimary - the problem is solved. And we will even be able to allow\nstandby to set its LP_DEAD bits itself.\n\nThe idea is simple: let WAL log hint bits before FPW somehow. 
It could\ncause a lot of additional WAL records, however...\n\nBut there are ways to avoid it:\n1) Send only one `latestRemovedXid` for all tuples marked as dead\nduring a page scan.\n2) Remember the latest sent `latestRemovedXid` in shared memory, and\noptimistically skip WAL records with older xid values [5].\n\nSuch WAL records would cause a lot of recovery conflicts on standbys.\nBut we could be tricky here - let's use hint bits only if\nhot_standby_feedback is enabled and effective on the standby. If HSF is\neffective - then conflicts are not possible. If HSF is off - then the\nstandby ignores both hint bits and the additional conflict resolution. The\nmajor thing here is that HSF is just an optimization and has nothing to do\nwith MVCC correctness.\n\n[DETAILS]\n\nThe patch introduces a new WAL record (named\nXLOG_INDEX_HINT_BITS_HORIZON) to define a horizon of xmin required for a\nstandby's snapshot to use LP_DEAD bits for an index scan.\n\n`table_index_fetch_tuple` now returns a `latest_removed_xid` value\nin addition to `all_dead`. This value is used to advance\n`killedLatestRemovedXid` at the time of updating `killedItems` (see\n`IndexHintBitAdvanceLatestRemovedXid`).\n\nThe primary sends the value of `killedLatestRemovedXid` in\nXLOG_INDEX_HINT_BITS_HORIZON before it marks the page dirty after setting\nLP_DEAD bits on the index page (by calling\n`MarkBufferDirtyIndexHint`).\n\nThe new WAL record is always sent before a possible FPW. It is required to\nsend such a record only if its `latestRemovedXid` is newer than the one\nsent before for the current database (see\n`LogIndexHintBitsHorizonIfNeeded`).\n\nThere is a new flag in the PGPROC structure -\n`indexIgnoreKilledTuples`. If the flag is set to true – standby\nqueries are going to use LP_DEAD bits in index scans. In such a case the\nsnapshot is required to satisfy the new horizon pushed by\nXLOG_INDEX_HINT_BITS_HORIZON records.\n\n
But `true` value could cause recovery\nconflict. It is just some kind of compromise – use LP_DEAD bits but be\naware of XLOG_INDEX_HINT_BITS_HORIZON or vice versa.\n\nWhat is the way to make the right decision about this compromise? It\nis pretty simple – if `hot_standby_feedback` is on and primary\nconfirmed feedback is received – then set\n`indexIgnoreKilledTuples`(see `GetSnapshotIndexIgnoreKilledTuples`).\n\nWhile feedback is working as expected – the query will never be\ncanceled by XLOG_INDEX_HINT_BITS_HORIZON.\n\nTo support cascading standby setups (with a possible break of feedback\nchain in the middle) – an additional byte was added to the keep-alive\nmessage of the feedback protocol. This byte is used to make sure our\nxmin is honored by primary (see\n`sender_propagates_feedback_to_primary`). Also, the WAL sender now\nalways sends a keep-alive after receiving a feedback message.\n\nSo, this way, it is safe to use LP_DEAD bits received from the primary\nwhen we want to.\n\nAnd, as a result, it is safe to set LP_DEAD bits on standby.\nEven if:\n* the primary changes vacuum_defer_cleanup_age\n* standby restarted\n* standby promoted to the primary\n* base backup taken from standby\n* standby is serving queries during recovery\n– nothing could go wrong here.\n\nBecause `HeapTupleIsSurelyDead` (and index LP_DEAD as result) needs\n*heap* hint bits to be already set at standby. So, the same code\ndecides to set hint bits on the heap (it is done already on standby\nfor a long time) and in the index.\n\n[EVALUATION]\nIt is not possible to find an ideal performance test for such kind of\noptimization.\n\nBut there is a possible example in the attachment. 
It is a standard\npgbench schema with an additional index on balance and random balance\nvalues.\n\nOn primary test do next:\n1) transfer some money from one random of the top 100 rich accounts to\none random of the top 100 poor accounts.\n2) calculate the amount of money in the top 10 rich and top 10 poor\naccounts (and include an additional field to avoid index-only-scan).\nIn the case of standby only step 2 is used.\n\nThe patched version is about 9x faster for standby queries - like 455\nTPS versus 4192 TPS on my system. There is no visible difference for\nprimary.\n\nTo estimate the additional amount of WAL logs, I have checked records\nin WAL-segments during different conditions:\n(pg_waldump pgdata/pg_wal/XXX | grep INDEX_HINT_BITS_HORIZON | wc -l)\n\n- hot_standby_feedback=off - 5181 of 226274 records ~2%\n- hot_standby_feedback=on (without load on standby) - 70 of 202594\nrecords ~ 0.03%\n- hot_standby_feedback=on (with load on standby) - 17 of 70504 records ~ 0.02%\n\nSo, with HSF=on (which is the default value) WAL increase is not\nsignificant. Also, for HSF=off it should be possible to radically\nreduce the number of additional WAL logs by using `latestRemovedXid`\nfrom other records (like Heap2/CLEAN) in \"send only newer xid\"\noptimization (I have skipped it for now for simplicity).\n\n[CONCLUSION]\n\nThe only thing we pay – a few additional WAL records and some\nadditional moderate code complexity. But the support of hint-bits on\nstandby is a huge advantage for many workloads. I was able to get more\nthan a 900% performance boost (and it is not surprising – index hint\nbits are just great optimization). 
And it works for almost all index\ntypes out of the box.\n\nAnother major thing here – everything is based on old, well-tested\nmechanics: query cancelation because of snapshot conflicts, setting\nheap hint bits on standby, hot standby feedback.\n\n[REFERENCES]\n\n[1] - https://www.postgresql.org/message-id/flat/7067.1529246768%40sss.pgh.pa.us#d9e2e570ba34fc96c4300a362cbe8c38\n[2] - https://www.postgresql.org/message-id/flat/12843.1529331619%40sss.pgh.pa.us#6df9694fdfd5d550fbb38e711d162be8\n[3] - https://www.postgresql.org/message-id/flat/20170428133818.24368.33533%40wrigleys.postgresql.org\n[4] - https://www.postgresql.org/message-id/flat/CANtu0ohOvgteBYmCMc2KERFiJUvpWGB0bRTbK_WseQH-L1jkrQ%40mail.gmail.com\n[5] - https://www.postgresql.org/message-id/flat/CANtu0oigC0%2BH0UkxktyovdLLU67ikM0%2BDw3J4EQqiDDeGhcwsQ%40mail.gmail.com", "msg_date": "Mon, 18 Jan 2021 23:30:21 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, everyone.\n\nOh, I just realized that it seems like I was too naive to allow\nstandby to set LP_DEAD bits this way.\nThere is a possible consistency problem in the case of low\nminRecoveryPoint value (because hint bits do not move PageLSN\nforward).\n\nSomething like this:\n\nLSN=10 STANDBY INSERTS NEW ROW IN INDEX (index_lsn=10)\n<-----------minRecoveryPoint will go here\nLSN=20 STANDBY DELETES ROW FROM HEAP, INDEX UNTACHED (index_lsn=10)\n REPLICA SCANS INDEX AND SET hint bits (index_lsn=10)\n INDEX IS FLUSHED (minRecoveryPoint=index_lsn=10)\n CRASH\n\nOn crash recovery, a standby will be able to handle queries after\nLSN=10. 
But the index page contains hint bits from the future\n(LSN=20).\nSo, I need to think here.\n\nThanks,\nMichail.\n\n\n", "msg_date": "Fri, 22 Jan 2021 03:56:39 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, hackers.\n\nI think I was able to fix the issue related to minRecoveryPoint and crash\nrecovery. To make sure the standby will be consistent after crash recovery, we\nneed to take the current value of minRecoveryPoint into account while\nsetting LP_DEAD hints (almost the same way as it is done for *heap* hint\nbits already).\n\nI have introduced a new structure, IndexHintBitsData:\n-------\n    /* guaranteed not visible for all backends */\n    bool all_dead;\n\n    /* latest removed xid if known */\n    TransactionId latest_removed_xid;\n\n   /* lsn of page where dead tuple located */\n    XLogRecPtr page_lsn;\n-------\n\nThis structure is filled by the `heap_hot_search_buffer` function. Afterwards,\nwe decide whether or not to set `kill_prior_tuple` depending on its content\n(calling `IsMarkBufferDirtyIndexHintAllowed`).\n\nFor the primary - it is always safe to set LP_DEAD in the index if `all_dead` ==\ntrue.\n\nIn the case of a standby, we need to check `latest_removed_xid` (if\navailable) first. If the commit LSN of the latest removed xid is already lower\nthan minRecoveryPoint (`XLogNeedsFlush`) - it is safe to set\n`kill_prior_tuple`.\n\nSometimes we are not sure about the latest removed xid - a heap record could\nbe marked dead by the XLOG_HEAP2_CLEAN record, for example. In such a case\nwe check the LSN of the *heap* page containing the tuple (the LSN could be\nupdated by other transactions already - but it does not matter in that\nsituation). If the page LSN is lower than minRecoveryPoint - it is safe to set\nLP_DEAD in the index too. 
Otherwise - just leave the index tuple alive.\n\n\nSo, to bring it all together:\n\n* Normal operation, proc->indexIgnoreKilledTuples is true:\n It is safe for standby to use hint bits from the primary FPI because\nof XLOG_INDEX_HINT_BITS_HORIZON conflict resolution.\n It is safe for standby to set its index hint bits because\n`ComputeXidHorizons` honors other read-only procs xmin and lowest xid on\nprimary (`KnownAssignedXidsGetOldestXmin`).\n\n* Normal operation, proc->indexIgnoreKilledTuples is false:\n Index hint bits are never set or taken into account.\n\n* Crash recovery, proc->indexIgnoreKilledTuples is true:\n It is safe for standby to use hint bits from the primary FPW because\nXLOG_INDEX_HINT_BITS_HORIZON is always logged before FPI, and commit record\nof transaction removed the tuple is logged before\nXLOG_INDEX_HINT_BITS_HORIZON. So, if FPI with hints was flushed (and taken\ninto account by minRecoveryPoint) - both transaction-remover and horizon\nrecords are replayed before reading queries.\n It is safe for standby to use its hint bits because they can be set\nonly if the commit record of transaction-remover is lower than\nminRecoveryPoint or LSN of heap page with removed tuples is lower than\nminRecoveryPoint.\n\n* Crash recovery, proc->indexIgnoreKilledTuples is false:\n Index hint bits are never set or taken into account.\n\nSo, now it seems correct to me.\n\nAnother interesting point here - now position of minRecoveryPoint affects\nperformance a lot. It is happening already (because of *heap* hint bits)\nbut after the patch, it is noticeable even more. 
Is there any sense to keep\nminRecoveryPoint at a low value?\n\nRebased and updated patch in attachment.\n\nWill be happy if someone could recheck my ideas or even the code :)\n\nThanks a lot,\nMichail.", "msg_date": "Wed, 27 Jan 2021 22:27:21 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, everyone.\n\nAfter some correspondence with Peter Geoghegan (1) and his ideas, I\nhave reworked the patch a lot and now it is much more simple with even\nbetter performance (no new WAL or conflict resolution, hot standby\nfeedback is unrelated).\n\nThe idea is pretty simple now - let’s mark the page with\n“standby-safe” LP_DEAD hints by the bit in btpo_flags\n(BTP_LP_SAFE_ON_STANDBY and similar for gist and hash).\n\nIf standby wants to set LP_DEAD - it checks BTP_LP_SAFE_ON_STANDBY on\nthe page first, if it is not set - all “primary” hints are removed\nfirst, and then the flag is set (with memory barrier to avoid memory\nordering issues in concurrent scans).\nAlso, standby checks BTP_LP_SAFE_ON_STANDBY to be sure about ignoring\ntuples marked by LP_DEAD during the scan.\n\nOf course, it is not so easy. If standby was promoted (or primary was\nrestored from standby backup) - it is still possible to receive FPI\nwith such flag set in WAL logs. So, the main problem is still there.\n\nBut we could just clear this flag while applying FPI because the page\nremains dirty after that anyway! 
It should not cause any checksum,\nconsistency, or pg_rewind issues as explained in (2).\nSemantically it is the same as set hint bit one milisecond after FPI\nwas applied (while page still remains dirty after FPI replay) - and\nstandby already does it with *heap* hint bits.\n\nAlso, TAP-test attached to (2) shows how it is easy to flush a hint\nbit which was set by standby to achieve different checksum comparing\nto primary already.\n\nIf standby was promoted (or restored from standby backup) it is safe\nto use LP_DEAD with or without BTP_LP_SAFE_ON_STANDBY on a page. But\nfor accuracy BTP_LP_SAFE_ON_STANDBY is cleared by primary if found.\n\nAlso, we should take into account minRecoveryPoint as described in (3)\nto avoid consistency issues during crash recovery (see\nIsIndexLpDeadAllowed).\n\nAlso, as far as I know - there is no practical sense to keep\nminRecoveryPoint at a low value. So, there is an optional patch that\nmoves minRecoveryPoint forward at each xl_running_data (to allow\nstandby to set hint bits and LP_DEADs more aggressively). It is about\nevery 15s.\n\nThere are some graphics showing performance testing results on my PC\nin the attachment (test is taken from (4)). Each test was running for\n10 minutes.\nAdditional primary performance is probably just measurement error. 
But\nstandby performance gain is huge.\n\nFeel free to ask if you need more proof about correctness.\n\nThanks,\nMichail.\n\n[1] - https://www.postgresql.org/message-id/flat/CAH2-Wz%3D-BoaKgkN-MnKj6hFwO1BOJSA%2ByLMMO%2BLRZK932fNUXA%40mail.gmail.com#6d7cdebd68069cc493c11b9732fd2040\n[2] - https://www.postgresql.org/message-id/flat/CANtu0oiAtteJ%2BMpPonBg6WfEsJCKrxuLK15P6GsaGDcYGjefVQ%40mail.gmail.com#091fca433185504f2818d5364819f7a4\n[3] - https://www.postgresql.org/message-id/flat/CANtu0oh28mX5gy5jburH%2Bn1mcczK5_dCQnhbBnCM%3DPfqh-A26Q%40mail.gmail.com#ecfe5a331a3058f895c0cba698fbc4d3\n[4] - https://www.postgresql.org/message-id/flat/CANtu0oiP18H31dSaEzn0B0rW6tA_q1G7%3D9Y92%2BUS_WHGOoQevg%40mail.gmail.com", "msg_date": "Thu, 11 Feb 2021 02:27:45 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "I'm trying to review the patch, but not sure if I understand this problem,\nplease see my comment below.\n\nMichail Nikolaev <michail.nikolaev@gmail.com> wrote:\n\n> Oh, I just realized that it seems like I was too naive to allow\n> standby to set LP_DEAD bits this way.\n> There is a possible consistency problem in the case of low\n> minRecoveryPoint value (because hint bits do not move PageLSN\n> forward).\n> \n> Something like this:\n> \n> LSN=10 STANDBY INSERTS NEW ROW IN INDEX (index_lsn=10)\n> <-----------minRecoveryPoint will go here\n> LSN=20 STANDBY DELETES ROW FROM HEAP, INDEX UNTACHED (index_lsn=10)\n\nWhy doesn't minRecoveryPoint get updated to 20? IMO that should happen by\nreplaying the commit record. 
And if the standby happens to crash before the\ncommit record could be replayed, no query should see the deletion and thus no\nhint bit should be set in the index.\n\n> REPLICA SCANS INDEX AND SET hint bits (index_lsn=10)\n> INDEX IS FLUSHED (minRecoveryPoint=index_lsn=10)\n> CRASH\n> \n> On crash recovery, a standby will be able to handle queries after\n> LSN=10. But the index page contains hints bits from the future\n> (LSN=20).\n> So, need to think here.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Thu, 06 May 2021 09:04:44 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, Antonin.\n\n> I'm trying to review the patch, but not sure if I understand this problem,\n> please see my comment below.\n\nThanks a lot for your attention. It is strongly recommended to look at\nversion N3 (1) because it is a much more elegant, easy, and reliable\nsolution :) But the minRecoveryPoint-related issue affects it anyway.\n\n> Why doesn't minRecoveryPoint get updated to 20? IMO that should happen by\n> replaying the commit record. And if the standby happens to crash before the\n> commit record could be replayed, no query should see the deletion and thus no\n> hint bit should be set in the index.\n\nminRecoveryPoint is not affected by replaying the commit record in\nmost cases. It is updated in a lazy way, something like this:\nminRecoveryPoint = max LSN of flushed page. 
Version 3 of a patch\ncontains a code_optional.patch to move minRecoveryPoint more\naggressively to get additional performance on standby (based on\nPeter’s answer in (2).\n\nSo, “minRecoveryPoint will go here” is not because of “STANDBY INSERTS\nNEW ROW IN INDEX” it is just a random event.\n\nThanks,\nMichail.\n\n[1]: https://www.postgresql.org/message-id/CANtu0ohHu1r1xQfTzEJuxeaOMYncG7xRxUQWdH%3DcMXZSf%2Bnzvg%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CAH2-WzkSUcuFukhJdSxHFgtL6zEQgNhgOzNBiTbP_4u%3Dk6igAg%40mail.gmail.com\n(“Also, btw, do you know any reason to keep minRecoveryPoint at a low\nvalue?”)\n\n\n", "msg_date": "Fri, 7 May 2021 12:46:46 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Michail Nikolaev <michail.nikolaev@gmail.com> wrote:\n\n> Hello, Antonin.\n> \n> > I'm trying to review the patch, but not sure if I understand this problem,\n> > please see my comment below.\n> \n> Thanks a lot for your attention. It is strongly recommended to look at\n> version N3 (1) because it is a much more elegant, easy, and reliable\n> solution :) But the minRecoveryPoint-related issue affects it anyway.\n\nIndeed I'm reviewing (1), but I wanted to discuss this particular question in\ncontext, so I replied here.\n\n> > Why doesn't minRecoveryPoint get updated to 20? IMO that should happen by\n> > replaying the commit record. And if the standby happens to crash before the\n> > commit record could be replayed, no query should see the deletion and thus no\n> > hint bit should be set in the index.\n> \n> minRecoveryPoint is not affected by replaying the commit record in\n> most cases. It is updated in a lazy way, something like this:\n> minRecoveryPoint = max LSN of flushed page. 
Version 3 of a patch\n> contains a code_optional.patch to move minRecoveryPoint more\n> aggressively to get additional performance on standby (based on\n> Peter’s answer in (2).\n\n> So, “minRecoveryPoint will go here” is not because of “STANDBY INSERTS\n> NEW ROW IN INDEX” it is just a random event.\n> Michail.\n\nSorry, I missed the fact that your example can be executed inside BEGIN - END\nblock, in which case minRecoveryPoint won't advance after each command.\n\nI'll continue my review by replying to (1)\n\n> [1]: https://www.postgresql.org/message-id/CANtu0ohHu1r1xQfTzEJuxeaOMYncG7xRxUQWdH%3DcMXZSf%2Bnzvg%40mail.gmail.com\n> [2]: https://www.postgresql.org/message-id/CAH2-WzkSUcuFukhJdSxHFgtL6zEQgNhgOzNBiTbP_4u%3Dk6igAg%40mail.gmail.com\n\n> (“Also, btw, do you know any reason to keep minRecoveryPoint at a low\n> value?”)\n\nI'm not an expert in this area (I'm reviewing this patch also to learn more\nabout recovery and replication), but after a breif research I think that\npostgres tries not to update the control file too frequently, see comments in\nUpdateMinRecoveryPoint(). I don't know if what you do in code_optional.patch\nwould be a problem. 
Actually I think that a commit record should be replayed\nmore often than XLOG_RUNNING_XACTS, shouldn't it?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 10 May 2021 13:48:10 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello,\nAntonin.\n\n> Sorry, I missed the fact that your example can be executed inside BEGIN - END\n> block, in which case minRecoveryPoint won't advance after each command.\n\nNo, the block is not executed as a single transaction; all commands\nare separate transactions (see below).\n\n> Actually I think that a commit record should be replayed\n> more often than XLOG_RUNNING_XACTS, shouldn't it?\n\nYes, but replaying commit records DOES NOT affect minRecoveryPoint in\nalmost all cases.\n\nUpdateMinRecoveryPoint is called by XLogFlush, but xact_redo_commit\ncalls XLogFlush only in two cases:\n* DropRelationFiles is called (some relations are dropped)\n* ForceSyncCommit was used on the primary - a few “heavy” commands, like\nDropTableSpace, CreateTableSpace, movedb, etc.\n\nBut a “regular” commit record is replayed without XLogFlush and, as a\nresult, without UpdateMinRecoveryPoint.\n\nSo, in practice, minRecoveryPoint is updated in an “async” way\nby the checkpoint job. 
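A toy model of this behaviour (pure illustration with invented names - not the real PostgreSQL code): replaying a plain commit record leaves the minimum recovery point untouched, while flushing a page lazily advances it to the flushed page's LSN.

```c
#include <stdint.h>

/* Toy model only - not the real recovery code. */
typedef uint64_t ToyLsn;

static ToyLsn toy_min_recovery_point = 0;

/* Replaying an ordinary commit record does not touch the recovery point. */
static void
toy_redo_plain_commit(ToyLsn commit_lsn)
{
    (void) commit_lsn;          /* no XLogFlush -> no update */
}

/* Flushing a page (or a "forced sync" commit) advances it lazily. */
static void
toy_update_min_recovery_point(ToyLsn flushed_page_lsn)
{
    if (flushed_page_lsn > toy_min_recovery_point)
        toy_min_recovery_point = flushed_page_lsn;
}
```

So in the model, as in the description above, the recovery point only moves when something actually flushes a page, not when a commit record is replayed.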
This is why there is a sense to call it on\nXLOG_RUNNING_XACTS.\n\nThanks,\nMichail.\n\n\n", "msg_date": "Mon, 10 May 2021 16:05:09 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Michail Nikolaev <michail.nikolaev@gmail.com> wrote:\n\n> > Sorry, I missed the fact that your example can be executed inside BEGIN - END\n> > block, in which case minRecoveryPoint won't advance after each command.\n> \n> No, the block is not executed as a single transaction, all commands\n> are separated transactions (see below)\n> \n> > Actually I think that a commit record should be replayed\n> > more often than XLOG_RUNNING_XACTS, shouldn't it?\n> \n> Yes, but replaying commit records DOES NOT affect minRecoveryPoint in\n> almost all cases.\n> \n> UpdateMinRecoveryPoint is called by XLogFlush, but xact_redo_commit\n> calls XLogFlush only in two cases:\n> * DropRelationFiles is called (some relation are dropped)\n> * If ForceSyncCommit was used on primary - few “heavy” commands, like\n> DropTableSpace, CreateTableSpace, movedb, etc.\n> \n> But “regular” commit record is replayed without XLogFlush and, as\n> result, without UpdateMinRecoveryPoint.\n\nok, I missed this. 
Thanks for explanation.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 10 May 2021 15:56:56 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Michail Nikolaev <michail.nikolaev@gmail.com> wrote:\n\n> After some correspondence with Peter Geoghegan (1) and his ideas, I\n> have reworked the patch a lot and now it is much more simple with even\n> better performance (no new WAL or conflict resolution, hot standby\n> feedback is unrelated).\n\nMy review that started in [1] continues here.\n\n(Please note that code.patch does not apply to the current master branch.)\n\nI think I understand your approach now and couldn't find a problem by reading\nthe code. What I consider worth improving is documentation, both code comments\nand nbtree/README. Especially for the problem discussed in [1] it should be\nexplained what would happen if kill_prior_tuple_min_lsn was not checked.\n\n\nAlso, in IsIndexLpDeadAllowed() you say that invalid\ndeadness->latest_removed_xid means the following:\n\n /*\n * Looks like it is tuple cleared by heap_page_prune_execute,\n * we must be sure if LSN of XLOG_HEAP2_CLEAN (or any subsequent\n * updates) less than minRecoveryPoint to avoid MVCC failure\n * after crash recovery.\n */\n\nHowever I think there's one more case: if heap_hot_search_buffer() considers\nall tuples in the chain to be \"surely dead\", but\nHeapTupleHeaderAdvanceLatestRemovedXid() skips them all for this reason:\n\n /*\n * Ignore tuples inserted by an aborted transaction or if the tuple was\n * updated/deleted by the inserting transaction.\n *\n * Look for a committed hint bit, or if no xmin bit is set, check clog.\n */\n\nI think that the dead tuples produced this way should never be visible on the\nstandby (and even if they were, they would change the page LSN so your\nalgorithm would treat them correctly) 
so I see no correctness problem. But it\nmight be worth explaining better the meaning of invalid \"latest_removed_xid\"\nin comments.\n\n\nIn the nbtree/README, you say\n\n \"... if the commit record of latestRemovedXid is more ...\"\n\nbut it's not clear to me what \"latestRemovedXid\" is. If you mean the\nscan->kill_prior_tuple_min_lsn field, you probably need more words to explain\nit.\n\n\n* IsIndexLpDeadAllowed()\n\n /* It all always allowed on primary if *all_dead. */\n\nshould probably be\n\n /* It is always allowed on primary if *all_dead. */\n\n\n* gistkillitems()\n\nAs the function is only called if (so->numKilled > 0), I think both\n\"killedsomething\" and \"dirty\" variables should always have the same value, so\none variable should be enough. Assert(so->numKilled) would be appropriate in\nthat case.\n\nThe situation is similar for btree and hash indexes.\n\n\ndoc.patch:\n\n\"+applying the fill page write.\"\n\n\n\n[1] https://www.postgresql.org/message-id/61470.1620647290%40antos\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 10 May 2021 17:48:26 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello,\nAntonin.\n\n> My review that started in [1] continues here.\nThanks a lot for the review.\n\n> (Please note that code.patch does not apply to the current master branch.)\nRebased.\n\n> Especially for the problem discussed in [1] it should be\n> explained what would happen if kill_prior_tuple_min_lsn was not checked.\nUpdated README, hope it is better now. 
Also, added few details related\nto the flush of hint bits.\n\n> However I think there's one more case: if heap_hot_search_buffer() considers\n> all tuples in the chain to be \"surely dead\", but\n> HeapTupleHeaderAdvanceLatestRemovedXid() skips them all for this reason:\nYes, good catch, missed it.\n\n> I think that the dead tuples produced this way should never be visible on the\n> standby (and even if they were, they would change the page LSN so your\n> algorithm would treat them correctly) so I see no correctness problem. But it\n> might be worth explaining better the meaning of invalid \"latest_removed_xid\"\n> in comments.\nAdded additional comment.\n\n> but it's not clear to me what \"latestRemovedXid\" is. If you mean the\n> scan->kill_prior_tuple_min_lsn field, you probably need more words to explain\n> it.\nHope it is better now.\n\n> should probably be\n> /* It is always allowed on primary if *all_dead. */\nFixed.\n\n> As the function is only called if (so->numKilled > 0), I think both\n> \"killedsomething\" and \"dirty\" variables should always have the same value, so\n> one variable should be enough. Assert(so->numKilled) would be appropriate in\n> that case.\nFixed, but partly. 
It is because I have added additional checks for a\nlong transaction in the case of promoted server.\n\n> \"+applying the fill page write.\"\nFixed.\n\nUpdated version in attach.\n\nThanks a lot,\nMichail.", "msg_date": "Wed, 12 May 2021 23:12:09 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello.\n\nAdded a check for standby promotion with the long transaction to the\ntest (code and docs are unchanged).\n\nThanks,\nMichail.", "msg_date": "Fri, 14 May 2021 00:37:26 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Michail Nikolaev <michail.nikolaev@gmail.com> wrote:\n\n> Hello.\n> \n> Added a check for standby promotion with the long transaction to the\n> test (code and docs are unchanged).\n\nI'm trying to continue the review, sorry for the delay. Following are a few\nquestion about the code:\n\n* Does the masking need to happen in the AM code, e.g. _bt_killitems()? I'd\n expect that the RmgrData.rm_fpi_mask can do all the work.\n\n Maybe you're concerned about clearing the \"LP-safe-on-standby\" bits after\n promotion, but I wouldn't consider this a problem: once the standby is\n allowed to set the hint bits (i.e. minRecoveryPoint is high enough, see\n IsIndexLpDeadAllowed() -> XLogNeedsFlush()), promotion shouldn't break\n anything because it should not allow minRecoveryPoint to go backwards.\n\n* How about modifying rm_mask() instead of introducing rm_fpi_mask()? Perhaps\n a boolean argument can be added to distinguish the purpose of the masking.\n\n* Are you sure it's o.k. to use mask_lp_flags() here? It sets the item flags\n to LP_UNUSED unconditionally, which IMO should only be done by VACUUM. 
I\n think you only need to revert the effect of prior ItemIdMarkDead(), so you\n only need to change the status LP_DEAD to LP_NORMAL if the tuple still has\n storage. (And maybe add an assertion to ItemIdMarkDead() confirming that\n it's only used for LP_NORMAL items?)\n\n As far as I understand, the current code only uses mask_lp_flags() during\n WAL consistency check on copies of pages which don't eventually get written\n to disk.\n\n* IsIndexLpDeadAllowed()\n\n ** is bufmgr.c the best location for this function?\n\n ** the header comment should explain the minLsn argument.\n\n ** comment\n\n/* It is always allowed on primary if *all_dead. */\n\nshould probably be\n\n/* It is always allowed on primary if ->all_dead. */\n\n* comment: XLOG_HEAP2_CLEAN has been renamed to XLOG_HEAP2_PRUNE in PG14.\n\n\nOn regression tests:\n\n* Is the purpose of the repeatable read (RR) snapshot to test that\n heap_hot_search_buffer() does not set deadness->all_dead if some transaction\n can still see a tuple of the chain? If so, I think the RR snapshot does not\n have to be used in the tests because this patch does not really affect the\n logic: heap_hot_search_buffer() only sets deadness->all_dead to false, just\n like it sets *all_dead in the current code. Besides that,\n IsIndexLpDeadAllowed() too can avoid setting of the LP_DEAD flag on an index\n tuple (at least until the commit record of the deleting/updating transaction\n gets flushed to disk), so it can hide the behaviour of\n heap_hot_search_buffer().\n\n* Unless I miss something, the tests check that the hint bits are not\n propagated from primary (or they are propagated but marked non-safe),\n however there's no test to check that standby does set the hint bits itself.\n\n* I'm also not sure if promotion needs to be tested. What's specific about the\n promoted cluster from the point of view of this feature? 
The only thing I\n can think of is clearing of the \"LP-safe-on-standby\" bits, but, as I said\n above, I'm not sure if the tests ever let standby to set those bits before\n the promotion.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 20 Sep 2021 11:53:57 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, Antonin.\n\n> I'm trying to continue the review, sorry for the delay. Following are a few\n> question about the code:\n\nThanks for the review :) And sorry for the delay too :)\n\n> * Does the masking need to happen in the AM code, e.g. _bt_killitems()?\n> I'd expect that the RmgrData.rm_fpi_mask can do all the work.\n\nRmgrData.rm_fpi_mask clears a single BTP_LP_SAFE_ON_STANDBY bit only\nto indicate that hints bit are not safe to be used on standby.\nWhy do not clear LP_DEAD bits in rm_fpi_mask? There is no sense\nbecause we could get such bits in multiple ways:\n\n* the standby was created from the base backup of the primary\n* some pages were changed by pg_rewind\n* the standby was updated to the version having this feature (so, old\npages still contains LP_DEAD)\n\nSo, AM code needs to know when and why clear LP_DEAD bits if\nBTP_LP_SAFE_ON_STANDBY is not set.\nAlso, the important moment here is pg_memory_barrier() usage.\n\n> * How about modifying rm_mask() instead of introducing rm_fpi_mask()? Perhaps\n> a boolean argument can be added to distinguish the purpose of the masking.\n\nI have tried this way but the code was looking dirty and complicated.\nAlso, the separated fpi_mask provides some semantics to the function.\n\n> * Are you sure it's o.k. to use mask_lp_flags() here? It sets the item flags\n> to LP_UNUSED unconditionally, which IMO should only be done by VACUUM.\nOh, good catch. I made mask_lp_dead for this. 
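Roughly, the intended behaviour can be sketched like this (invented helper and simplified flag values - the real function walks the line pointers of a page): only the effect of ItemIdMarkDead() is reverted, turning LP_DEAD items that still have storage back into LP_NORMAL instead of unconditionally making them LP_UNUSED.

```c
#include <stdint.h>

/* Simplified line-pointer states, mirroring PostgreSQL's lp_flags values. */
#define LP_UNUSED 0
#define LP_NORMAL 1
#define LP_DEAD   3

/*
 * Sketch of the masking discussed above (invented helper, not the patch
 * itself): revert only what ItemIdMarkDead() did.  An LP_DEAD item that
 * still has storage goes back to LP_NORMAL; everything else - including
 * making items LP_UNUSED, which only VACUUM may do - is left alone.
 */
static uint8_t
toy_mask_lp_dead(uint8_t lp_flags, int has_storage)
{
    if (lp_flags == LP_DEAD && has_storage)
        return LP_NORMAL;
    return lp_flags;
}
```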
Also, added such a\nsituation to the test.\n\n> ** is bufmgr.c the best location for this function?\nMoved to indexam.c and made static (is_index_lp_dead_allowed).\n\n > should probably be\n > /* It is always allowed on primary if ->all_dead. */\nFixed.\n\n> * comment: XLOG_HEAP2_CLEAN has been renamed to XLOG_HEAP2_PRUNE in PG14.\nFixed.\n\n> * Is the purpose of the repeatable read (RR) snapshot to test that\n> heap_hot_search_buffer() does not set deadness->all_dead if some transaction\n> can still see a tuple of the chain?\n\nThe main purpose is to test xactStartedInRecovery logic after the promotion.\nFor example -\n > if (scan->xactStartedInRecovery && !RecoveryInProgress())`\n\n> * Unless I miss something, the tests check that the hint bits are not\n> propagated from primary (or they are propagated but marked non-safe),\n> however there's no test to check that standby does set the hint bits itself.\n\nIt is tested on different standby, see\n > is(hints_num($node_standby_2), qq(10), 'index hint bits already\nset on second standby 2');\n\nAlso, I added checks for BTP_LP_SAFE_ON_STANDBY to make sure\neverything in the test goes by scenario.\n\nThanks a lot,\nMichail.", "msg_date": "Thu, 30 Sep 2021 01:09:23 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Michail Nikolaev <michail.nikolaev@gmail.com> wrote:\n\n> > * Is the purpose of the repeatable read (RR) snapshot to test that\n> > heap_hot_search_buffer() does not set deadness->all_dead if some transaction\n> > can still see a tuple of the chain?\n> \n> The main purpose is to test xactStartedInRecovery logic after the promotion.\n> For example -\n> > if (scan->xactStartedInRecovery && !RecoveryInProgress())`\n\nI understand that the RR snapshot is used to check the MVCC behaviour, however\nthis comment seems to indicate that the RR snapshot should also prevent 
the\nstandby from setting the hint bits.\n\n# Make sure previous queries not set the hints on standby because\n# of RR snapshot\n\nI can imagine that on the primary, but I don't think that the backend that\nchecks visibility on standby checks other snapshots/backends. And it\ndidn't work when I ran the test manually, although I could have missed\nsomething.\n\n\nA few more notes regarding the tests:\n\n* 026_standby_index_lp_dead.pl should probably be renamed to\n 027_standby_index_lp_dead.pl (026_* was created in the master branch\n recently)\n\n\n* The test fails, although I have configured the build with\n --enable-tap-tests.\n\nBEGIN failed--compilation aborted at t/026_standby_index_lp_dead.pl line 5.\nt/026_standby_index_lp_dead.pl .. Dubious, test returned 2 (wstat 512, 0x200)\n\nI suspect the testing infrastructure changed recently.\n\n* The messages like this\n\nis(hints_num($node_standby_1), qq(10),\n 'hints are set on standby1 because FPI but marked as non-safe');\n\n say that the hints are \"marked as non-safe\", but the hints_num() function\n does not seem to check that.\n\n* wording:\n\nis(hints_num($node_standby_2), qq(10), 'index hint bits already set on second standby 2');\n->\nis(hints_num($node_standby_2), qq(10), 'index hint bits already set on standby 2');\n\n\nAnd a few more notes on the code:\n\n* There's an extra semicolon in mask_lp_dead():\n\nbufmask.c:148:38: warning: for loop has empty body [-Wempty-body]\n offnum = OffsetNumberNext(offnum));\n ^\nbufmask.c:148:38: note: put the semicolon on a separate line to silence this warning\n\n* the header comment of heap_hot_search_buffer() still says \"*all_dead\"\n whereas I'd expect \"->all_dead\".\n\n The same for \"*page_lsn\".\n\n* I can see no test for the INDEX_LP_DEAD_OK_MIN_LSN value of the\n IndexLpDeadAllowedResult enumeration. Shouldn't there be only two values,\n e.g. INDEX_LP_DEAD_OK and INDEX_LP_DEAD_MAYBE_OK ? 
Or a boolean variable (in\n index_fetch_heap()) of the appropriate name, e.g. kill_maybe_allowed, and\n rename the function is_index_lp_dead_allowed() to\n is_index_lp_dead_maybe_allowed()?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 03 Nov 2021 18:32:21 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, Antonin.\n\nThanks for pushing it forward.\n\n> I understand that the RR snapshot is used to check the MVCC behaviour, however\n> this comment seems to indicate that the RR snapshot should also prevent the\n> standb from setting the hint bits.\n> # Make sure previous queries not set the hints on standby because\n> # of RR snapshot\n> I can imagine that on the primary, but I don't think that the backend that\n> checks visibility on standby does checks other snapshots/backends. And it\n> didn't work when I ran the test manually, although I could have missed\n> something.\n\nYes, it checks - you could see ComputeXidHorizons for details. It is\nthe main part of the correctness of the whole feature. I added some\ndetails about it to the test.\n\n> * 026_standby_index_lp_dead.pl should probably be renamed to\n> 027_standby_index_lp_dead.pl (026_* was created in the master branch\n> recently)\n\nDone.\n\n> BEGIN failed--compilation aborted at t/026_standby_index_lp_dead.pl line 5.\n> t/026_standby_index_lp_dead.pl .. Dubious, test returned 2 (wstat 512, 0x200)\n\nFixed.\n\n> * The messages like this\n\nFixed.\n\n > * There's an extra colon in mask_lp_dead():\n\nOh, it is a huge error really (the loop was empty) :) Fixed.\n\n> * the header comment of heap_hot_search_buffer() still says \"*all_dead\"\n> whereas I'd expect \"->all_dead\".\n> The same for \"*page_lsn\".\n\nI was trying to mimic the style of comment (it says about “*tid” from\n2007). 
So, I think it is better to keep it in the same style for the\nwhole function comment.\n\n> * I can see no test for the INDEX_LP_DEAD_OK_MIN_LSN value of the\n> IndexLpDeadAllowedResult enumeration. Shouldn't there be only two values,\n> e.g. INDEX_LP_DEAD_OK and INDEX_LP_DEAD_MAYBE_OK ? Or a boolean variable (in\n> index_fetch_heap()) of the appropriate name, e.g. kill_maybe_allowed, and\n> rename the function is_index_lp_dead_allowed() to\n> is_index_lp_dead_maybe_allowed()?\n\nYes, this way it looks better. Done. Also, I have added some checks\nfor “maybe” LSN-related logic to the test.\n\nThanks a lot,\nMichail.", "msg_date": "Fri, 5 Nov 2021 19:51:32 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Michail Nikolaev <michail.nikolaev@gmail.com> wrote:\n\n> > I understand that the RR snapshot is used to check the MVCC behaviour, however\n> > this comment seems to indicate that the RR snapshot should also prevent the\n> > standby from setting the hint bits.\n> > # Make sure previous queries not set the hints on standby because\n> > # of RR snapshot\n> > I can imagine that on the primary, but I don't think that the backend that\n> > checks visibility on standby checks other snapshots/backends. And it\n> > didn't work when I ran the test manually, although I could have missed\n> > something.\n> \n> Yes, it checks - you could see ComputeXidHorizons for details. It is\n> the main part of the correctness of the whole feature. I added some\n> details about it to the test.\n\nAh, ok. I thought that only KnownAssignedXids is used on standby, but that\nwould ignore the RR snapshot. It wasn't clear to me when the xmin of the\nhot-standby backends is set; now I think it's done by GetSnapshotData().\n\n> > * I can see no test for the INDEX_LP_DEAD_OK_MIN_LSN value of the\n> > IndexLpDeadAllowedResult enumeration. 
Shouldn't there be only two values,\n> > e.g. INDEX_LP_DEAD_OK and INDEX_LP_DEAD_MAYBE_OK ? Or a boolean variable (in\n> > index_fetch_heap()) of the appropriate name, e.g. kill_maybe_allowed, and\n> > rename the function is_index_lp_dead_allowed() to\n> > is_index_lp_dead_maybe_allowed()?\n> \n> Yes, this way it is looks better. Done. Also, I have added some checks\n> for “maybe” LSN-related logic to the test.\n\nAttached is a proposal for a minor addition that would make sense to me, add\nit if you think it's appropriate.\n\nI think I've said enough, changing the status to \"ready for committer\" :-)\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Tue, 09 Nov 2021 12:01:44 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "I have changed approach, so it is better to start from this email:\r\nhttps://www.postgresql.org/message-id/flat/CANtu0ohHu1r1xQfTzEJuxeaOMYncG7xRxUQWdH%3DcMXZSf%2Bnzvg%40mail.gmail.com#4c81a4d623d8152f5e8889e97e750eec", "msg_date": "Tue, 09 Nov 2021 15:58:16 +0000", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Woo-hoo :)\n\n> Attached is a proposal for a minor addition that would make sense to me, add\n> it if you think it's appropriate.\n\nYes, I'll add to the patch.\n\n> I think I've said enough, changing the status to \"ready for committer\" :-)\n\nThanks a lot for your help and attention!\n\nBest regards,\nMichail.\n\n\n", "msg_date": "Tue, 9 Nov 2021 19:00:24 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello.\n\n> Attached is a proposal for a minor addition that would make sense to me, add\n> it 
if you think it's appropriate.\n\nAdded. Also, I updated the documentation a little.\n\n> I have changed approach, so it is better to start from this email:\n\nOops, I was thinking the comments feature in the commitfest app works\nin a different way :)\n\nBest regards,\nMichail.", "msg_date": "Tue, 9 Nov 2021 22:05:49 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hi,\n\nOn Wed, Nov 10, 2021 at 3:06 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n>\n> > Attached is a proposal for a minor addition that would make sense to me, add\n> > it if you think it's appropriate.\n>\n> Added. Also, I updated the documentation a little.\n>\n> > I have changed approach, so it is better to start from this email:\n>\n> Oops, I was thinking the comments feature in the commitfest app works\n> in a different way :)\n\nThe cfbot reports that this patch is currently failing at least on\nLinux and Windows, e.g. https://cirrus-ci.com/task/6532060239101952.\n\nI'm switching this patch to Waiting on Author.\n\n\n", "msg_date": "Wed, 12 Jan 2022 13:49:49 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, Julien.\n\nThanks for your attention.\n\n> The cfbot reports that this patch is currently failing at least on\n> Linux and Windows, e.g. https://cirrus-ci.com/task/6532060239101952.\n\nFixed. 
It was the issue with the test - hangs on Windows because of\npsql + spurious vacuum sometimes.\n\n> I'm switching this patch on Waiting on Author.\n\nI have tested it multiple times on my Github repo, seems to be stable now.\nSwitching back to Ready for committer.\n\nBest regards.\nMichail.", "msg_date": "Sat, 15 Jan 2022 20:39:14 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "On Sat, Jan 15, 2022 at 08:39:14PM +0300, Michail Nikolaev wrote:\n> Hello, Junien.\n> \n> Thanks for your attention.\n> \n> > The cfbot reports that this patch is currently failing at least on\n> > Linux and Windows, e.g. https://cirrus-ci.com/task/6532060239101952.\n> \n> Fixed. It was the issue with the test - hangs on Windows because of\n> psql + spurious vacuum sometimes.\n\nIt looks like there's still a server crash caused the CI or client to hang.\n\nhttps://cirrus-ci.com/task/6350310141591552\n2022-01-13 06:31:04.182 GMT [8636][walreceiver] FATAL: could not receive data from WAL stream: server closed the connection unexpectedly\n\t\tThis probably means the server terminated abnormally\n\t\tbefore or while processing the request.\n2022-01-13 06:31:04.182 GMT [6848][startup] LOG: invalid record length at 0/3014B58: wanted 24, got 0\n2022-01-13 06:31:04.228 GMT [8304][walreceiver] FATAL: could not connect to the primary server: connection to server on socket \"C:/Users/ContainerAdministrator/AppData/Local/Temp/_7R9Pa5CwW/.s.PGSQL.58307\" failed: Connection refused (0x0000274D/10061)\n\n\n", "msg_date": "Sat, 15 Jan 2022 12:42:21 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, Justin.\n\nThanks for your attention.\nAfter some investigation, I think I have found the problem. 
It is\ncaused by XLOG_RUNNING_XACTS at an undetermined moment (some test\nparts rely on it).\n\nNow test waits for XLOG_RUNNING_XACTS to happen (maximum is 15s) and\nproceed forward.\n\nI'll move entry back to \"Ready for Committer\" once it passes tests.\n\nBest regards,\nMichail.", "msg_date": "Mon, 24 Jan 2022 10:33:43 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 24, 2022 at 10:33:43AM +0300, Michail Nikolaev wrote:\n> \n> Thanks for your attention.\n> After some investigation, I think I have found the problem. It is\n> caused by XLOG_RUNNING_XACTS at an undetermined moment (some test\n> parts rely on it).\n> \n> Now test waits for XLOG_RUNNING_XACTS to happen (maximum is 15s) and\n> proceed forward.\n> \n> I'll move entry back to \"Ready for Committer\" once it passes tests.\n\nIt looks like you didn't fetch the latest upstream commits in a while as this\nversion is still conflicting with 7a5f6b474 (Make logical decoding a part of\nthe rmgr) from 6 days ago.\n\nI rebased the patchset in attached v9. Please double check that I didn't miss\nanything in the rebase.", "msg_date": "Tue, 25 Jan 2022 19:21:01 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 25, 2022 at 07:21:01PM +0800, Julien Rouhaud wrote:\n> > \n> > I'll move entry back to \"Ready for Committer\" once it passes tests.\n> \n> It looks like you didn't fetch the latest upstream commits in a while as this\n> version is still conflicting with 7a5f6b474 (Make logical decoding a part of\n> the rmgr) from 6 days ago.\n> \n> I rebased the patchset in attached v9. 
Please double check that I didn't miss\n> anything in the rebase.\n\nFTR the cfbot is now happy with this version:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/2947.\n\nI will let you mark the patch as Ready for Committer once you validate that the\nrebase was ok.\n\n\n", "msg_date": "Tue, 25 Jan 2022 20:28:42 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, Julien.\n\n> I rebased the patchset in attached v9. Please double check that I didn't miss\n> anything in the rebase.\n\nThanks a lot for your help.\n\n> I will let you mark the patch as Ready for Committer once you validate that the\n> rebase was ok.\n\nYes, rebase looks good.\n\nBest regards,\nMichail.\n\n\n", "msg_date": "Wed, 26 Jan 2022 10:49:55 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hi,\n\nOn 2022-01-25 19:21:01 +0800, Julien Rouhaud wrote:\n> I rebased the patchset in attached v9. Please double check that I didn't miss\n> anything in the rebase.\n\nFails to apply at the moment: http://cfbot.cputube.org/patch_37_2947.log\n\nMarked as waiting for author.\n\n- Andres\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:07:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, Andres.\n\n> Fails to apply at the moment: http://cfbot.cputube.org/patch_37_2947.log\n\nThanks for notifying me. 
BTW, some kind of automatic email in case of\nstatus change could be very helpful.\n\n> Marked as waiting for author.\n\nNew version is attached, build is passing\n(https://cirrus-ci.com/build/5599876384817152), so, moving it back to\n\"ready for committer\" .\n\nBest regards,\nMichail.", "msg_date": "Tue, 22 Mar 2022 16:52:09 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "On Tue, 22 Mar 2022 at 09:52, Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n>\n> Thanks for notifying me. BTW, some kind of automatic email in case of\n> status change could be very helpful.\n\nI agree but realize the cfbot is quite new and I guess the priority is\nto work out any kinks before spamming people with false positives.\n\n> New version is attached, build is passing\n> (https://cirrus-ci.com/build/5599876384817152), so, moving it back to\n> \"ready for committer\" .\n\nI'm seeing a recovery test failure. 
Not sure if this represents an\nactual bug or just a test that needs to be adjusted for the new\nbehaviour.\n\nhttps://cirrus-ci.com/task/5711008294502400\n\n[14:42:46.885] # Failed test 'no new index hint bits are set on new standby'\n[14:42:46.885] # at t/027_standby_index_lp_dead.pl line 262.\n[14:42:46.885] # got: '12'\n[14:42:46.885] # expected: '11'\n[14:42:47.147]\n[14:42:47.147] # Failed test 'hint not marked as standby-safe'\n[14:42:47.147] # at t/027_standby_index_lp_dead.pl line 263.\n[14:42:47.147] # got: '1'\n[14:42:47.147] # expected: '0'\n[14:42:49.723] # Looks like you failed 2 tests of 30.\n[14:42:49.750] [14:42:49] t/027_standby_index_lp_dead.pl .......\n[14:42:49.761] Dubious, test returned 2 (wstat 512, 0x200)\n[14:42:49.761] Failed 2/30 subtests\n\n\n\n\n-- \ngreg\n\n\n", "msg_date": "Mon, 28 Mar 2022 15:40:08 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "On Mon, Mar 28, 2022 at 12:40 PM Greg Stark <stark@mit.edu> wrote:\n> I'm seeing a recovery test failure. Not sure if this represents an\n> actual bug or just a test that needs to be adjusted for the new\n> behaviour.\n>\n> https://cirrus-ci.com/task/5711008294502400\n\nI doubt that the patch's use of pg_memory_barrier() in places like\n_bt_killitems() is correct. There is no way to know for sure if this\nnovel new lockless algorithm is correct or not, since it isn't\nexplained anywhere.\n\nThe existing use of memory barriers is pretty much limited to a\nhandful of performance critical code paths, none of which are in\naccess method code that reads from shared_buffers. 
So this is not a\nminor oversight.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Mar 2022 13:23:11 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "On Mon, Mar 28, 2022 at 1:23 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I doubt that the patch's use of pg_memory_barrier() in places like\n> _bt_killitems() is correct.\n\nI also doubt that posting list splits are handled correctly.\n\nIf there is an LP_DEAD bit set on a posting list on the primary, and\nwe need to do a posting list split against the posting tuple, we need\nto be careful -- we cannot allow our new TID to look like it's LP_DEAD\nimmediately, before our transaction even commits/aborts. We cannot\nswap out our new TID with an old LP_DEAD TID, because we'll think that\nour new TID is LP_DEAD when we shouldn't.\n\nThis is currently handled by having the inserted do an early round of\nsimple/LP_DEAD index tuple deletion, using the \"simpleonly\" argument\nfrom _bt_delete_or_dedup_one_page(). Obviously the primary cannot be\nexpected to know that one of its standbys has independently set a\nposting list's LP_DEAD bit, though. At the very least you need to\nteach the posting list split path in btree_xlog_insert() about all\nthis -- it's not necessarily sufficient to clear LP_DEAD bits in the\nindex AM's fpi_mask() routine.\n\nOverall, I think that this patch has serious design flaws, and that\nthis issue is really just a symptom of a bigger problem.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Mar 2022 14:46:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, Greg.\n\n> I'm seeing a recovery test failure. 
Not sure if this represents an\n> actual bug or just a test that needs to be adjusted for the new\n> behaviour.\n\nThanks for notifying me. It is a failure of a test added in the patch.\nIt is a little hard to make it stable (because it depends on\nminRecoveryLSN which could be changed in an asynchronous way without any\ncontrol). I’ll check how to make it more stable.\n\nThanks,\nMichail.\n\n\n", "msg_date": "Tue, 29 Mar 2022 14:52:44 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, Peter.\n\nThanks for your review!\n\n> I doubt that the patch's use of pg_memory_barrier() in places like\n> _bt_killitems() is correct. There is no way to know for sure if this\n> novel new lockless algorithm is correct or not, since it isn't\n> explained anywhere.\n\nThe memory barrier is used only to ensure memory ordering in case of\nclearing LP_DEAD bits. Just to make sure the flag allowing the use of\nLP_DEAD is seen AFTER bits are cleared.\nYes, it should be described in more detail.\nThe flapping test is one added in the patch and not related to memory\nordering. I have already tried to make it stable once before, but it\ndepends on minRecoveryLSN propagation. I’ll think about how to make it\nstable.\n\n> If there is an LP_DEAD bit set on a posting list on the primary, and\n> we need to do a posting list split against the posting tuple, we need\n> to be careful -- we cannot allow our new TID to look like it's LP_DEAD\n> immediately, before our transaction even commits/aborts. We cannot\n> swap out our new TID with an old LP_DEAD TID, because we'll think that\n> our new TID is LP_DEAD when we shouldn't.\n\nOh, good catch! I was thinking it is safe to have additional hint bits\non primary, but it seems like no. 
BTW I am wondering if it is possible\nto achieve the same situation by pg_rewind and standby promotion…\n\n> Overall, I think that this patch has serious design flaws, and that\n> this issue is really just a symptom of a bigger problem.\n\nCould you please advise me on something? The ways I see:\n* give up :)\n* try to fix this concept\n* go back to concept with LP_DEAD horizon WAL and optional cancellation\n* try to limit scope on “allow standby to use LP_DEAD set on primary\nin some cases” (by marking something in btree page probably)\n* look for some new way\n\nBest regards,\nMichail.\n\n\n", "msg_date": "Tue, 29 Mar 2022 14:55:04 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "UPD:\n\n> I was thinking it is safe to have additional hint bits\n> on primary, but it seems like no.\n\nOh, sorry for the mistake, it is about standby of course.\n\n> BTW I am wondering if it is possible\n> to achieve the same situation by pg_rewind and standby promotion…\n\nLooks like it is impossible, because wal_log_hints is required in\norder to use pg_rewind.\nIt is possible to achieve a situation with some additional LP_DEAD on\nstandby compared to the primary, but any change on primary would cause\nFPI, so LP_DEAD will be cleared.\n\nThanks,\nMichail.\n\n\n", "msg_date": "Tue, 29 Mar 2022 19:51:18 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "On Tue, Mar 29, 2022 at 4:55 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> > Overall, I think that this patch has serious design flaws, and that\n> > this issue is really just a symptom of a bigger problem.\n>\n> Could you please advise me on something? 
The ways I see:\n> * give up :)\n\nI would never tell anybody to give up on something like this, because\nI don't really have the right to do so. And because it really isn't my\nstyle.\n\n> * try to fix this concept\n> * go back to concept with LP_DEAD horizon WAL and optional cancellation\n> * try to limit scope on “allow standby to use LP_DEAD set on primary\n\nThe simple answer is: I don't know. I could probably come up with a\nbetter answer than that, but it would take real effort, and time.\n\n> in some cases” (by marking something in btree page probably)\n> * look for some new way\n\nYou seem like a smart guy, and I respect the effort that you have put\nin already -- I really do. But I think that you have unrealistic ideas\nabout how to be successful with a project like this.\n\nThe reality is that the Postgres development process gives authority\nto a relatively small number of committers. This is not a perfect\nsystem, at all, but it's the best system that we have. Only a minority\nof the committers are truly experienced with the areas of the code\nthat your patch touches -- so the number of people that are ever\nlikely to commit a patch like that is very small (even if the patch\nwas perfect). You need to convince at least one of them to do so, or\nelse your patch doesn't get into PostgreSQL, no matter what else may\nbe true. I hope that my remarks don't seem disdainful or belittling --\nthat is not my intention. These are just facts.\n\nI think that you could do a better job of explaining and promoting the\nproblem that you're trying to solve here. Emphasis on the problem, not\nso much the solution. Only a very small number of patches don't need\nto be promoted. Of course I can see that the general idea has merit,\nbut that isn't enough. Why do *you* care about this problem so much?\nThe answer isn't self-evident. You have to tell us why it matters so\nmuch.\n\nYou must understand that this whole area is *scary*. 
The potential for\nserious data corruption bugs is very real. And because the whole area\nis so complicated (it is at the intersection of 2-3 complicated\nareas), we can expect those bugs to be hidden for a long time. We\nmight never be 100% sure that we've fixed all of them if the initial\ndesign is not generally robust. Most patches are not like that.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 29 Mar 2022 17:20:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "On Tue, Mar 29, 2022 at 5:20 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Tue, Mar 29, 2022 at 4:55 AM Michail Nikolaev\n> <michail.nikolaev@gmail.com> wrote:\n>\n> I think that you could do a better job of explaining and promoting the\n> problem that you're trying to solve here. Emphasis on the problem, not\n> so much the solution.\n\n\nAs a specific recommendation here - submit patches with a complete commit\nmessage. Tweak it for each new version so that any prior discussion that\ninformed the general design of the patch is reflected in the commit message.\n\nThis doesn't solve the \"experience\" issue by itself but does allow someone\nwith interest to jump in without having to read an entire thread,\nincluding false-starts and half-ideas, to understand what the patch is\ndoing, and why. At the end of the day the patch should largely speak for\nitself, and depend minimally on the discussion thread, to be understood.\n\nDavid J.\n\nOn Tue, Mar 29, 2022 at 5:20 PM Peter Geoghegan <pg@bowt.ie> wrote:On Tue, Mar 29, 2022 at 4:55 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\nI think that you could do a better job of explaining and promoting the\nproblem that you're trying to solve here. Emphasis on the problem, not\nso much the solution.As a specific recommendation here - submit patches with a complete commit message.  
Tweak it for each new version so that any prior discussion that informed the general design of the patch is reflected in the commit message.This doesn't solve the \"experience\" issue by itself but does allow someone with interest to jump in without having to read an entire thread, including false-starts and half-ideas, to understand what the patch is doing, and why.  At the end of the day the patch should largely speak for itself, and depend minimally on the discussion thread, to be understood.David J.", "msg_date": "Tue, 29 Mar 2022 17:53:08 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "On Tue, Mar 22, 2022 at 6:52 AM Michail Nikolaev <michail.nikolaev@gmail.com>\nwrote:\n\n> Hello, Andres.\n>\n> > Fails to apply at the moment: http://cfbot.cputube.org/patch_37_2947.log\n>\n> Thanks for notifying me. BTW, some kind of automatic email in case of\n> status change could be very helpful.\n>\n> > Marked as waiting for author.\n>\n> New version is attached, build is passing\n> (https://cirrus-ci.com/build/5599876384817152), so, moving it back to\n> \"ready for committer\" .\n>\n>\nThis may be a naive comment but I'm curious: The entire new second\nparagraph of the README scares me:\n\n+There are restrictions on settings LP_DEAD bits by the standby related to\n+minRecoveryPoint value. In case of crash recovery standby will start to\nprocess\n+queries after replaying WAL to minRecoveryPoint position (some kind of\nrewind to\n+the previous state). A the same time setting of LP_DEAD bits are not\nprotected\n+by WAL in any way. So, to mark tuple as dead we must be sure it was\n\"killed\"\n+before minRecoveryPoint (comparing the LSN of commit record). Another valid\n+option is to compare \"killer\" LSN with index page LSN because\nminRecoveryPoint\n+would be moved forward when the index page flushed. 
Also, in some cases\nxid of\n+\"killer\" is unknown - for example, tuples were cleared by XLOG_HEAP2_PRUNE.\n+In that case, we compare the LSN of the heap page to index page LSN.\n\nIn terms of having room for bugs this description seems like a lot of logic\nto have to get correct.\n\nCould we just do this first pass as:\n\nEnable recovery mode LP_DEAD hint bit updates after the first streamed\nCHECKPOINT record comes over from the primary.\n\n?\n\nNow, maybe there aren't any real concerns here but even then breaking up\nthe patches into enabling the general feature in a limited way and then\nensuring that it behaves sanely during the standby crash recovery window\nwould likely increase the appeal and ease the burden on the potential\ncommitter.\n\nThe proposed theory here seems sound to my inexperienced ears. I have no\nidea whether there are other bits, and/or assumptions, lurking around that\ninterfere with this though.\n\nDavid J.\n
", "msg_date": "Tue, 29 Mar 2022 18:44:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, Peter.\n\n> The simple answer is: I don't know. I could probably come up with a\n> better answer than that, but it would take real effort, and time.\n\nI remember you had an idea about using the LP_REDIRECT bit in btree\nindexes as some kind of “recently dead” flag (1).\nIs this idea still in progress? Maybe an additional bit could provide\na space for a better solution.\n\n> I think that you could do a better job of explaining and promoting the\n> problem that you're trying to solve here. Emphasis on the problem, not\n> so much the solution.\n\nSystem I am working on highly depends on the performance of reading\nfrom standby. 
In our workloads queries on standby are sometimes\n10-100x slower than on primary due to absence of LP_DEAD support.\nOther users have the same issues (2). I believe such functionality is\ngreat optimization for read replicas with both analytics and OLTP\n(read-only) workloads.\n\n> You must understand that this whole area is *scary*. The potential for\n> serious data corruption bugs is very real. And because the whole area\n> is so complicated (it is at the intersection of 2-3 complicated\n> areas), we can expect those bugs to be hidden for a long time. We\n> might never be 100% sure that we've fixed all of them if the initial\n> design is not generally robust. Most patches are not like that.\n\nMoved to “Waiting for Author” for now.\n\n[1]: https://www.postgresql.org/message-id/flat/CAH2-Wz%3D-BoaKgkN-MnKj6hFwO1BOJSA%2ByLMMO%2BLRZK932fNUXA%40mail.gmail.com#6d7cdebd68069cc493c11b9732fd2040\n[2]: https://www.postgresql.org/message-id/flat/20170428133818.24368.33533%40wrigleys.postgresql.org\n\nThanks,\nMichail.\n\n\n", "msg_date": "Fri, 1 Apr 2022 02:57:44 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "Hello, David.\n\nThanks for your review!\n\n> As a specific recommendation here - submit patches with a complete commit message.\n> Tweak it for each new version so that any prior discussion that informed the general design of\n> the patch is reflected in the commit message.\n\nYes, agreed. Applied to my other patch (1).\n\n> In terms of having room for bugs this description seems like a lot of logic to have to get correct.\n\nYes, it is the scary part. 
But it is contained in single\nis_index_lp_dead_maybe_allowed function for now.\n\n> Could we just do this first pass as:\n> Enable recovery mode LP_DEAD hint bit updates after the first streamed CHECKPOINT record comes over from the primary.\n> ?\n\nNot sure, but yes, it is better to split the patch into more detailed commits.\n\nThanks,\nMichail.\n\n[1]: https://www.postgresql.org/message-id/flat/CANtu0ogzo4MsR7My9%2BNhu3to5%3Dy7G9zSzUbxfWYOn9W5FfHjTA%40mail.gmail.com#341a3c3b033f69b260120b3173a66382\n\n\n", "msg_date": "Fri, 1 Apr 2022 02:58:37 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" }, { "msg_contents": "On Thu, Mar 31, 2022 at 4:57 PM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> I remember you had an idea about using the LP_REDIRECT bit in btree\n> indexes as some kind of “recently dead” flag (1).\n> Is this idea still in progress? Maybe an additional bit could provide\n> a space for a better solution.\n\nI think that the best way to make the patch closer to being\ncommittable is to make the on-disk representation more explicit.\nRelying on an implicit or contextual definition for anything seems\nlike something to avoid. This is probably the single biggest problem\nthat I see with the patch.\n\nI suggest that you try to \"work backwards\". If the patch was already\ncommitted today, but had subtle bugs, then how would we be able to\nidentify the bugs relatively easily? What would our strategy be then?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sun, 10 Apr 2022 21:12:31 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Full support for index LP_DEAD hint bits on standby" } ]
[ { "msg_contents": "whelk failed today [1] with this surprising symptom:\n\n--- snip ---\ndiff -w -U3 C:/buildfarm/buildenv/HEAD/pgsql.build/contrib/pageinspect/expected/page.out C:/buildfarm/buildenv/HEAD/pgsql.build/contrib/pageinspect/results/page.out\n--- C:/buildfarm/buildenv/HEAD/pgsql.build/contrib/pageinspect/expected/page.out\t2020-03-08 09:00:35.036254700 +0100\n+++ C:/buildfarm/buildenv/HEAD/pgsql.build/contrib/pageinspect/results/page.out\t2021-01-18 22:10:10.889655500 +0100\n@@ -90,8 +90,8 @@\n FROM heap_page_items(get_raw_page('test1', 0)),\n LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2);\n t_infomask | t_infomask2 | raw_flags | combined_flags \n-------------+-------------+-----------------------------------------------------------+--------------------\n- 2816 | 2 | {HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID} | {HEAP_XMIN_FROZEN}\n+------------+-------------+-----------------------------------------+----------------\n+ 2304 | 2 | {HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID} | {}\n (1 row)\n \n -- output the decoded flag HEAP_XMIN_FROZEN instead\n@@ -99,8 +99,8 @@\n FROM heap_page_items(get_raw_page('test1', 0)),\n LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2);\n t_infomask | t_infomask2 | raw_flags | combined_flags \n-------------+-------------+-----------------------------------------------------------+--------------------\n- 2816 | 2 | {HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID} | {HEAP_XMIN_FROZEN}\n+------------+-------------+-----------------------------------------+----------------\n+ 2304 | 2 | {HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID} | {}\n (1 row)\n \n -- tests for decoding of combined flags\n--- snip ---\n\nSearching the buildfarm logs turned up exactly one previous occurrence,\nalso on whelk [2]. So I'm not sure what to make of it. Could the\nimmediately preceding VACUUM FREEZE command have silently skipped this\npage for some reason? 
That'd be a bug I should think.\n\nAlso, not really a bug, but why is this test script running exactly\nthe same query twice in a row? If that's of value, and not just a\ncopy-and-paste error, the comments sure don't explain why. But what\nit looks like is that these queries were different when first added,\nand then 58b4cb30a5b made them the same when it probably should have\nremoved one.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=whelk&dt=2021-01-18%2020%3A42%3A13\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=whelk&dt=2020-04-17%2023%3A42%3A10", "msg_date": "Mon, 18 Jan 2021 16:48:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Odd, intermittent failure in contrib/pageinspect" }, { "msg_contents": "On 2021-Jan-18, Tom Lane wrote:\n\n> Searching the buildfarm logs turned up exactly one previous occurrence,\n> also on whelk [2]. So I'm not sure what to make of it. Could the\n> immediately preceding VACUUM FREEZE command have silently skipped this\n> page for some reason? That'd be a bug I should think.\n\nHmm, doesn't vacuum skip pages when they are pinned? I don't think\nVACUUM FREEZE would be treated especially -- only \"aggressive\"\nwraparound would be an exception, IIRC. This would reflect in the\nrelfrozenxid for the table after vacuum, but I'm not sure if there's a\ndecent way to make the regression tests reflect that.\n\n> Also, not really a bug, but why is this test script running exactly\n> the same query twice in a row? If that's of value, and not just a\n> copy-and-paste error, the comments sure don't explain why. 
But what\n> it looks like is that these queries were different when first added,\n> and then 58b4cb30a5b made them the same when it probably should have\n> removed one.\n\nAgreed.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Mon, 18 Jan 2021 19:30:19 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Odd, intermittent failure in contrib/pageinspect" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jan-18, Tom Lane wrote:\n>> Searching the buildfarm logs turned up exactly one previous occurrence,\n>> also on whelk [2]. So I'm not sure what to make of it. Could the\n>> immediately preceding VACUUM FREEZE command have silently skipped this\n>> page for some reason? That'd be a bug I should think.\n\n> Hmm, doesn't vacuum skip pages when they are pinned? I don't think\n> VACUUM FREEZE would be treated especially -- only \"aggressive\"\n> wraparound would be an exception, IIRC.\n\nRight. If that's the explanation, then adding DISABLE_PAGE_SKIPPING\nto the test's VACUUM options should fix it. However, to believe that\ntheory you have to have some reason to think that some other process\nmight have the page pinned. What would that be? test1 only has one\nsmall tuple in it, so it doesn't seem credible that autovacuum or\nautoanalyze would have fired on it.\n\n[ thinks for a bit... ] Does the checkpointer pin pages it's writing\nout? I guess it'd have to ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Jan 2021 17:35:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Odd, intermittent failure in contrib/pageinspect" }, { "msg_contents": "On 2021-Jan-18, Tom Lane wrote:\n\n> Right. If that's the explanation, then adding DISABLE_PAGE_SKIPPING\n> to the test's VACUUM options should fix it. However, to believe that\n> theory you have to have some reason to think that some other process\n> might have the page pinned. 
What would that be? test1 only has one\n> small tuple in it, so it doesn't seem credible that autovacuum or\n> autoanalyze would have fired on it.\n\nI guess the machine would have to be pretty constrained. (It takes\nalmost seven minutes to go through the pg_upgrade test, so it does seem\nsmall.)\n\n> [ thinks for a bit... ] Does the checkpointer pin pages it's writing\n> out? I guess it'd have to ...\n\nIt does, per SyncOneBuffer(), called from BufferSync(), called from\nCheckPointBuffers().\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Mon, 18 Jan 2021 19:40:05 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Odd, intermittent failure in contrib/pageinspect" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jan-18, Tom Lane wrote:\n>> [ thinks for a bit... ] Does the checkpointer pin pages it's writing\n>> out? I guess it'd have to ...\n\n> It does, per SyncOneBuffer(), called from BufferSync(), called from\n> CheckPointBuffers().\n\nRight, then we don't need any strange theories about autovacuum,\njust bad timing luck.
whelk does seem pretty slow, so it's not\n> much of a stretch to imagine that it's more susceptible to this\n> corner case than faster machines.\n>\n> So, do we have any other tests that are invoking a manual vacuum\n> and assuming it won't skip any pages? By this theory, they'd\n> all be failures waiting to happen.\n\nThat looks possible by looking at the code around lazy_scan_heap(),\nbut that's narrow.\n\ncheck_heap.sql and heap_surgery.sql have one VACUUM FREEZE each and it\nseems to me that we had better be sure that no pages are skipped for\ntheir cases?\n\nThe duplicated query result looks to be an oversight from 58b4cb3 when\nthe thing got rewritten, so it can just go away. Good catch.\n--\nMichael", "msg_date": "Tue, 19 Jan 2021 14:15:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Odd, intermittent failure in contrib/pageinspect" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Jan 18, 2021 at 05:47:40PM -0500, Tom Lane wrote:\n>> So, do we have any other tests that are invoking a manual vacuum\n>> and assuming it won't skip any pages? 
By this theory, they'd\n>> all be failures waiting to happen.\n\n> check_heap.sql and heap_surgery.sql have one VACUUM FREEZE each and it\n> seems to me that we had better be sure that no pages are skipped for\n> their cases?\n\nIt looks to me like heap_surgery ought to be okay, because it's operating\non a temp table; if there are any page access conflicts on that, we've\ngot BIG trouble ;-)\n\nPoking around, I found a few other places where it looked like a skipped\npage could produce diffs in the expected output:\ncontrib/amcheck/t/001_verify_heapam.pl\ncontrib/pg_visibility/sql/pg_visibility.sql\n\nThere are lots of other vacuums of course, but they don't look like\na missed page would have any effect on the visible results, so I think\nwe should leave them alone.\n\nIn short I propose the attached patch, which also gets rid of\nthat duplicate query.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 19 Jan 2021 17:03:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Odd, intermittent failure in contrib/pageinspect" }, { "msg_contents": "Hi,\n\nOn 2021-01-18 19:40:05 -0300, Alvaro Herrera wrote:\n> > [ thinks for a bit... ] Does the checkpointer pin pages it's writing\n> > out? I guess it'd have to ...\n> \n> It does, per SyncOneBuffer(), called from BufferSync(), called from\n> CheckPointBuffers().\n\nI think you don't event need checkpointer to be involved, normal buffer\nreplacement would do the trick. We briefly pin the page in BufferAlloc()\neven if the page is clean. Longer when it's dirty, of course.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 19 Jan 2021 17:50:06 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Odd, intermittent failure in contrib/pageinspect" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think you don't event need checkpointer to be involved, normal buffer\n> replacement would do the trick. 
We briefly pin the page in BufferAlloc()\n> even if the page is clean. Longer when it's dirty, of course.\n\nTrue, but it seems unlikely that the pages in question here would be\nchosen as replacement victims. These are non-parallel tests, so\nthere's little competitive pressure. I could believe that a background\nautovacuum is active, but not that it's dirtied so many pages that\ntables the test script just created need to get swapped out.\n\nThe checkpointer theory seems good because it requires no assumptions\nat all about competing demand for buffers. If the clock sweep gets\nto the table page (which we know is recently dirtied) at just the right\ntime, we'll see a failure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 20:57:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Odd, intermittent failure in contrib/pageinspect" }, { "msg_contents": "On Tue, Jan 19, 2021 at 05:03:49PM -0500, Tom Lane wrote:\n> It looks to me like heap_surgery ought to be okay, because it's operating\n> on a temp table; if there are any page access conflicts on that, we've\n> got BIG trouble ;-)\n\nBah, of course. 
I managed to miss this part.\n\n> Poking around, I found a few other places where it looked like a skipped\n> page could produce diffs in the expected output:\n> contrib/amcheck/t/001_verify_heapam.pl\n> contrib/pg_visibility/sql/pg_visibility.sql\n> \n> There are lots of other vacuums of course, but they don't look like\n> a missed page would have any effect on the visible results, so I think\n> we should leave them alone.\n\nYeah, I got to wonder a bit about check_btree.sql on a second look,\nbut that's no big deal to leave it alone either.\n\n> In short I propose the attached patch, which also gets rid of\n> that duplicate query.\n\nAgreed, +1.\n--\nMichael", "msg_date": "Wed, 20 Jan 2021 15:29:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Odd, intermittent failure in contrib/pageinspect" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Jan 19, 2021 at 05:03:49PM -0500, Tom Lane wrote:\n>> In short I propose the attached patch, which also gets rid of\n>> that duplicate query.\n\n> Agreed, +1.\n\nPushed, thanks for looking at it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Jan 2021 11:50:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Odd, intermittent failure in contrib/pageinspect" } ]
[ { "msg_contents": "Looking over the recently committed work for btree tuple deletion (d168b66)\nshould this variable not be declared static as in the attached patch?\n\nThanks,\nMark.", "msg_date": "Mon, 18 Jan 2021 19:19:46 -0500", "msg_from": "Mark G <markg735@gmail.com>", "msg_from_op": true, "msg_subject": "Make gaps array static" } ]
[ { "msg_contents": "PSA a trivial patch just to pgindent the file\nsrc/backend/replication/logical/worker.c\n\n(I am modifying this file in a separate patch, but every time I used\npgindent for my own code I would keep seeing these existing format\nproblems).\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 19 Jan 2021 12:22:45 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "pgindent for worker.c" }, { "msg_contents": "On Tue, Jan 19, 2021 at 6:53 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> PSA a trivial patch just to pgindent the file\n> src/backend/replication/logical/worker.c\n>\n> (I am modifying this file in a separate patch, but every time I used\n> pgindent for my own code I would keep seeing these existing format\n> problems).\n>\n\nSorry for the inconvenience. This seems to be a leftover from my\ncommit 0926e96c49, so I will take care of this. I think we need to\nchange this file in the upcoming patches for logical replication of\n2PC so, I'll push this change separately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 19 Jan 2021 08:01:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgindent for worker.c" } ]
[ { "msg_contents": "Hi all,\n\nThe following functions in ilist.h and bufpage.h use some arguments\nonly in assertions:\n- dlist_next_node\n- dlist_prev_node\n- slist_has_next\n- slist_next_node\n- PageValidateSpecialPointer\n\nWithout PG_USED_FOR_ASSERTS_ONLY, this can lead to compilation\nwarnings when not using assertions, and one example of that is\nplpgsql_check. We don't have examples on HEAD where\nPG_USED_FOR_ASSERTS_ONLY is used on arguments, but that looks to work\nproperly with gcc.\n\nThoughts?\n--\nMichael", "msg_date": "Tue, 19 Jan 2021 10:52:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Paint some PG_USED_FOR_ASSERTS_ONLY in inline functions of ilist.h\n and bufpage.h" }, { "msg_contents": "On Tue, Jan 19, 2021 at 9:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> The following functions in ilist.h and bufpage.h use some arguments\n> only in assertions:\n> - dlist_next_node\n> - dlist_prev_node\n> - slist_has_next\n> - slist_next_node\n> - PageValidateSpecialPointer\n>\n> Without PG_USED_FOR_ASSERTS_ONLY, this can lead to compilation\n> warnings when not using assertions, and one example of that is\n> plpgsql_check.\n\nFor the record, that's due to that extra flags in the Makefile:\n\noverride CFLAGS += -I$(top_builddir)/src/pl/plpgsql/src -Wall\n\nI think that we're still far from being able to get a clean output\nusing -Wall on postgres itself, so I don't know how much we can\npromise to external code, but fixing those may be a good step.\n\n> We don't have examples on HEAD where\n> PG_USED_FOR_ASSERTS_ONLY is used on arguments, but that looks to work\n> properly with gcc.\n\nYeah I don't see any explicit mention on that on gcc manual. 
For the\nrecord it also works as expected using clang, and the attached patch\nremoves all warnings when compiling plpgsql_check.\n\n\n", "msg_date": "Tue, 19 Jan 2021 16:27:43 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Paint some PG_USED_FOR_ASSERTS_ONLY in inline functions of\n ilist.h and bufpage.h" }, { "msg_contents": "On Tue, Jan 19, 2021 at 04:27:43PM +0800, Julien Rouhaud wrote:\n> Yeah I don't see any explicit mention on that on gcc manual. For the\n> record it also works as expected using clang, and the attached patch\n> removes all warnings when compiling plpgsql_check.\n\nFWIW, the part of the GCC docs that I looked at is here:\nhttps://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#Common-Variable-Attributes\n\nAnd what I have done does not seem completely legal either for\nfunction arguments, even if I am not getting any complaints when\ncompiling that.\n--\nMichael", "msg_date": "Tue, 19 Jan 2021 21:37:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Paint some PG_USED_FOR_ASSERTS_ONLY in inline functions of\n ilist.h and bufpage.h" } ]
[ { "msg_contents": "Hi,\n\nPostgreSQL has the feature to warn about running out of transaction ID.\nThe following message is an example.\n\n```\n2021-01-19 10:59:27 JST [client backend] WARNING: database \"postgres\" \nmust be vacuumed within xxx transactions\n2021-01-19 10:59:27 JST [client backend] HINT: To avoid a database \nshutdown, execute a database-wide VACUUM in that database.\n You might also need to commit or roll back old prepared \ntransactions, or drop stale replication slots.\n```\n\nBut, the threshold for the warning is not configurable.\nThe value is hard-coded to 40M.\n\n```\nvarsup.c\n\t/*\n\t * We'll start complaining loudly when we get within 40M transactions \nof\n\t * data loss. This is kind of arbitrary, but if you let your gas gauge\n\t * get down to 2% of full, would you be looking for the next gas \nstation?\n\t * We need to be fairly liberal about this number because there are \nlots\n\t * of scenarios where most transactions are done by automatic clients \nthat\n\t * won't pay attention to warnings. (No, we're not gonna make this\n\t * configurable. 
If you know enough to configure it, you know enough \nto\n\t * not get in this kind of trouble in the first place.)\n\t */\n```\n\nI think it's useful to configure the threshold for warning due to run \nout of\ntransaction ID like \"checkpoint_warning\" parameter.\n\nActually, when a user's workload is too write-heavy,\nthere was a case we want to get the warning message earlier.\n\n\nI understood that there is another way to handle it.\nFor example, to monitor frozen transaction ID to execute the following \nquery\nand check to see if the custom threshold is exceeded.\n\n```\nSELECT max(age(datfrozenxid)) FROM pg_database;\n```\n\nBut, I think to warn to a server log is a simpler way.\n\n\nI would like to know your opinion.\nIf it's useful for us, I'll make patches.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 19 Jan 2021 11:44:44 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "configurable the threshold for warning due to run out of transaction\n ID" }, { "msg_contents": "On Tue, 2021-01-19 at 11:44 +0900, Masahiro Ikeda wrote:\n> PostgreSQL has the feature to warn about running out of transaction ID.\n> The following message is an example.\n> \n> 2021-01-19 10:59:27 JST [client backend] WARNING: database \"postgres\" must be vacuumed within xxx transactions\n> 2021-01-19 10:59:27 JST [client backend] HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.\n> You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\n> \n> But, the threshold for the warning is not configurable.\n> The value is hard-coded to 40M.\n> \n> varsup.c\n> \n> /*\n> * We'll start complaining loudly when we get within 40M transactions of\n> * data loss. 
This is kind of arbitrary, but if you let your gas gauge\n> * get down to 2% of full, would you be looking for the next gas station?\n> * We need to be fairly liberal about this number because there are lots\n> * of scenarios where most transactions are done by automatic clients that\n> * won't pay attention to warnings. (No, we're not gonna make this\n> * configurable. If you know enough to configure it, you know enough to\n> * not get in this kind of trouble in the first place.)\n> */\n> \n> I think it's useful to configure the threshold for warning due to run out of\n> transaction ID like \"checkpoint_warning\" parameter.\n\nI think the argument in the comment is a good one: people who know enough to\nincrease the number because they consume lots of transactions and want to be\nwarned earlier are probably people who care enough about their database to have\nsome monitoring in place that warns them about approaching transaction wraparound\n(\"datfrozenxid\" and \"datminmxid\" in \"pg_database\").\n\nPeople who lower the limit to get rid of the warning are hopeless, and there is\nno need to support such activity.\n\nSo I don't see much point in making this configurable.\nWe have enough parameters as it is.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 19 Jan 2021 11:28:42 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: configurable the threshold for warning due to run out of\n transaction ID" } ]
[ { "msg_contents": "Hi all\r\n\r\nAfter executing command [pg_dump -?], some help information is as follows.\r\n\r\npg_dump -?\r\n-----------------------------------------------------------------\r\n -N, --exclude-schema=PATTERN do NOT dump the specified schema(s) ※\r\n -T, --exclude-table=PATTERN do NOT dump the specified table(s) ※\r\n -x, --no-privileges do not dump privileges (grant/revoke)\r\n --exclude-table-data=PATTERN do NOT dump data for the specified table(s) ※\r\n --no-comments do not dump comments\r\n --no-publications do not dump publications\r\n --no-security-labels do not dump security label assignments\r\n --no-subscriptions do not dump subscriptions\r\n --no-synchronized-snapshots do not use synchronized snapshots in parallel jobs\r\n --no-tablespaces do not dump tablespace assignments\r\n --no-unlogged-table-data do not dump unlogged table data\r\n--------------------------------------------------------------------\r\n\r\nI think it would be better to change [do NOT dump] to [do not dump].\r\n\r\nHere is a patch.\r\n\r\nBest Regards!", "msg_date": "Tue, 19 Jan 2021 03:37:31 +0000", "msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "[patch] Help information for pg_dump" }, { "msg_contents": "On Tue, Jan 19, 2021 at 9:07 AM Zhang, Jie <zhangjie2@cn.fujitsu.com> wrote:\n>\n> Hi all\n>\n> After executing command [pg_dump -?], some help information is as follows.\n>\n> pg_dump -?\n> -----------------------------------------------------------------\n> -N, --exclude-schema=PATTERN do NOT dump the specified schema(s) ※\n> -T, --exclude-table=PATTERN do NOT dump the specified table(s) ※\n> -x, --no-privileges do not dump privileges (grant/revoke)\n> --exclude-table-data=PATTERN do NOT dump data for the specified table(s) ※\n> --no-comments do not dump comments\n> --no-publications do not dump publications\n> --no-security-labels do not dump security label assignments\n> --no-subscriptions do not dump 
subscriptions\n> --no-synchronized-snapshots do not use synchronized snapshots in parallel jobs\n> --no-tablespaces do not dump tablespace assignments\n> --no-unlogged-table-data do not dump unlogged table data\n> --------------------------------------------------------------------\n>\n> I think it would be better to change [do NOT dump] to [do not dump].\n>\n> Here is a patch.\n\n+1. Looks like SQL keywords are mentioned in capital letters in both\npg_dump and pg_dumpall cases, so changing \"do NOT\" to \"do not\" seems\nokay to me.\n\nPatch LGTM.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Jan 2021 11:24:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] Help information for pg_dump" }, { "msg_contents": "On Tue, Jan 19, 2021 at 11:24 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Jan 19, 2021 at 9:07 AM Zhang, Jie <zhangjie2@cn.fujitsu.com> wrote:\n> >\n> > Hi all\n> >\n> > After executing command [pg_dump -?], some help information is as follows.\n> >\n> > pg_dump -?\n> > -----------------------------------------------------------------\n> > -N, --exclude-schema=PATTERN do NOT dump the specified schema(s) ※\n> > -T, --exclude-table=PATTERN do NOT dump the specified table(s) ※\n> > -x, --no-privileges do not dump privileges (grant/revoke)\n> > --exclude-table-data=PATTERN do NOT dump data for the specified table(s) ※\n> > --no-comments do not dump comments\n> > --no-publications do not dump publications\n> > --no-security-labels do not dump security label assignments\n> > --no-subscriptions do not dump subscriptions\n> > --no-synchronized-snapshots do not use synchronized snapshots in parallel jobs\n> > --no-tablespaces do not dump tablespace assignments\n> > --no-unlogged-table-data do not dump unlogged table data\n> > 
--------------------------------------------------------------------\n> >\n> > I think it would be better to change [do NOT dump] to [do not dump].\n> >\n> > Here is a patch.\n>\n> +1. Looks like SQL keywords are mentioned in capital letters in both\n> pg_dump and pg_dumpall cases, so changing \"do NOT\" to \"do not\" seems\n> okay to me.\n>\n> Patch LGTM.\n\nAlso \"do NOT\" is inconsistent with the other message where we are\nsaying \"do not\" so +1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Jan 2021 16:20:38 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] Help information for pg_dump" } ]
[ { "msg_contents": "Presently there doesn't seem to be a way to tell whether a lock is\nsession-level or transaction-level in the pg_locks view.\n\nI was expecting this to be a quick patch, but the comment on the definition\nof PROCLOCKTAG in lock.h notes that shmem state for heavyweight locks does\nnot track whether the lock is session-level or txn-level. That explains why\nit's not already exposed in pg_locks.\n\nAFAICS it'd be necessary to expand PROCLOG to expose this in shmem.\nProbably by adding a small bitfield where bit 0 is set if there's a txn\nlevel lock and bit 1 is set if there's a session level lock. But I'm not\nconvinced that expanding PROCLOCK is justifiable for this. sizeof(PROCLOCK)\nis 64 on a typical x64 machine. Adding anything to it increases it to 72\nbytes.\n\n(gdb) ptype /o struct PROCLOCK\n/* offset | size */ type = struct PROCLOCK {\n/* 0 | 16 */ PROCLOCKTAG tag;\n/* 16 | 8 */ PGPROC *groupLeader;\n/* 24 | 4 */ LOCKMASK holdMask;\n/* 28 | 4 */ LOCKMASK releaseMask;\n/* 32 | 16 */ SHM_QUEUE lockLink;\n/* 48 | 16 */ SHM_QUEUE procLink;\n/* 64 | 1 */ unsigned char locktypes;\n/* XXX 7-byte padding */\n\n /* total size (bytes): 72 */\n }\n\nGoing over 64 sets off possible alarm bells about cache line sizing to me,\nbut maybe it's not that critical? It'd also require (8 * max_locks_per_xact\n* (MaxBackends+max_prepared_xacts)) extra shmem space; that could land up\nbeing 128k on a default setup and a couple of megabytes on a big system.\nNot huge, but not insignificant if it's hot data.\n\nIt's frustrating to be unable to tell the difference between session-level\nand txn-level locks in diagnostic output. And the deadlock detector has no\nway to tell the difference when selecting a victim for a deadlock abort -\nit'd probably make sense to prefer to send a deadlock abort for txn-only\nlockers. 
But I'm not sure I see a sensible way to add the info - PROCLOCK\nis already free of any padding, and I wouldn't want to use hacks like\npointer-tagging.\n\nThoughts anyone?\n", "msg_date": "Tue, 19 Jan 2021 14:16:07 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "How to expose session vs txn lock info in pg_locks view?" }, { "msg_contents": "Hi,\n\nOn 2021-01-19 14:16:07 +0800, Craig Ringer wrote:\n> AFAICS it'd be necessary to expand PROCLOG to expose this in shmem.\n> Probably by adding a small bitfield where bit 0 is set if there's a txn\n> level lock and bit 1 is set if there's a session level lock. But I'm not\n> convinced that expanding PROCLOCK is justifiable for this. sizeof(PROCLOCK)\n> is 64 on a typical x64 machine. Adding anything to it increases it to 72\n> bytes.\n\nIndeed - I really don't want to increase the size, it's already a\nproblem.\n\n\n> It's frustrating to be unable to tell the difference between session-level\n> and txn-level locks in diagnostic output.\n\nIt'd be useful, I agree.\n\n\n> And the deadlock detector has no way to tell the difference when\n> selecting a victim for a deadlock abort - it'd probably make sense to\n> prefer to send a deadlock abort for txn-only lockers.\n\nI'm doubtful this is worth going for.\n\n\n> But I'm not sure I see a sensible way to add the info - PROCLOCK is\n> already free of any padding, and I wouldn't want to use hacks like\n> pointer-tagging.\n\nI think there's an easy way to squeeze out space: make groupLeader be an\ninteger index into allProcs instead.
That requires only 4 bytes...\n\nAlternatively, I think it'd be reasonably easy to add the scope as a bit\nin LOCKMASK - there's plenty space.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 23 Jan 2021 17:12:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: How to expose session vs txn lock info in pg_locks view?" }, { "msg_contents": "On Sun, 24 Jan 2021 at 09:12, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-01-19 14:16:07 +0800, Craig Ringer wrote:\n> > AFAICS it'd be necessary to expand PROCLOG to expose this in shmem.\n> > Probably by adding a small bitfield where bit 0 is set if there's a txn\n> > level lock and bit 1 is set if there's a session level lock. But I'm not\n> > convinced that expanding PROCLOCK is justifiable for this.\n> sizeof(PROCLOCK)\n> > is 64 on a typical x64 machine. Adding anything to it increases it to 72\n> > bytes.\n>\n> Indeed - I really don't want to increase the size, it's already a\n> problem.\n>\n>\n> > It's frustrating to be unable to tell the difference between\n> session-level\n> > and txn-level locks in diagnostic output.\n>\n> It'd be useful, I agree.\n>\n>\n> > And the deadlock detector has no way to tell the difference when\n> > selecting a victim for a deadlock abort - it'd probably make sense to\n> > prefer to send a deadlock abort for txn-only lockers.\n>\n> I'm doubtful this is worth going for.\n>\n>\n> > But I'm not sure I see a sensible way to add the info - PROCLOCK is\n> > already free of any padding, and I wouldn't want to use hacks like\n> > pointer-tagging.\n>\n> I think there's an easy way to squeeze out space: make groupLeader be an\n> integer index into allProcs instead. 
That requires only 4 bytes...\n>\n> Alternatively, I think it'd be reasonably easy to add the scope as a bit\n> in LOCKMASK - there's plenty space.\n>\n\nI was wondering about that, but concerned that there would be impacts I did\nnot understand.\n\nI'm happy to pursue that angle.\n", "msg_date": "Mon, 1 Feb 2021 18:42:03 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: How to expose session vs txn lock info in pg_locks view?" }, { "msg_contents": "On Mon, 1 Feb 2021 at 18:42, Craig Ringer <craig.ringer@enterprisedb.com>\nwrote:\n\n> On Sun, 24 Jan 2021 at 09:12, Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2021-01-19 14:16:07 +0800, Craig Ringer wrote:\n>> > AFAICS it'd be necessary to expand PROCLOG to expose this in shmem.\n>> > Probably by adding a small bitfield where bit 0 is set if there's a txn\n>> > level lock and bit 1 is set if there's a session level lock. But I'm not\n>> > convinced that expanding PROCLOCK is justifiable for this.\n>> sizeof(PROCLOCK)\n>> > is 64 on a typical x64 machine. Adding anything to it increases it to 72\n>> > bytes.\n>>\n>> Indeed - I really don't want to increase the size, it's already a\n>> problem.\n>>\n>>\n>> > It's frustrating to be unable to tell the difference between\n>> session-level\n>> > and txn-level locks in diagnostic output.\n>>\n>> It'd be useful, I agree.\n>>\n>>\n>> > And the deadlock detector has no way to tell the difference when\n>> > selecting a victim for a deadlock abort - it'd probably make sense to\n>> > prefer to send a deadlock abort for txn-only lockers.\n>>\n>> I'm doubtful this is worth going for.\n>>\n>>\n>> > But I'm not sure I see a sensible way to add the info - PROCLOCK is\n>> > already free of any padding, and I wouldn't want to use hacks like\n>> > pointer-tagging.\n>>\n>> I think there's an easy way to squeeze out space: make groupLeader be an\n>> integer index into allProcs instead.
That requires only 4 bytes...\n>>\n>> Alternatively, I think it'd be reasonably easy to add the scope as a bit\n>> in LOCKMASK - there's plenty space.\n>>\n>\n> I was wondering about that, but concerned that there would be impacts I\n> did not understand.\n>\n> I'm happy to pursue that angle.\n>\n\nJust so this thread isn't left dangling, I'm just not going to get time to\nfollow up on this work with a concrete patch and test suite change.\n\nIf anyone else later on wants to differentiate between session and txn\nLWLocks they could start with the approach proposed here.\n", "msg_date": "Tue, 29 Jun 2021 13:36:47 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: How to expose session vs txn lock info in pg_locks view?" } ]
[ { "msg_contents": "Fixes:\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2 -I../../../../src/include -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk -c -o fd.o fd.c\nfd.c:3661:10: warning: 'pwritev' is only available on macOS 11.0 or newer [-Wunguarded-availability-new]\n part = pg_pwritev(fd, iov, iovcnt, offset);\n ^~~~~~~~~~\n../../../../src/include/port/pg_iovec.h:49:20: note: expanded from macro 'pg_pwritev'\n ^~~~~~~\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk/usr/include/sys/uio.h:104:9: note: 'pwritev' has been marked as being introduced in macOS 11.0\n here, but the deployment target is macOS 10.15.0\nssize_t pwritev(int, const struct iovec *, int, off_t) __DARWIN_NOCANCEL(pwritev) __API_AVAILABLE(macos(11.0), ios(14.0), watchos(7.0), tvos(14.0));\n ^\nfd.c:3661:10: note: enclose 'pwritev' in a __builtin_available check to silence this warning\n part = pg_pwritev(fd, iov, iovcnt, offset);\n ^~~~~~~~~~\n../../../../src/include/port/pg_iovec.h:49:20: note: expanded from macro 'pg_pwritev'\n ^~~~~~~\n1 warning generated.\n\nThis results in a runtime error:\nrunning bootstrap script ... 
dyld: lazy symbol binding failed: Symbol not found: _pwritev\n Referenced from: /usr/local/pgsql/bin/postgres\n Expected in: /usr/lib/libSystem.B.dylib\n\ndyld: Symbol not found: _pwritev\n Referenced from: /usr/local/pgsql/bin/postgres\n Expected in: /usr/lib/libSystem.B.dylib\n\nchild process was terminated by signal 6: Abort trap: 6\n\nTo fix this we set -Werror=unguarded-availability-new so that a compile\ntest for pwritev will fail if the symbol is unavailable on the requested\nSDK version.\n---\n configure | 88 ++++++++++++++++++++++++++++++++++++++++++++--------\n configure.ac | 19 +++++++++++-\n 2 files changed, 93 insertions(+), 14 deletions(-)\n\ndiff --git a/configure b/configure\nindex 8af4b99021..503b0d27e6 100755\n--- a/configure\n+++ b/configure\n@@ -5373,6 +5373,47 @@ if test x\"$pgac_cv_prog_CC_cflags__Werror_vla\" = x\"yes\"; then\n fi\n \n \n+ # Prevent usage of symbols marked as newer than our target.\n+\n+{ $as_echo \"$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports -Werror=unguarded-availability-new, for CFLAGS\" >&5\n+$as_echo_n \"checking whether ${CC} supports -Werror=unguarded-availability-new, for CFLAGS... \" >&6; }\n+if ${pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new+:} false; then :\n+ $as_echo_n \"(cached) \" >&6\n+else\n+ pgac_save_CFLAGS=$CFLAGS\n+pgac_save_CC=$CC\n+CC=${CC}\n+CFLAGS=\"${CFLAGS} -Werror=unguarded-availability-new\"\n+ac_save_c_werror_flag=$ac_c_werror_flag\n+ac_c_werror_flag=yes\n+cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n+/* end confdefs.h. 
*/\n+\n+int\n+main ()\n+{\n+\n+ ;\n+ return 0;\n+}\n+_ACEOF\n+if ac_fn_c_try_compile \"$LINENO\"; then :\n+ pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new=yes\n+else\n+ pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new=no\n+fi\n+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext\n+ac_c_werror_flag=$ac_save_c_werror_flag\n+CFLAGS=\"$pgac_save_CFLAGS\"\n+CC=\"$pgac_save_CC\"\n+fi\n+{ $as_echo \"$as_me:${as_lineno-$LINENO}: result: $pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new\" >&5\n+$as_echo \"$pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new\" >&6; }\n+if test x\"$pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new\" = x\"yes\"; then\n+ CFLAGS=\"${CFLAGS} -Werror=unguarded-availability-new\"\n+fi\n+\n+\n # -Wvla is not applicable for C++\n \n { $as_echo \"$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports -Wendif-labels, for CFLAGS\" >&5\n@@ -15715,6 +15756,40 @@ $as_echo \"#define HAVE_PS_STRINGS 1\" >>confdefs.h\n \n fi\n \n+{ $as_echo \"$as_me:${as_lineno-$LINENO}: checking for pwritev\" >&5\n+$as_echo_n \"checking for pwritev... \" >&6; }\n+cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n+/* end confdefs.h. 
*/\n+#ifdef HAVE_SYS_TYPES_H\n+#include <sys/types.h>\n+#endif\n+#ifdef HAVE_SYS_UIO_H\n+#include <sys/uio.h>\n+#endif\n+int\n+main ()\n+{\n+struct iovec *iov;\n+off_t offset;\n+offset = 0;\n+pwritev(0, iov, 0, offset);\n+\n+ ;\n+ return 0;\n+}\n+_ACEOF\n+if ac_fn_c_try_compile \"$LINENO\"; then :\n+ { $as_echo \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\n+$as_echo \"yes\" >&6; }\n+\n+$as_echo \"#define HAVE_PWRITEV 1\" >>confdefs.h\n+\n+else\n+ { $as_echo \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\n+$as_echo \"no\" >&6; }\n+fi\n+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext\n+\n ac_fn_c_check_func \"$LINENO\" \"dlopen\" \"ac_cv_func_dlopen\"\n if test \"x$ac_cv_func_dlopen\" = xyes; then :\n $as_echo \"#define HAVE_DLOPEN 1\" >>confdefs.h\n@@ -15871,19 +15946,6 @@ esac\n \n fi\n \n-ac_fn_c_check_func \"$LINENO\" \"pwritev\" \"ac_cv_func_pwritev\"\n-if test \"x$ac_cv_func_pwritev\" = xyes; then :\n- $as_echo \"#define HAVE_PWRITEV 1\" >>confdefs.h\n-\n-else\n- case \" $LIBOBJS \" in\n- *\" pwritev.$ac_objext \"* ) ;;\n- *) LIBOBJS=\"$LIBOBJS pwritev.$ac_objext\"\n- ;;\n-esac\n-\n-fi\n-\n ac_fn_c_check_func \"$LINENO\" \"random\" \"ac_cv_func_random\"\n if test \"x$ac_cv_func_random\" = xyes; then :\n $as_echo \"#define HAVE_RANDOM 1\" >>confdefs.h\ndiff --git a/configure.ac b/configure.ac\nindex 868a94c9ba..30fa39c859 100644\n--- a/configure.ac\n+++ b/configure.ac\n@@ -494,6 +494,8 @@ if test \"$GCC\" = yes -a \"$ICC\" = no; then\n AC_SUBST(PERMIT_DECLARATION_AFTER_STATEMENT)\n # Really don't want VLAs to be used in our dialect of C\n PGAC_PROG_CC_CFLAGS_OPT([-Werror=vla])\n+ # Prevent usage of symbols marked as newer than our target.\n+ PGAC_PROG_CC_CFLAGS_OPT([-Werror=unguarded-availability-new])\n # -Wvla is not applicable for C++\n PGAC_PROG_CC_CFLAGS_OPT([-Wendif-labels])\n PGAC_PROG_CXX_CFLAGS_OPT([-Wendif-labels])\n@@ -1726,6 +1728,22 @@ if test \"$pgac_cv_var_PS_STRINGS\" = yes ; then\n AC_DEFINE([HAVE_PS_STRINGS], 1, [Define to 1 
if the PS_STRINGS thing exists.])\n fi\n \n+AC_MSG_CHECKING([for pwritev])\n+AC_COMPILE_IFELSE([AC_LANG_PROGRAM(\n+[#ifdef HAVE_SYS_TYPES_H\n+#include <sys/types.h>\n+#endif\n+#ifdef HAVE_SYS_UIO_H\n+#include <sys/uio.h>\n+#endif],\n+[struct iovec *iov;\n+off_t offset;\n+offset = 0;\n+pwritev(0, iov, 0, offset);\n+])], [AC_MSG_RESULT(yes)\n+AC_DEFINE([HAVE_PWRITEV], 1, [Define to 1 if you have the `pwritev' function.])],\n+[AC_MSG_RESULT(no)])\n+\n AC_REPLACE_FUNCS(m4_normalize([\n \tdlopen\n \texplicit_bzero\n@@ -1739,7 +1757,6 @@ AC_REPLACE_FUNCS(m4_normalize([\n \tpread\n \tpreadv\n \tpwrite\n-\tpwritev\n \trandom\n \tsrandom\n \tstrlcat\n-- \n2.30.0\n\n\n\n", "msg_date": "Tue, 19 Jan 2021 04:16:25 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> Fixes:\n> gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2 -I../../../../src/include -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk -c -o fd.o fd.c\n> fd.c:3661:10: warning: 'pwritev' is only available on macOS 11.0 or newer [-Wunguarded-availability-new]\n\nWe already dealt with that by not selecting an SDK newer than the\nunderlying OS (see 4823621db). I do not believe that your proposal\nis more reliable than that approach, and it's surely uglier. Are\nwe really going to abandon Autoconf's built-in checking method every\ntime Apple adds an API they should have had ten years ago? 
If so,\nyou forgot preadv ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 10:27:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On Tue, Jan 19, 2021 at 8:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > Fixes:\n> > gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2 -I../../../../src/include -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk -c -o fd.o fd.c\n> > fd.c:3661:10: warning: 'pwritev' is only available on macOS 11.0 or newer [-Wunguarded-availability-new]\n>\n> We already dealt with that by not selecting an SDK newer than the\n> underlying OS (see 4823621db).\nTried that, doesn't work, not even sure how it could possibly fix this\nissue at all,\nthis can not be fixed properly by selecting a specific SDK version\nalone, it's the\nsymbols valid for a specific target deployment version that matters here.\n> I do not believe that your proposal\n> is more reliable than that approach, and it's surely uglier. Are\n> we really going to abandon Autoconf's built-in checking method every\n> time Apple adds an API they should have had ten years ago? If so,\n> you forgot preadv ...\nI didn't run into an issue there for some reason...but this was the cleanest fix\nI could come up with that seemed to work.\n>\n> regards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 08:36:49 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." 
}, { "msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> On Tue, Jan 19, 2021 at 8:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We already dealt with that by not selecting an SDK newer than the\n>> underlying OS (see 4823621db).\n\n> Tried that, doesn't work, not even sure how it could possibly fix this\n> issue at all,\n\nIt worked for me and for Sergey, so we need to figure out what's different\nabout your setup. What do you get from \"xcrun --show-sdk-path\" and\n\"xcrun --sdk macosx --show-sdk-path\"? What have you got under\n/Library/Developer/CommandLineTools/SDKs ?\n\n> this can not be fixed properly by selecting a specific SDK version\n> alone, it's the symbols valid for a specific target deployment version\n> that matters here.\n\nI don't think I believe that argument. As a counterexample, supposing\nthat somebody were intentionally cross-compiling on an older OSX platform\nbut using a newer SDK, shouldn't they get an executable suited to the\nSDK's target version?\n\n(I realize that Apple thinks we ought to handle that through run-time\nnot compile-time adaptation, but I have no interest in going there.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 10:57:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On Tue, Jan 19, 2021 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > On Tue, Jan 19, 2021 at 8:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> We already dealt with that by not selecting an SDK newer than the\n> >> underlying OS (see 4823621db).\n>\n> > Tried that, doesn't work, not even sure how it could possibly fix this\n> > issue at all,\n>\n> It worked for me and for Sergey, so we need to figure out what's different\n> about your setup. What do you get from \"xcrun --show-sdk-path\" and\n> \"xcrun --sdk macosx --show-sdk-path\"? 
What have you got under\n> /Library/Developer/CommandLineTools/SDKs ?\n$ xcrun --show-sdk-path\n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk\n$ xcrun --sdk macosx --show-sdk-path\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n$ ls -laht /Library/Developer/CommandLineTools/SDKs\ntotal 0\ndrwxr-xr-x 5 root wheel 160B Jan 14 2020 .\ndrwxr-xr-x 8 root wheel 256B Jan 14 2020 MacOSX10.15.sdk\ndrwxr-xr-x 7 root wheel 224B Jan 14 2020 MacOSX10.14.sdk\nlrwxr-xr-x 1 root wheel 15B Jan 14 2020 MacOSX.sdk -> MacOSX10.15.sdk\n>\n> > this can not be fixed properly by selecting a specific SDK version\n> > alone, it's the symbols valid for a specific target deployment version\n> > that matters here.\n>\n> I don't think I believe that argument. As a counterexample, supposing\n> that somebody were intentionally cross-compiling on an older OSX platform\n> but using a newer SDK, shouldn't they get an executable suited to the\n> SDK's target version?\nYep, that's exactly what this should fix:\n\nMACOSX_DEPLOYMENT_TARGET=11.0 ./configure\nchecking for pwritev... yes\n\nWhich fails at runtime on 10.15:\ndyld: lazy symbol binding failed: Symbol not found: _pwritev\n Referenced from: /usr/local/pgsql/bin/postgres (which was built for\nMac OS X 11.0)\n Expected in: /usr/lib/libSystem.B.dylib\n\ndyld: Symbol not found: _pwritev\n Referenced from: /usr/local/pgsql/bin/postgres (which was built for\nMac OS X 11.0)\n Expected in: /usr/lib/libSystem.B.dylib\n\nchild process was terminated by signal 6: Abort trap: 6\n\nMACOSX_DEPLOYMENT_TARGET=10.15 ./configure\nchecking for pwritev... 
no\n\nNoticed a couple small issues, I'll send a v2.\n>\n> (I realize that Apple thinks we ought to handle that through run-time\n> not compile-time adaptation, but I have no interest in going there.)\n>\n> regards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 09:49:58 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> On Tue, Jan 19, 2021 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It worked for me and for Sergey, so we need to figure out what's different\n>> about your setup. What do you get from \"xcrun --show-sdk-path\" and\n>> \"xcrun --sdk macosx --show-sdk-path\"? What have you got under\n>> /Library/Developer/CommandLineTools/SDKs ?\n\n> $ xcrun --show-sdk-path\n> /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk\n> $ xcrun --sdk macosx --show-sdk-path\n> /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n> $ ls -laht /Library/Developer/CommandLineTools/SDKs\n> total 0\n> drwxr-xr-x 5 root wheel 160B Jan 14 2020 .\n> drwxr-xr-x 8 root wheel 256B Jan 14 2020 MacOSX10.15.sdk\n> drwxr-xr-x 7 root wheel 224B Jan 14 2020 MacOSX10.14.sdk\n> lrwxr-xr-x 1 root wheel 15B Jan 14 2020 MacOSX.sdk -> MacOSX10.15.sdk\n\nAh, got it. So \"xcrun --show-sdk-path\" tells us the right thing (that\nis, it *does* give us a symlink to a 10.15 SDK) but by refusing to\nbelieve we've got the right thing, we end up picking MacOSX11.1.sdk.\nDrat. I suppose we could drop the heuristic about wanting a version\nnumber in the SDK path, but I really don't want to do that. 
Now I'm\nthinking about trying to dereference the symlink after the first step.\n\nBTW, it's curious that you get a reference to the MacOSX.sdk symlink\nwhere both Sergey and I got references to the actual directory.\nDo you happen to recall the order in which you installed/upgraded\nXcode and its command line tools?\n\n>> I don't think I believe that argument. As a counterexample, supposing\n>> that somebody were intentionally cross-compiling on an older OSX platform\n>> but using a newer SDK, shouldn't they get an executable suited to the\n>> SDK's target version?\n\n> Yep, that's exactly what this should fix:\n> MACOSX_DEPLOYMENT_TARGET=11.0 ./configure\n> checking for pwritev... yes\n> Which fails at runtime on 10.15:\n\nWell yeah, exactly. It should fail at run-time, because you\ncross-compiled an executable that's not built for the machine\nyou're on. What we need is to prevent configure from setting up\na cross-compile situation by default.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 12:17:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On Tue, Jan 19, 2021 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > On Tue, Jan 19, 2021 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> It worked for me and for Sergey, so we need to figure out what's different\n> >> about your setup. What do you get from \"xcrun --show-sdk-path\" and\n> >> \"xcrun --sdk macosx --show-sdk-path\"? 
What have you got under\n> >> /Library/Developer/CommandLineTools/SDKs ?\n>\n> > $ xcrun --show-sdk-path\n> > /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk\n> > $ xcrun --sdk macosx --show-sdk-path\n> > /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n> > $ ls -laht /Library/Developer/CommandLineTools/SDKs\n> > total 0\n> > drwxr-xr-x 5 root wheel 160B Jan 14 2020 .\n> > drwxr-xr-x 8 root wheel 256B Jan 14 2020 MacOSX10.15.sdk\n> > drwxr-xr-x 7 root wheel 224B Jan 14 2020 MacOSX10.14.sdk\n> > lrwxr-xr-x 1 root wheel 15B Jan 14 2020 MacOSX.sdk -> MacOSX10.15.sdk\n>\n> Ah, got it. So \"xcrun --show-sdk-path\" tells us the right thing (that\n> is, it *does* give us a symlink to a 10.15 SDK) but by refusing to\n> believe we've got the right thing, we end up picking MacOSX11.1.sdk.\n> Drat. I suppose we could drop the heuristic about wanting a version\n> number in the SDK path, but I really don't want to do that. Now I'm\n> thinking about trying to dereference the symlink after the first step.\nThe MacOSX11.1.sdk can build for a 10.15 target just fine when passed\nan appropriate MACOSX_DEPLOYMENT_TARGET, so that SDK should be\nfine.\n>\n> BTW, it's curious that you get a reference to the MacOSX.sdk symlink\n> where both Sergey and I got references to the actual directory.\n> Do you happen to recall the order in which you installed/upgraded\n> Xcode and its command line tools?\nI generally just upgrade to the latest as it becomes available.\n>\n> >> I don't think I believe that argument. As a counterexample, supposing\n> >> that somebody were intentionally cross-compiling on an older OSX platform\n> >> but using a newer SDK, shouldn't they get an executable suited to the\n> >> SDK's target version?\n>\n> > Yep, that's exactly what this should fix:\n> > MACOSX_DEPLOYMENT_TARGET=11.0 ./configure\n> > checking for pwritev... yes\n> > Which fails at runtime on 10.15:\n>\n> Well yeah, exactly. 
It should fail at run-time, because you\n> cross-compiled an executable that's not built for the machine\n> you're on. What we need is to prevent configure from setting up\n> a cross-compile situation by default.\nThe toolchain already selects the correct deployment target by default, the\nissue is just that the configure test for pwritev was being done in a way that\nignored the deployment target version, I fixed that.\n>\n> regards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 10:42:07 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> On Tue, Jan 19, 2021 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Ah, got it. So \"xcrun --show-sdk-path\" tells us the right thing (that\n>> is, it *does* give us a symlink to a 10.15 SDK) but by refusing to\n>> believe we've got the right thing, we end up picking MacOSX11.1.sdk.\n>> Drat. I suppose we could drop the heuristic about wanting a version\n>> number in the SDK path, but I really don't want to do that. Now I'm\n>> thinking about trying to dereference the symlink after the first step.\n\n> The MacOSX11.1.sdk can build for a 10.15 target just fine when passed\n> an appropriate MACOSX_DEPLOYMENT_TARGET, so that SDK should be\n> fine.\n\nBut our out-of-the-box default should be to build for the current\nplatform; we don't want users to have to set MACOSX_DEPLOYMENT_TARGET\nfor that case. Besides, the problem we're having is exactly that Apple's\ndefinition of \"builds for a 10.15 target just fine\" is different from\nours. They think you should use a run-time test not a compile-time test\nto discover whether preadv is available, and we don't want to do that.\n\nIn almost all of the cases I've seen so far, Apple's compiler actually\ndoes default to using an SDK matching the platform. 
The problem we\nhave is that we try to name the SDK explicitly, and the current\nmethod is failing to pick the right one in your case. There are\nseveral reasons for using an explicit -isysroot rather than just\nletting the compiler default:\n\n* We have seen cases in which the compiler acts as though it has\n*no* default sysroot, and we have to help it out.\n\n* The explicit root reduces version-skew build hazards for extensions\nthat are not built at the same time as the core system.\n\n* There are a few tests in configure itself that need to know the\nsysroot path to check for files there.\n\nAnyway, the behavior you're seeing shows that 4823621db is still a\nbit shy of a load. I'm thinking about the attached as a further\nfix --- can you verify it helps for you?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 19 Jan 2021 15:54:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On Tue, Jan 19, 2021 at 1:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > On Tue, Jan 19, 2021 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Ah, got it. So \"xcrun --show-sdk-path\" tells us the right thing (that\n> >> is, it *does* give us a symlink to a 10.15 SDK) but by refusing to\n> >> believe we've got the right thing, we end up picking MacOSX11.1.sdk.\n> >> Drat. I suppose we could drop the heuristic about wanting a version\n> >> number in the SDK path, but I really don't want to do that. Now I'm\n> >> thinking about trying to dereference the symlink after the first step.\n>\n> > The MacOSX11.1.sdk can build for a 10.15 target just fine when passed\n> > an appropriate MACOSX_DEPLOYMENT_TARGET, so that SDK should be\n> > fine.\n>\n> But our out-of-the-box default should be to build for the current\n> platform; we don't want users to have to set MACOSX_DEPLOYMENT_TARGET\n> for that case. 
Besides, the problem we're having is exactly that Apple's\n> definition of \"builds for a 10.15 target just fine\" is different from\n> ours. They think you should use a run-time test not a compile-time test\n> to discover whether preadv is available, and we don't want to do that.\nThe default for MACOSX_DEPLOYMENT_TARGET is always the current\nrunning OS version from my understanding. So if I build with MacOSX11.1.sdk\non 10.15 with default settings the binaries will work fine because the\nMACOSX_DEPLOYMENT_TARGET gets set to 10.15 automatically even\nif the same SDK is capable of producing incompatible binaries if you set\nMACOSX_DEPLOYMENT_TARGET to 11.0.\n>\n> In almost all of the cases I've seen so far, Apple's compiler actually\n> does default to using an SDK matching the platform. The problem we\n> have is that we try to name the SDK explicitly, and the current\n> method is failing to pick the right one in your case. There are\n> several reasons for using an explicit -isysroot rather than just\n> letting the compiler default:\nNo, it's only the MACOSX_DEPLOYMENT_TARGET that matches the\nplatform, SDK can be arbitrary more or less, but it will work fine because\nthe autoselected MACOSX_DEPLOYMENT_TARGET will force compatibility\nno matter what SDK version you use. 
This is always how it has worked\nfrom what I've seen.\n>\n> * We have seen cases in which the compiler acts as though it has\n> *no* default sysroot, and we have to help it out.\n>\n> * The explicit root reduces version-skew build hazards for extensions\n> that are not built at the same time as the core system.\nThe deployment target is effectively entirely separate from SDK version,\nso it really shouldn't make a difference unless the SDK is significantly\nolder or newer than the running version from what I can tell.\n>\n> * There are a few tests in configure itself that need to know the\n> sysroot path to check for files there.\n>\n> Anyway, the behavior you're seeing shows that 4823621db is still a\n> bit shy of a load. I'm thinking about the attached as a further\n> fix --- can you verify it helps for you?\nBest I can tell it provides no change for me(this patch is tested on top of it)\nbecause it does not provide any MACOSX_DEPLOYMENT_TARGET\nbased feature detection for pwritev at all.\n>\n> regards, tom lane\n>\n\n\n", "msg_date": "Tue, 19 Jan 2021 15:47:50 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On Tue, Jan 19, 2021 at 3:47 PM James Hilliard\n<james.hilliard1@gmail.com> wrote:\n>\n> On Tue, Jan 19, 2021 at 1:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > James Hilliard <james.hilliard1@gmail.com> writes:\n> > > On Tue, Jan 19, 2021 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> Ah, got it. So \"xcrun --show-sdk-path\" tells us the right thing (that\n> > >> is, it *does* give us a symlink to a 10.15 SDK) but by refusing to\n> > >> believe we've got the right thing, we end up picking MacOSX11.1.sdk.\n> > >> Drat. I suppose we could drop the heuristic about wanting a version\n> > >> number in the SDK path, but I really don't want to do that. 
Now I'm\n> > >> thinking about trying to dereference the symlink after the first step.\n> >\n> > > The MacOSX11.1.sdk can build for a 10.15 target just fine when passed\n> > > an appropriate MACOSX_DEPLOYMENT_TARGET, so that SDK should be\n> > > fine.\n> >\n> > But our out-of-the-box default should be to build for the current\n> > platform; we don't want users to have to set MACOSX_DEPLOYMENT_TARGET\n> > for that case. Besides, the problem we're having is exactly that Apple's\n> > definition of \"builds for a 10.15 target just fine\" is different from\n> > ours. They think you should use a run-time test not a compile-time test\n> > to discover whether preadv is available, and we don't want to do that.\n> The default for MACOSX_DEPLOYMENT_TARGET is always the current\n> running OS version from my understanding. So if I build with MacOSX11.1.sdk\n> on 10.15 with default settings the binaries will work fine because the\n> MACOSX_DEPLOYMENT_TARGET gets set to 10.15 automatically even\n> if the same SDK is capable of producing incompatible binaries if you set\n> MACOSX_DEPLOYMENT_TARGET to 11.0.\n> >\n> > In almost all of the cases I've seen so far, Apple's compiler actually\n> > does default to using an SDK matching the platform. The problem we\n> > have is that we try to name the SDK explicitly, and the current\n> > method is failing to pick the right one in your case. There are\n> > several reasons for using an explicit -isysroot rather than just\n> > letting the compiler default:\n> No, it's only the MACOSX_DEPLOYMENT_TARGET that matches the\n> platform, SDK can be arbitrary more or less, but it will work fine because\n> the autoselected MACOSX_DEPLOYMENT_TARGET will force compatibility\n> no matter what SDK version you use. 
This is always how it has worked\n> from what I've seen.\n> >\n> > * We have seen cases in which the compiler acts as though it has\n> > *no* default sysroot, and we have to help it out.\n> >\n> > * The explicit root reduces version-skew build hazards for extensions\n> > that are not built at the same time as the core system.\n> The deployment target is effectively entirely separate from SDK version,\n> so it really shouldn't make a difference unless the SDK is significantly\n> older or newer than the running version from what I can tell.\n> >\n> > * There are a few tests in configure itself that need to know the\n> > sysroot path to check for files there.\n> >\n> > Anyway, the behavior you're seeing shows that 4823621db is still a\n> > bit shy of a load. I'm thinking about the attached as a further\n> > fix --- can you verify it helps for you?\n> Best I can tell it provides no change for me(this patch is tested on top of it)\n> because it does not provide any MACOSX_DEPLOYMENT_TARGET\n> based feature detection for pwritev at all.\nActually, this path looks wrong in general, the value for\n\"xcrun --sdk macosx --show-sdk-path\" should take precedence over\n\"xcrun --show-sdk-path\" as the latter may be used for IOS potentially.\nOn my system \"xcodebuild -version -sdk macosx Path\" and\n\"xcrun --sdk macosx --show-sdk-path\" both point to the\ncorrect latest MacOSX11.1.sdk SDK while \"xcrun --show-sdk-path\"\npoints to the older one.\n> >\n> > regards, tom lane\n> >\n\n\n", "msg_date": "Tue, 19 Jan 2021 16:07:26 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX."
}, { "msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> Actually, this path looks wrong in general, the value for\n> \"xcrun --sdk macosx --show-sdk-path\" should take precedence over\n> \"xcrun --show-sdk-path\" as the latter may be used for IOS potentially.\n\nWhat is \"potentially\"? I've found no direct means to control the\nSDK path at all, but so far it appears that \"xcrun --show-sdk-path\"\nagrees with the compiler's default -isysroot path as seen in the\ncompiler's -v output. I suspect that this isn't coincidental,\nbut reflects xcrun actually being used in the compiler launch\nprocess. If it were to flip over to using a IOS SDK, that would\nmean that bare \"cc\" would generate nonfunctional executables,\nwhich just about any onlooker would agree is broken.\n\nI'm really not excited about trying to make the build work with\na non-native SDK as you are proposing. I think that's just going\nto lead to a continuing stream of problems, because of Apple's\nopinions about how cross-version compatibility should work.\nIt also seems like unnecessary complexity, because there is always\n(AFAICS) a native SDK version available. We just need to find it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 20:37:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX."
}, { "msg_contents": "On Tue, Jan 19, 2021 at 6:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > Actually, this path looks wrong in general, the value for\n> > \"xcrun --sdk macosx --show-sdk-path\" should take precedence over\n> > \"xcrun --show-sdk-path\" as the latter may be used for IOS potentially.\n>\n> What is \"potentially\"?\n\nWell I'm not sure the SDK parameter always defaults to macos although\nI guess it probably does as I couldn't figure out a way to change it:\n$ xcodebuild -showsdks\niOS SDKs:\n iOS 14.3 -sdk iphoneos14.3\niOS Simulator SDKs:\n Simulator - iOS 14.3 -sdk iphonesimulator14.3\nmacOS SDKs:\n DriverKit 20.2 -sdk driverkit.macosx20.2\n macOS 11.1 -sdk macosx11.1\ntvOS SDKs:\n tvOS 14.3 -sdk appletvos14.3\ntvOS Simulator SDKs:\n Simulator - tvOS 14.3 -sdk appletvsimulator14.3\nwatchOS SDKs:\n watchOS 7.2 -sdk watchos7.2\nwatchOS Simulator SDKs:\n Simulator - watchOS 7.2 -sdk watchsimulator7.2\n\n> I've found no direct means to control the\n> SDK path at all, but so far it appears that \"xcrun --show-sdk-path\"\n> agrees with the compiler's default -isysroot path as seen in the\n> compiler's -v output. I suspect that this isn't coincidental,\n> but reflects xcrun actually being used in the compiler launch\n> process. 
If it were to flip over to using a IOS SDK, that would\n> mean that bare \"cc\" would generate nonfunctional executables,\n> which just about any onlooker would agree is broken.\n\nSo there's some more weirdness involved here, whether or not you\nhave the command line install seems to affect the output of the\n\"xcrun --show-sdk-path\" command, but not the\n\"xcrun --sdk macosx --show-sdk-path\" command.\n\nThis is what I get without the command line tools:\n$ xcrun --show-sdk-path\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk\n$ xcrun --sdk macosx --show-sdk-path\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\nthis last one is just a symlink to the other path.\n\nWith command line tools this is different however:\n$ xcrun --show-sdk-path\n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk\n$ xcrun --sdk macosx --show-sdk-path\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n\nNote that the /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk\nis different from the normal SDK and doesn't seem to be able to generate\nbinaries that target a 11.0 deployment target on my 10.15 system, however\nI am unsure if this behavior can be relied upon.\n\nSo in terms of what works best, the newer normal SDK has the most flexibility\nas it can produce both 10.15 target binaries and 11.0 target binaries\ndepending on the MACOSX_DEPLOYMENT_TARGET while the command\nline tools SDK can only produce 10.15 target binaries it would appear.\n\nNote that with my patch the binaries will always be compatible with the\nhost system by default, even if the SDK is capable of producing binaries\nthat are incompatible so building postgres works with and without the\ncommand line tools SDK.\n\nSo I think \"xcrun --sdk macosx --show-sdk-path\" is probably preferable\nbut either should work as long as we can properly detect deployment\ntarget 
symbol availability; regardless, this SDK sysroot selection issue is\neffectively an entirely different issue from the feature detection not properly\nrespecting the configured deployment target.\n\n>\n> I'm really not excited about trying to make the build work with\n> a non-native SDK as you are proposing. I think that's just going\n> to lead to a continuing stream of problems, because of Apple's\n> opinions about how cross-version compatibility should work.\n\nWell the minimum required target version is pretty much strictly based on\nMACOSX_DEPLOYMENT_TARGET so our feature detection still needs\nto use that, otherwise cross-target compilation for newer or older targets will\nnot work correctly.\n\n From my understanding the reason AC_REPLACE_FUNCS does not\nthrow an error for deployment-target-incompatible functions is that it only\nchecks if the function exists and not if it is actually usable, this is\nwhy I had to add an explicit AC_LANG_PROGRAM compile test to\nproperly trigger a compile failure if the function is not usable for a\nparticular deployment target version, merely checking if the function\nexists in the header is not sufficient.\n\n> It also seems like unnecessary complexity, because there is always\n> (AFAICS) a native SDK version available. We just need to find it.\n\nBest I can tell this is not true, it is some(most?) of the time but\nit's not something\nwe can rely upon as systems may only contain a newer SDK, but this newer SDK\nis still capable of producing binaries that can run on the build host system so\nthis shouldn't be an issue as long as we can do target feature\ndetection properly.\n\n>\n> regards, tom lane\n\n\n", "msg_date": "Wed, 20 Jan 2021 14:52:19 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." 
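James's point about AC_REPLACE_FUNCS vs. an explicit compile test can be made concrete. The real probe is a C program built via AC_LANG_PROGRAM, but the vectored-write semantics it has to exercise can be sketched portably with Python's os.pwritev wrapper (an illustration of the call's behavior only, not the actual configure test):

```python
import os
import tempfile

# Sketch of what a pwritev() usability probe exercises: gather two
# buffers and write them at an explicit file offset in one call.
# (The actual configure check is a C program including <sys/uio.h>;
# os.pwritev is just Python's wrapper around the same syscall.)
fd, path = tempfile.mkstemp()
try:
    written = os.pwritev(fd, [b"hello ", b"world"], 0)
    data = os.pread(fd, written, 0)
finally:
    os.close(fd)
    os.unlink(path)

print(written, data)  # 11 b'hello world'
```

The key difference: a symbol-existence check (what AC_REPLACE_FUNCS does) passes as soon as _pwritev is present in libc, whereas compiling a call like the one above against <sys/uio.h> lets the SDK's availability annotations reject it when MACOSX_DEPLOYMENT_TARGET is too old, which is the signal the configure test needs.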
}, { "msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> On Tue, Jan 19, 2021 at 6:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I've found no direct means to control the\n>> SDK path at all, but so far it appears that \"xcrun --show-sdk-path\"\n>> agrees with the compiler's default -isysroot path as seen in the\n>> compiler's -v output. I suspect that this isn't coincidental,\n>> but reflects xcrun actually being used in the compiler launch\n>> process. If it were to flip over to using a IOS SDK, that would\n>> mean that bare \"cc\" would generate nonfunctional executables,\n>> which just about any onlooker would agree is broken.\n\n> So there's some more weirdness involved here, whether or not you\n> have the command line install seems to affect the output of the\n> \"xcrun --show-sdk-path\" command, but not the\n> \"xcrun --sdk macosx --show-sdk-path\" command.\n\nYeah, that's what we discovered in the other thread. It seems that\nwith \"--sdk macosx\" you'll always get a pointer to the (solitary)\nSDK under /Applications/Xcode.app, but with the short \"xcrun\n--show-sdk-path\" command you might get either that or a pointer to\nsomething under /Library/Developer/CommandLineTools.\n\nI now believe what is actually happening with the short command is\nthat it's iterating through the available SDKs (according to some not\nvery clear search path) and picking the first one it finds that\nmatches the host system version. That matches the ktrace evidence\nthat shows it reading the SDKSettings.plist file in each SDK\ndirectory. The fact that it can seize on either an actual directory\nor an equivalent symlink might be due to chance ordering of directory\nentries. (It'd be interesting to see \"ls -f\" output for your\n/Library/Developer/CommandLineTools/SDKs directory ... 
though if\nyou've been experimenting with deinstall/reinstall, there's no\nreason to suppose the entry order is still the same.)\n\nI'm not sure that the case of not having the \"command line tools\"\ninstalled is interesting for our purposes. AFAIK you have to have\nthat in order to have access to required tools like bison and gmake.\n(That reminds me, I was intending to add something to our docs\nabout how-to-build-from-source to say that you need to install those.)\n\n> Note that with my patch the binaries will always be compatible with the\n> host system by default, even if the SDK is capable of producing binaries\n> that are incompatible so building postgres works with and without the\n> command line tools SDK.\n\nYeah. I don't see that as a benefit actually. Adding the\n-no_weak_imports linker switch (or the other one you're suggesting)\nmeans that you *cannot* cross-compile for a newer macOS version,\neven if you set PG_SYSROOT and/or MACOSX_DEPLOYMENT_TARGET with the\nintention of doing so. You'll still get a build that reflects the set\nof kernel calls available on the host system. Admittedly, this is a\ncase that's not likely to be of interest to very many people, but\nI don't see why a method with that restriction is superior to picking\na default SDK that matches the host system (and can be overridden).\n\n> So I think \"xcrun --sdk macosx --show-sdk-path\" is probably preferable\n> but either should work as long as we can properly detect deployment\n> target symbol availability, regardless this SDK sysroot selection issue is\n> effectively an entirely different issue from the feature detection not properly\n> respecting the configured deployment target.\n\nNo, I think it's pretty much equivalent. 
If we pick the right SDK\nthen we'll get the build we want.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Jan 2021 18:07:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On Wed, Jan 20, 2021 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > On Tue, Jan 19, 2021 at 6:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I've found no direct means to control the\n> >> SDK path at all, but so far it appears that \"xcrun --show-sdk-path\"\n> >> agrees with the compiler's default -isysroot path as seen in the\n> >> compiler's -v output. I suspect that this isn't coincidental,\n> >> but reflects xcrun actually being used in the compiler launch\n> >> process. If it were to flip over to using a IOS SDK, that would\n> >> mean that bare \"cc\" would generate nonfunctional executables,\n> >> which just about any onlooker would agree is broken.\n>\n> > So there's some more weirdness involved here, whether or not you\n> > have the command line install seems to affect the output of the\n> > \"xcrun --show-sdk-path\" command, but not the\n> > \"xcrun --sdk macosx --show-sdk-path\" command.\n>\n> Yeah, that's what we discovered in the other thread. It seems that\n> with \"--sdk macosx\" you'll always get a pointer to the (solitary)\n> SDK under /Applications/Xcode.app, but with the short \"xcrun\n> --show-sdk-path\" command you might get either that or a pointer to\n> something under /Library/Developer/CommandLineTools.\n>\n> I now believe what is actually happening with the short command is\n> that it's iterating through the available SDKs (according to some not\n> very clear search path) and picking the first one it finds that\n> matches the host system version. That matches the ktrace evidence\n> that shows it reading the SDKSettings.plist file in each SDK\n> directory. 
The fact that it can seize on either an actual directory\n> or an equivalent symlink might be due to chance ordering of directory\n> entries. (It'd be interesting to see \"ls -f\" output for your\n> /Library/Developer/CommandLineTools/SDKs directory ... though if\n\nWell at the moment I completely deleted that directory...and the build\nworks fine with my patch still.\n\n> you've been experimenting with deinstall/reinstall, there's no\n> reason to suppose the entry order is still the same.)\n>\n> I'm not sure that the case of not having the \"command line tools\"\n> installed is interesting for our purposes. AFAIK you have to have\n> that in order to have access to required tools like bison and gmake.\n> (That reminds me, I was intending to add something to our docs\n> about how-to-build-from-source to say that you need to install those.)\n\nYeah, not 100% sure but I was able to build just fine after deleting my\ncommand line tools. I think it just switched to using the normal SDK\ntoolchain, I guess that's the fallback logic doing that.\n\nIt would be pretty annoying to have to install an outdated SDK just to\nbuild postgres for no other reason than the autoconf feature detection\nbeing broken.\n\n>\n> > Note that with my patch the binaries will always be compatible with the\n> > host system by default, even if the SDK is capable of producing binaries\n> > that are incompatible so building postgres works with and without the\n> > command line tools SDK.\n>\n> Yeah. I don't see that as a benefit actually. Adding the\n> -no_weak_imports linker switch (or the other one you're suggesting)\n> means that you *cannot* cross-compile for a newer macOS version,\n> even if you set PG_SYSROOT and/or MACOSX_DEPLOYMENT_TARGET with the\n> intention of doing so.\n\nBest I can tell this isn't true, I was able to cross compile for a newer\nMACOSX_DEPLOYMENT_TARGET than my build host just fine. 
The\nbinary fails with a \"Symbol not found: _pwritev\" error when I try\nto run it on the system that built it.\n\nIn regards to the -no_weak_imports switch...that is something different\nfrom my understanding as it just strips the weak imports forcing the\nfallback code paths to be taken instead, essentially functioning as if\nthe weak symbols are never available. It's largely separate from the\ndeployment target from my understanding as weak symbols are feature\nthat lets you use newer syscalls while still providing backwards\ncompatible fallbacks for older systems.\n\n> You'll still get a build that reflects the set\n> of kernel calls available on the host system. Admittedly, this is a\n> case that's not likely to be of interest to very many people, but\n> I don't see why a method with that restriction is superior to picking\n> a default SDK that matches the host system (and can be overridden).\n\nBut to fix the build when using a newer SDK overriding the SDK location\ndoes not help, you would have to override the broken feature detection.\n\n>\n> > So I think \"xcrun --sdk macosx --show-sdk-path\" is probably preferable\n> > but either should work as long as we can properly detect deployment\n> > target symbol availability, regardless this SDK sysroot selection issue is\n> > effectively an entirely different issue from the feature detection not properly\n> > respecting the configured deployment target.\n>\n> No, I think it's pretty much equivalent. If we pick the right SDK\n> then we'll get the build we want.\n\nGenerally any recent SDK installed should work as long as the feature detection\nin autoconf isn't broken. 
I'm not really sure what's the most correct option in\nregards to picking a SDK version, however the feature detection should be\nfixed regardless IMO.\n\n>\n> regards, tom lane\n\n\n", "msg_date": "Wed, 20 Jan 2021 16:49:50 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On 21.01.2021 02:07, Tom Lane wrote:\n> I now believe what is actually happening with the short command is\n> that it's iterating through the available SDKs (according to some not\n> very clear search path) and picking the first one it finds that\n> matches the host system version. That matches the ktrace evidence\n> that shows it reading the SDKSettings.plist file in each SDK\n> directory.\n\nYes, you are right. After some more digging...\n\nIt searches the DEVELOPER_DIR first and then \n/Library/Developer/CommandLineTools, which is hardcoded.\n\nMy DEVELOPER_DIR is\n% xcode-select -p\n/Applications/Xcode.app/Contents/Developer\n\n(For more detail try \"otool -tV /usr/lib/libxcselect.dylib -p \n_xcselect_get_developer_dir_path\".)\n\nIt reads ProductVersion from \n/System/Library/CoreServices/SystemVersion.plist\n\n% plutil -p /System/Library/CoreServices/SystemVersion.plist | grep \nProductVersion\n \"ProductVersion\" => \"10.15.7\"\n\nStrips anything after the second dot, and prepends \"macosx\" to it, which \ngives \"macosx10.15\".\n\nThen it scans through SDK dirs looking up CanonicalName from \nSDKSettings.plist until it finds a match with \"macosx10.15\".\n\n\nThe overall callstack:\n\n% sudo dtrace -n 'syscall::getdirentries64:entry { ustack() }' -c 'xcrun \n--show-sdk-path'\ndtrace: description 'syscall::getdirentries64:entry ' matched 1 probe\n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\ndtrace: pid 20183 has exited\nCPU ID FUNCTION:NAME\n 0 846 getdirentries64:entry\n libsystem_kernel.dylib`__getdirentries64+0xa\n 
libsystem_c.dylib`readdir$INODE64+0x23\n libsystem_c.dylib`scandir$INODE64+0x6c\n libxcrun.dylib`cltools_lookup_sdk_by_key+0x5f\n libxcrun.dylib`cltools_lookup_boot_system_sdk+0xda\n libxcrun.dylib`xcinfocache_resolve_sdkroot+0xc0\n libxcrun.dylib`xcrun_main2+0x57a\n libxcrun.dylib`xcrun_main+0x9\n libxcselect.dylib`xcselect_invoke_xcrun_via_library+0xc8\n libxcselect.dylib`xcselect_invoke_xcrun+0x25a\n xcrun`DYLD-STUB$$getprogname\n libdyld.dylib`start+0x1\n xcrun`0x2\n\n 0 846 getdirentries64:entry\n libsystem_kernel.dylib`__getdirentries64+0xa\n libsystem_c.dylib`readdir$INODE64+0x23\n libsystem_c.dylib`scandir$INODE64+0x6c\n libxcrun.dylib`cltools_lookup_sdk_by_key+0x5f\n libxcrun.dylib`cltools_lookup_boot_system_sdk+0xf3\n libxcrun.dylib`xcinfocache_resolve_sdkroot+0xc0\n libxcrun.dylib`xcrun_main2+0x57a\n libxcrun.dylib`xcrun_main+0x9\n libxcselect.dylib`xcselect_invoke_xcrun_via_library+0xc8\n libxcselect.dylib`xcselect_invoke_xcrun+0x25a\n xcrun`DYLD-STUB$$getprogname\n libdyld.dylib`start+0x1\n xcrun`0x2\n\n\nThe SDK search path:\n\n% sudo dtrace -n 'pid$target:::entry \n/probefunc==\"cltools_lookup_sdk_by_key\"/ { trace(copyinstr(arg0)); \ntrace(copyinstr(arg1)) }' -c 'xcrun --show-sdk-path'\ndtrace: description 'pid$target:::entry ' matched 17293 probes\n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\ndtrace: pid 20191 has exited\nCPU ID FUNCTION:NAME\n 8 398290 cltools_lookup_sdk_by_key:entry \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer \n macosx10.15\n 9 398290 cltools_lookup_sdk_by_key:entry \n/Library/Developer/CommandLineTools macosx10.15\n\n\nThe properties read from SDKSettings.plist:\n\n% sudo dtrace -n 'pid$target:::entry \n/probefunc==\"_cltools_lookup_property_in_path\"/ { \ntrace(copyinstr(arg0)); trace(copyinstr(arg1)); trace(copyinstr(arg2)) \n}' -c 'xcrun --show-sdk-path'\ndtrace: description 'pid$target:::entry ' matched 17293 
probes\n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\ndtrace: pid 20195 has exited\nCPU ID FUNCTION:NAME\n 8 398288 _cltools_lookup_property_in_path:entry / \n System/Library/CoreServices/SystemVersion.plist \nProductVersion\n 8 398288 _cltools_lookup_property_in_path:entry \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/DriverKit20.2.sdk \n SDKSettings.plist IsBaseSDK\n 8 398288 _cltools_lookup_property_in_path:entry \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/DriverKit20.2.sdk \n SDKSettings.plist CanonicalName\n 4 398288 _cltools_lookup_property_in_path:entry \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/DriverKit20.2.sdk \n SDKSettings.plist CanonicalNameForBuildSettings\n 4 398288 _cltools_lookup_property_in_path:entry \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk \n SDKSettings.plist IsBaseSDK\n 4 398288 _cltools_lookup_property_in_path:entry \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk \n SDKSettings.plist CanonicalName\n 4 398288 _cltools_lookup_property_in_path:entry \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk \n SDKSettings.plist CanonicalNameForBuildSettings\n 4 398288 _cltools_lookup_property_in_path:entry \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk \n SDKSettings.plist PLATFORM_NAME\n 4 398288 _cltools_lookup_property_in_path:entry \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk \n SDKSettings.plist IsBaseSDK\n 2 398288 _cltools_lookup_property_in_path:entry \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk \n SDKSettings.plist CanonicalName\n 2 398288 _cltools_lookup_property_in_path:entry 
\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk \n SDKSettings.plist CanonicalNameForBuildSettings\n 2 398288 _cltools_lookup_property_in_path:entry \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk \n SDKSettings.plist PLATFORM_NAME\n 2 398288 _cltools_lookup_property_in_path:entry \n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk SDKSettings.plist \n IsBaseSDK\n 2 398288 _cltools_lookup_property_in_path:entry \n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk SDKSettings.plist \n CanonicalName\n 2 398288 _cltools_lookup_property_in_path:entry \n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk SDKSettings.plist \n CanonicalNameForBuildSettings\n 0 398288 _cltools_lookup_property_in_path:entry \n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk SDKSettings.plist \n PLATFORM_NAME\n 0 398288 _cltools_lookup_property_in_path:entry \n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk \nSDKSettings.plist IsBaseSDK\n 0 398288 _cltools_lookup_property_in_path:entry \n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk \nSDKSettings.plist CanonicalName\n\n\nBTW, on my machine /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk \nis skipped because it points to 11.0:\n\n% ls -l /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk\nlrwxr-xr-x 1 root wheel 14 Nov 17 02:21 \n/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -> MacOSX11.0.sdk\n\nFor more detail try\n% otool -tV \n/Applications/Xcode.app/Contents/Developer/usr/lib/libxcrun.dylib -p \n_cltools_lookup_boot_system_sdk\n\n\n", "msg_date": "Thu, 21 Jan 2021 11:39:48 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." 
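In short, the lookup traced above boils down to: derive "macosx<major>.<minor>" from ProductVersion, then walk the SDK search path comparing each SDKSettings.plist CanonicalName until one matches. A rough Python paraphrase of that observed behavior (a reconstruction from the dtrace output, not Apple's actual code; the candidate list is illustrative):

```python
def sdk_key(product_version):
    # "10.15.7" -> "macosx10.15"; "11.1" -> "macosx11.1"
    # (xcrun strips anything after the second dot and prepends "macosx")
    return "macosx" + ".".join(product_version.split(".")[:2])

def lookup_sdk(product_version, candidates):
    # candidates: (sdk_path, canonical_name) pairs in search-path order,
    # as read from each SDK's SDKSettings.plist
    key = sdk_key(product_version)
    for sdk_path, canonical_name in candidates:
        if canonical_name == key:
            return sdk_path
    return None

candidates = [
    ("/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform"
     "/Developer/SDKs/MacOSX11.1.sdk", "macosx11.1"),
    ("/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk",
     "macosx10.15"),
]
print(lookup_sdk("10.15.7", candidates))
# -> /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk
```

This also explains why the MacOSX.sdk symlink pointing at MacOSX11.0.sdk is skipped on a 10.15 host: its CanonicalName never matches the "macosx10.15" key.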
}, { "msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> On Wed, Jan 20, 2021 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm not sure that the case of not having the \"command line tools\"\n>> installed is interesting for our purposes. AFAIK you have to have\n>> that in order to have access to required tools like bison and gmake.\n>> (That reminds me, I was intending to add something to our docs\n>> about how-to-build-from-source to say that you need to install those.)\n\n> Yeah, not 100% sure but I was able to build just fine after deleting my\n> command line tools.\n\nHm. I've never been totally clear on what's included in the \"command line\ntools\", although it's now apparent that one thing that gets installed is\nan SDK matching the host OS version. However, Apple's description at [1]\nsays\n\n Command Line Tools\n\n Download the macOS SDK, headers, and build tools such as the Apple\n LLVM compiler and Make. These tools make it easy to install open\n source software or develop on UNIX within Terminal. macOS can\n automatically download these tools the first time you try to build\n software, and they are available on the downloads page.\n\nwhich certainly strongly implies that gmake is not there otherwise.\nAt this point I lack any \"bare\" macOS system to check it on. I wonder\nwhether you have a copy of make available from MacPorts or Homebrew.\nOr maybe uninstalling the command line tools doesn't really remove\neverything?\n\n> It would be pretty annoying to have to install an outdated SDK just to\n> build postgres for no other reason than the autoconf feature detection\n> being broken.\n\nIt's only as \"outdated\" as your host system ;-). Besides, it doesn't\nlook like Apple's really giving you a choice not to.\n\nThe long and short of this is that I'm unwilling to buy into maintaining\nour own substitutes for standard autoconf probes in order to make it\npossible to use the wrong SDK version. 
The preadv/pwritev case is already\nmessy enough, and I fear that trying to support such scenarios is going to\nlead to more and more pain in the future.\n\n\t\t\tregards, tom lane\n\n[1] https://developer.apple.com/xcode/features/\n\n\n", "msg_date": "Thu, 21 Jan 2021 13:38:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On Thu, Jan 21, 2021 at 11:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > On Wed, Jan 20, 2021 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I'm not sure that the case of not having the \"command line tools\"\n> >> installed is interesting for our purposes. AFAIK you have to have\n> >> that in order to have access to required tools like bison and gmake.\n> >> (That reminds me, I was intending to add something to our docs\n> >> about how-to-build-from-source to say that you need to install those.)\n>\n> > Yeah, not 100% sure but I was able to build just fine after deleting my\n> > command line tools.\n>\n> Hm. I've never been totally clear on what's included in the \"command line\n> tools\", although it's now apparent that one thing that gets installed is\n> an SDK matching the host OS version. However, Apple's description at [1]\n> says\n>\n> Command Line Tools\n>\n> Download the macOS SDK, headers, and build tools such as the Apple\n> LLVM compiler and Make. These tools make it easy to install open\n> source software or develop on UNIX within Terminal. macOS can\n> automatically download these tools the first time you try to build\n> software, and they are available on the downloads page.\n>\n> which certainly strongly implies that gmake is not there otherwise.\n> At this point I lack any \"bare\" macOS system to check it on. 
I wonder\n> whether you have a copy of make available from MacPorts or Homebrew.\n> Or maybe uninstalling the command line tools doesn't really remove\n> everything?\nYeah, not entirely sure there but I do use homebrew.\n>\n> > It would be pretty annoying to have to install an outdated SDK just to\n> > build postgres for no other reason than the autoconf feature detection\n> > being broken.\n>\n> It's only as \"outdated\" as your host system ;-). Besides, it doesn't\n> look like Apple's really giving you a choice not to.\nThe newer SDK should work fine as long as the autoconf feature\ndetection is fixed somehow.\n>\n> The long and short of this is that I'm unwilling to buy into maintaining\n> our own substitutes for standard autoconf probes in order to make it\n> possible to use the wrong SDK version. The preadv/pwritev case is already\n> messy enough, and I fear that trying to support such scenarios is going to\n> lead to more and more pain in the future.\nWell it's actually a larger issue: if it isn't fixed then the ability\nto change the\nMACOSX_DEPLOYMENT_TARGET doesn't work properly, not only for\nthe case of having a newer SDK on an older host but it would also prevent\nMACOSX_DEPLOYMENT_TARGET from working in general such as for\nbuilding with support for older targets from newer hosts, I'll see if there's\nmaybe a better way to fix the feature detection that's less of a maintenance\nissue.\n>\n> regards, tom lane\n>\n> [1] https://developer.apple.com/xcode/features/\n\n\n", "msg_date": "Thu, 21 Jan 2021 15:17:29 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." 
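The deployment-target rule James is describing can be stated compactly: a function is safe to call directly only if MACOSX_DEPLOYMENT_TARGET is at least the version that introduced it; below that, the SDK makes it a weak import that may resolve to NULL at runtime on older systems. Assuming pwritev is marked as introduced in macOS 11.0 (consistent with the "Symbol not found: _pwritev" failure reported earlier in the thread), a minimal sketch:

```python
def parse_version(v):
    # "10.15" -> (10, 15); tuples compare component-wise, so
    # (10, 15) < (11, 0) as expected
    return tuple(int(part) for part in v.split("."))

def directly_usable(deployment_target, introduced):
    # Models the effect of the SDK's availability annotations: the call
    # compiles as a strong reference only when the deployment target
    # reaches the version that introduced the symbol.
    return parse_version(deployment_target) >= parse_version(introduced)

PWRITEV_INTRODUCED = "11.0"  # assumption based on the macOS 11 SDK headers

print(directly_usable("11.0", PWRITEV_INTRODUCED))   # True
print(directly_usable("10.15", PWRITEV_INTRODUCED))  # False: weak import
```

This is why a probe has to honor the deployment target rather than just the SDK version: the same SDK can legitimately answer "yes" or "no" depending on what MACOSX_DEPLOYMENT_TARGET is set to.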
}, { "msg_contents": "On 22.01.2021 01:17, James Hilliard wrote:\n> On Thu, Jan 21, 2021 at 11:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> James Hilliard <james.hilliard1@gmail.com> writes:\n>>> On Wed, Jan 20, 2021 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> I'm not sure that the case of not having the \"command line tools\"\n>>>> installed is interesting for our purposes. AFAIK you have to have\n>>>> that in order to have access to required tools like bison and gmake.\n>>>> (That reminds me, I was intending to add something to our docs\n>>>> about how-to-build-from-source to say that you need to install those.)\n>>\n>>> Yeah, not 100% sure but I was able to build just fine after deleting my\n>>> command line tools.\n>>\n>> Hm. I've never been totally clear on what's included in the \"command line\n>> tools\", although it's now apparent that one thing that gets installed is\n>> an SDK matching the host OS version. However, Apple's description at [1]\n>> says\n>>\n>> Command Line Tools\n>>\n>> Download the macOS SDK, headers, and build tools such as the Apple\n>> LLVM compiler and Make. These tools make it easy to install open\n>> source software or develop on UNIX within Terminal. macOS can\n>> automatically download these tools the first time you try to build\n>> software, and they are available on the downloads page.\n>>\n>> which certainly strongly implies that gmake is not there otherwise.\n>> At this point I lack any \"bare\" macOS system to check it on. I wonder\n>> whether you have a copy of make available from MacPorts or Homebrew.\n>> Or maybe uninstalling the command line tools doesn't really remove\n>> everything?\n> Yeah, not entirely sure there but I do use homebrew.\n\n\nFWIW, I tested with a clean install of Catalina. Before I install \nanything at all, I already have xcode-select, xcrun and all the shims in \n/usr/bin for developer tools, including cc, make, git, xcodebuild... 
\nJust about everything listed in the FILES section of \"man xcode-select\".\n\nWhen I run any tool (except xcode-select), a GUI dialog pops up offering \nto install the Command Line Tools. So apparently those shims are not \nfunctional yet. I rejected the installation.\n\nInstead I downloaded Xcode12.1.xip via [1], the latest version with \nmacosx10.15 SDK. I unpacked it and installed by dragging Xcode.app to \n/Applications. It seems to me there is no magic behind the scenes, just \nmoving the directory. I selectively checked that the shims in /usr/bin \ndidn't change after that.\n\nNow, running \"cc\" tells me that I have to accept the Xcode license \nagreement. After accepting it, all the shims in /usr/bin start to work, \nforwarding to the real tools inside Xcode.app.\n\nIf I run the Homebrew installer, it says that it's going to install the \nCommand Line Tools. I don't know why it needs them, all the tools are \nthere already. I thought that CLT is a lighter-weight option when you \ndon't want the full Xcode installation, but Homebrew requires them anyway.\n\nI rejected to install CLT and abandoned Homebrew. Then I just cloned and \nbuilt Postgres successfully. So it looks like Xcode is really enough, at \nleast on a recent macOS version.\n\n\n[1] https://xcodereleases.com\n\n-- \nSergey Shinderuk\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Fri, 22 Jan 2021 02:32:46 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n> I rejected to install CLT and abandoned Homebrew. Then I just cloned and \n> built Postgres successfully. So it looks like Xcode is really enough, at \n> least on a recent macOS version.\n\nHm. 
I seem to recall having had to install CLT as well as Xcode back\nin the day, but maybe Apple improved that. On the other side of the\ncoin, it also seems to be possible to build PG with only CLT and not\nXcode. I didn't try to verify that with a scorched-earth test, but\nI did trash Xcode (and empty trash) on my wife's Mac, and I could\nstill build and \"make check\" with only the CLT in place.\n\n[ pokes more carefully... ] Ah-hah, I see why I needed the CLT.\nI bet you'll find that you can't build from \"git clean -dfx\" state\nwith only Xcode, because comparing the contents of\n/Applications/Xcode.app/Contents/Developer/usr/bin and\n/Library/Developer/CommandLineTools/usr/bin on my own Mac,\nI observe that only the CLT provides bison and flex. I also see\ninstall_name_tool only in the CLT; we don't depend on that today,\nbut may soon (see the latest thread about coping with SIP).\n\nOn the whole it looks like we should recommend installing the CLT\nand not bothering with Xcode, which is about 10X the size:\n\n$ du -hs /Library/Developer/CommandLineTools\n1.1G /Library/Developer/CommandLineTools\n$ du -hs /Applications/Xcode.app\n 15G /Applications/Xcode.app\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Jan 2021 12:12:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n> If I run the Homebrew installer, it says that it's going to install the \n> Command Line Tools. I don't know why it needs them, all the tools are \n> there already. 
I thought that CLT is a lighter-weight option when you \n> don't want the full Xcode installation, but Homebrew requires them anyway.\n\nBTW, reading [1] I see\n\n You can install Xcode, the CLT, or both; Homebrew supports all three\n configurations.\n\nSo I'm not sure why you got that prompt, unless you were using a formula\nthat knew you were going to need bison.\n\n\t\t\tregards, tom lane\n\n[1] https://docs.brew.sh/Installation#3\n\n\n", "msg_date": "Fri, 22 Jan 2021 13:38:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On Thu, Jan 21, 2021 at 11:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > On Wed, Jan 20, 2021 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I'm not sure that the case of not having the \"command line tools\"\n> >> installed is interesting for our purposes. AFAIK you have to have\n> >> that in order to have access to required tools like bison and gmake.\n> >> (That reminds me, I was intending to add something to our docs\n> >> about how-to-build-from-source to say that you need to install those.)\n>\n> > Yeah, not 100% sure but I was able to build just fine after deleting my\n> > command line tools.\n>\n> Hm. I've never been totally clear on what's included in the \"command line\n> tools\", although it's now apparent that one thing that gets installed is\n> an SDK matching the host OS version. However, Apple's description at [1]\n> says\n>\n> Command Line Tools\n>\n> Download the macOS SDK, headers, and build tools such as the Apple\n> LLVM compiler and Make. These tools make it easy to install open\n> source software or develop on UNIX within Terminal. 
macOS can\n> automatically download these tools the first time you try to build\n> software, and they are available on the downloads page.\n>\n> which certainly strongly implies that gmake is not there otherwise.\n> At this point I lack any \"bare\" macOS system to check it on. I wonder\n> whether you have a copy of make available from MacPorts or Homebrew.\n> Or maybe uninstalling the command line tools doesn't really remove\n> everything?\n>\n> > It would be pretty annoying to have to install an outdated SDK just to\n> > build postgres for no other reason than the autoconf feature detection\n> > being broken.\n>\n> It's only as \"outdated\" as your host system ;-). Besides, it doesn't\n> look like Apple's really giving you a choice not to.\n>\n> The long and short of this is that I'm unwilling to buy into maintaining\n> our own substitutes for standard autoconf probes in order to make it\n> possible to use the wrong SDK version. The preadv/pwritev case is already\n> messy enough, and I fear that trying to support such scenarios is going to\n> lead to more and more pain in the future.\n\nI found a cleaner alternative to the compile test that appears to work:\nhttps://postgr.es/m/20210122193230.25295-1-james.hilliard1%40gmail.com\n\nBest I can tell the target deployment version check logic requires that the\n<sys/uio.h> header be included in order for the check to function properly.\n\nIt seems we just need to avoid AC_REPLACE_FUNCS for these cases since\nit doesn't allow for passing headers.\n\n>\n> regards, tom lane\n>\n> [1] https://developer.apple.com/xcode/features/\n\n\n", "msg_date": "Fri, 22 Jan 2021 13:53:31 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On 22.01.2021 20:12, Tom Lane wrote:\n> [ pokes more carefully... 
] Ah-hah, I see why I needed the CLT.\n> I bet you'll find that you can't build from \"git clean -dfx\" state\n> with only Xcode, because comparing the contents of\n> /Applications/Xcode.app/Contents/Developer/usr/bin and\n> /Library/Developer/CommandLineTools/usr/bin on my own Mac,\n> I observe that only the CLT provides bison and flex. I also see\n> install_name_tool only in the CLT; we don't depend on that today,\n> but may soon (see the latest thread about coping with SIP).\n> \n\nI did git clone from scratch. Xcode really has all the tools.\n\nconfigure:9519: checking for bison\nconfigure:9537: found /usr/bin/bison\nconfigure:9549: result: /usr/bin/bison\nconfigure:9571: using bison (GNU Bison) 2.3\nconfigure:9609: checking for flex\nconfigure:9654: result: /usr/bin/flex\nconfigure:9674: using flex 2.5.35 Apple(flex-32)\n\n% xcrun --find bison\n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/bison\n\n% xcrun --find install_name_tool\n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/install_name_tool\n\n\n> On the whole it looks like we should recommend installing the CLT\n> and not bothering with Xcode, which is about 10X the size:\n> \n> $ du -hs /Library/Developer/CommandLineTools\n> 1.1G /Library/Developer/CommandLineTools\n> $ du -hs /Applications/Xcode.app\n> 15G /Applications/Xcode.app\n> \n\nFair.\n\n\n> BTW, reading [1] I see\n> \n> You can install Xcode, the CLT, or both; Homebrew supports all three\n> configurations.\n> \n> So I'm not sure why you got that prompt, unless you were using a formula\n> that knew you were going to need bison.\n> \n> [1] https://docs.brew.sh/Installation#3\n\nApparently, this documentation is wrong. 
I’m not installing any \nparticular formula, just running the Homebrew installer script.\n\n% /bin/bash -c \"$(curl -fsSL \nhttps://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"\nPassword:\n==> This script will install:\n[...]\n==> The following new directories will be created:\n[...]\n==> The Xcode Command Line Tools will be installed.\n\nPress RETURN to continue or any other key to abort\n\n==> Installing Command Line Tools for Xcode-12.3\n==> /usr/bin/sudo /usr/sbin/softwareupdate -i Command\\ Line\\ Tools\\ for\\ \nXcode-12.3\nSoftware Update Tool\n\nDownloading Command Line Tools for Xcode\n[...]\n\nI checked the script [1], and it really requires the CLT. Here is the \nexplanation [2] for this:\n\n\tThere is actually no such requirement. However, there are\n\tformulae that will be forced to build from source if you do not\n\thave the CLT. They can still be built from source with Xcode\n\tonly, but because the pre-built bottles are compiled in an\n\tenvironment that has both Xcode and the CLT installed, there are\n\tsome cases where the bottles end up having a hard dependency on\n\tthe CLT. A major example is gcc. So installing the CLT may help\n\tyou avoid some lengthy source builds.\n\n\tWe ensure that all Homebrew formulae can be built with Xcode.app\n\talone. Most formulae can be built with just the CLT, and those\n\tthat require the full Xcode.app have an explicit depends_on\n\t:xcode => :build. 
Some users would prefer to use only the CLT\n\tbecause it's a much smaller download and takes less time to\n\tinstall and upgrade than Xcode.\n\n\n[1] https://github.com/Homebrew/install/blob/master/install.sh#L191\n[2] https://github.com/Homebrew/brew/issues/1613\n\n\nRegards.\n\n-- \nSergey Shinderuk\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sat, 23 Jan 2021 08:02:01 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On 23.01.2021 08:02, Sergey Shinderuk wrote:\n> I checked the script [1], and it really requires the CLT. Here is the \n> explanation [2] for this:\n> \n>     There is actually no such requirement. However, there are\n>     formulae that will be forced to build from source if you do not\n>     have the CLT. They can still be built from source with Xcode\n>     only, but because the pre-built bottles are compiled in an\n>     environment that has both Xcode and the CLT installed, there are\n>     some cases where the bottles end up having a hard dependency on\n>     the CLT. A major example is gcc. So installing the CLT may help\n>     you avoid some lengthy source builds.\n> \n>     We ensure that all Homebrew formulae can be built with Xcode.app\n>     alone. Most formulae can be built with just the CLT, and those\n>     that require the full Xcode.app have an explicit depends_on\n>     :xcode => :build. Some users would prefer to use only the CLT\n>     because it's a much smaller download and takes less time to\n>     install and upgrade than Xcode.\n\n\nIn the gcc formula [1]:\n\n # The bottles are built on systems with the CLT installed, and do not \nwork\n # out of the box on Xcode-only systems due to an incorrect sysroot.\n pour_bottle? do\n reason \"The bottle needs the Xcode CLT to be installed.\"\n satisfy { MacOS::CLT.installed? 
}\n end\n\n\nI guess this is the \"xcrun --show-sdk-path\" thing we've already discussed :)\n\n\n[1] https://github.com/Homebrew/homebrew-core/blob/master/Formula/gcc.rb#L36\n\n-- \nSergey Shinderuk\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sat, 23 Jan 2021 08:12:16 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On 23.01.2021 08:02, Sergey Shinderuk wrote:\n>> On the whole it looks like we should recommend installing the CLT\n>> and not bothering with Xcode, which is about 10X the size:\n>>\n>> $ du -hs /Library/Developer/CommandLineTools\n>> 1.1G /Library/Developer/CommandLineTools\n>> $ du -hs /Applications/Xcode.app\n>> 15G /Applications/Xcode.app\n>>\n> \n> Fair.\n\nBTW, Homebrew prefers the CLT SDK:\nhttps://github.com/Homebrew/brew/blob/master/Library/Homebrew/os/mac.rb#L138\n\n # Prefer CLT SDK when both Xcode and the CLT are installed.\n # Expected results:\n # 1. On Xcode-only systems, return the Xcode SDK.\n # 2. On Xcode-and-CLT systems where headers are provided by the \nsystem, return nil.\n # 3. On CLT-only systems with no CLT SDK, return nil.\n # 4. On CLT-only systems with a CLT SDK, where headers are \nprovided by the system, return nil.\n # 5. On CLT-only systems with a CLT SDK, where headers are not \nprovided by the system, return the CLT SDK.\n\n\nHere is the relevant discussion:\nhttps://github.com/Homebrew/brew/pull/7134\n\nI like the example of Git compiled against the wrong \nLIBCURL_VERSION_NUM. 
Clearly, there are other issues with \ncross-compiling to a newer SDK, besides autoconf probes and weak imports.\n\n-- \nSergey Shinderuk\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sat, 23 Jan 2021 11:27:16 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On Sat, Jan 23, 2021 at 1:27 AM Sergey Shinderuk\n<s.shinderuk@postgrespro.ru> wrote:\n>\n> On 23.01.2021 08:02, Sergey Shinderuk wrote:\n> >> On the whole it looks like we should recommend installing the CLT\n> >> and not bothering with Xcode, which is about 10X the size:\n> >>\n> >> $ du -hs /Library/Developer/CommandLineTools\n> >> 1.1G /Library/Developer/CommandLineTools\n> >> $ du -hs /Applications/Xcode.app\n> >> 15G /Applications/Xcode.app\n> >>\n> >\n> > Fair.\n>\n> BTW, Homebrew prefers the CLT SDK:\n> https://github.com/Homebrew/brew/blob/master/Library/Homebrew/os/mac.rb#L138\n>\n> # Prefer CLT SDK when both Xcode and the CLT are installed.\n> # Expected results:\n> # 1. On Xcode-only systems, return the Xcode SDK.\n> # 2. On Xcode-and-CLT systems where headers are provided by the\n> system, return nil.\n> # 3. On CLT-only systems with no CLT SDK, return nil.\n> # 4. On CLT-only systems with a CLT SDK, where headers are\n> provided by the system, return nil.\n> # 5. On CLT-only systems with a CLT SDK, where headers are not\n> provided by the system, return the CLT SDK.\n>\n>\n> Here is the relevant discussion:\n> https://github.com/Homebrew/brew/pull/7134\n>\n> I like the example of Git compiled against the wrong\n> LIBCURL_VERSION_NUM. 
Clearly, there are other issues with\n> cross-compiling to a newer SDK, besides autoconf probes and weak imports.\n\n From my understanding homebrew considers supporting deployment targets\nother than that of the build host entirely out of scope, due to its\nnature of being\na meta build system homebrew probably needs to sidestep package target\ndeployment bugs in this way as they are likely to be quite common. Homebrew\nalso has to ensure compatibility with the binary bottles. Homebrew handles\nbackwards compatibility largely by using host OS specific binaries, which is not\nthe typical way package binaries are distributed on OSX.\n\nSo it appears that their reasoning for doing this may not be directly applicable\nto the situation we have with postgres as they have additional concerns. In\ntheory we should generally not have to worry about this much as long as target\ndeployment feature detection is functional as either SDK would generally work\nfor producing binaries that can run on the build host.\n\n>\n> --\n> Sergey Shinderuk\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n\n\n", "msg_date": "Sat, 23 Jan 2021 02:22:23 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On Fri, Jan 22, 2021 at 1:53 PM James Hilliard\n<james.hilliard1@gmail.com> wrote:\n>\n> On Thu, Jan 21, 2021 at 11:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > James Hilliard <james.hilliard1@gmail.com> writes:\n> > > On Wed, Jan 20, 2021 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> I'm not sure that the case of not having the \"command line tools\"\n> > >> installed is interesting for our purposes. 
AFAIK you have to have\n> > >> that in order to have access to required tools like bison and gmake.\n> > >> (That reminds me, I was intending to add something to our docs\n> > >> about how-to-build-from-source to say that you need to install those.)\n> >\n> > > Yeah, not 100% sure but I was able to build just fine after deleting my\n> > > command line tools.\n> >\n> > Hm. I've never been totally clear on what's included in the \"command line\n> > tools\", although it's now apparent that one thing that gets installed is\n> > an SDK matching the host OS version. However, Apple's description at [1]\n> > says\n> >\n> > Command Line Tools\n> >\n> > Download the macOS SDK, headers, and build tools such as the Apple\n> > LLVM compiler and Make. These tools make it easy to install open\n> > source software or develop on UNIX within Terminal. macOS can\n> > automatically download these tools the first time you try to build\n> > software, and they are available on the downloads page.\n> >\n> > which certainly strongly implies that gmake is not there otherwise.\n> > At this point I lack any \"bare\" macOS system to check it on. I wonder\n> > whether you have a copy of make available from MacPorts or Homebrew.\n> > Or maybe uninstalling the command line tools doesn't really remove\n> > everything?\n> >\n> > > It would be pretty annoying to have to install an outdated SDK just to\n> > > build postgres for no other reason than the autoconf feature detection\n> > > being broken.\n> >\n> > It's only as \"outdated\" as your host system ;-). Besides, it doesn't\n> > look like Apple's really giving you a choice not to.\n> >\n> > The long and short of this is that I'm unwilling to buy into maintaining\n> > our own substitutes for standard autoconf probes in order to make it\n> > possible to use the wrong SDK version. 
The preadv/pwritev case is already\n> > messy enough, and I fear that trying to support such scenarios is going to\n> > lead to more and more pain in the future.\n>\n> I found a cleaner alternative to the compile test that appears to work:\n> https://postgr.es/m/20210122193230.25295-1-james.hilliard1%40gmail.com\n>\n> Best I can tell the target deployment version check logic requires that the\n> <sys/uio.h> header be included in order for the check to function properly.\n>\n> It seems we just need to avoid AC_REPLACE_FUNCS for these cases since\n> it doesn't allow for passing headers.\n\nI did manage to verify that AC_REPLACE_FUNCS generates an incorrect conftest.c\nfor OSX which is why it is incorrectly detecting the availability of pwritev.\n\nconftest.c:\n/* confdefs.h */\n#define PACKAGE_NAME \"PostgreSQL\"\n#define PACKAGE_TARNAME \"postgresql\"\n#define PACKAGE_VERSION \"14devel\"\n#define PACKAGE_STRING \"PostgreSQL 14devel\"\n#define PACKAGE_BUGREPORT \"pgsql-bugs@lists.postgresql.org\"\n#define PACKAGE_URL \"https://www.postgresql.org/\"\n#define CONFIGURE_ARGS \"\"\n#define PG_MAJORVERSION \"14\"\n#define PG_MAJORVERSION_NUM 14\n#define PG_MINORVERSION_NUM 0\n#define PG_VERSION \"14devel\"\n#define DEF_PGPORT 5432\n#define DEF_PGPORT_STR \"5432\"\n#define BLCKSZ 8192\n#define RELSEG_SIZE 131072\n#define XLOG_BLCKSZ 8192\n#define ENABLE_THREAD_SAFETY 1\n#define PG_KRB_SRVNAM \"postgres\"\n#define STDC_HEADERS 1\n#define HAVE_SYS_TYPES_H 1\n#define HAVE_SYS_STAT_H 1\n#define HAVE_STDLIB_H 1\n#define HAVE_STRING_H 1\n#define HAVE_MEMORY_H 1\n#define HAVE_STRINGS_H 1\n#define HAVE_INTTYPES_H 1\n#define HAVE_STDINT_H 1\n#define HAVE_UNISTD_H 1\n#define HAVE_PTHREAD_PRIO_INHERIT 1\n#define HAVE_PTHREAD 1\n#define HAVE_STRERROR_R 1\n#define HAVE_GETPWUID_R 1\n#define STRERROR_R_INT 1\n#define HAVE_LIBM 1\n#define HAVE_LIBREADLINE 1\n#define HAVE_LIBZ 1\n#define HAVE_SPINLOCKS 1\n#define HAVE_ATOMICS 1\n#define HAVE__BOOL 1\n#define HAVE_STDBOOL_H 1\n#define 
HAVE_COPYFILE_H 1\n#define HAVE_EXECINFO_H 1\n#define HAVE_GETOPT_H 1\n#define HAVE_IFADDRS_H 1\n#define HAVE_LANGINFO_H 1\n#define HAVE_POLL_H 1\n#define HAVE_SYS_EVENT_H 1\n#define HAVE_SYS_IPC_H 1\n#define HAVE_SYS_RESOURCE_H 1\n#define HAVE_SYS_SELECT_H 1\n#define HAVE_SYS_SEM_H 1\n#define HAVE_SYS_SHM_H 1\n#define HAVE_SYS_SOCKIO_H 1\n#define HAVE_SYS_UIO_H 1\n#define HAVE_SYS_UN_H 1\n#define HAVE_TERMIOS_H 1\n#define HAVE_WCTYPE_H 1\n#define HAVE_NET_IF_H 1\n#define HAVE_SYS_UCRED_H 1\n#define HAVE_NETINET_TCP_H 1\n#define HAVE_READLINE_READLINE_H 1\n#define HAVE_READLINE_HISTORY_H 1\n#define PG_PRINTF_ATTRIBUTE printf\n#define HAVE_FUNCNAME__FUNC 1\n#define HAVE__STATIC_ASSERT 1\n#define HAVE_TYPEOF 1\n#define HAVE__BUILTIN_TYPES_COMPATIBLE_P 1\n#define HAVE__BUILTIN_CONSTANT_P 1\n#define HAVE__BUILTIN_UNREACHABLE 1\n#define HAVE_COMPUTED_GOTO 1\n#define HAVE_STRUCT_TM_TM_ZONE 1\n#define HAVE_UNION_SEMUN 1\n#define HAVE_STRUCT_SOCKADDR_UN 1\n#define HAVE_STRUCT_SOCKADDR_STORAGE 1\n#define HAVE_STRUCT_SOCKADDR_STORAGE_SS_FAMILY 1\n#define HAVE_STRUCT_SOCKADDR_STORAGE_SS_LEN 1\n#define HAVE_STRUCT_SOCKADDR_SA_LEN 1\n#define HAVE_STRUCT_ADDRINFO 1\n#define HAVE_LOCALE_T 1\n#define LOCALE_T_IN_XLOCALE 1\n#define restrict __restrict\n#define pg_restrict __restrict\n#define HAVE_STRUCT_OPTION 1\n#define HAVE_X86_64_POPCNTQ 1\n#define SIZEOF_OFF_T 8\n#define SIZEOF_BOOL 1\n#define PG_USE_STDBOOL 1\n#define HAVE_INT_TIMEZONE 1\n#define ACCEPT_TYPE_RETURN int\n#define ACCEPT_TYPE_ARG1 int\n#define ACCEPT_TYPE_ARG2 struct sockaddr *\n#define ACCEPT_TYPE_ARG3 socklen_t\n#define WCSTOMBS_L_IN_XLOCALE 1\n#define HAVE_BACKTRACE_SYMBOLS 1\n#define HAVE_CLOCK_GETTIME 1\n#define HAVE_COPYFILE 1\n#define HAVE_FDATASYNC 1\n#define HAVE_GETIFADDRS 1\n#define HAVE_GETRLIMIT 1\n#define HAVE_KQUEUE 1\n#define HAVE_MBSTOWCS_L 1\n#define HAVE_MEMSET_S 1\n#define HAVE_POLL 1\n#define HAVE_PTHREAD_IS_THREADED_NP 1\n#define HAVE_READLINK 1\n#define HAVE_READV 1\n#define HAVE_SETSID 
1\n#define HAVE_SHM_OPEN 1\n#define HAVE_STRSIGNAL 1\n#define HAVE_SYMLINK 1\n#define HAVE_USELOCALE 1\n#define HAVE_WCSTOMBS_L 1\n#define HAVE_WRITEV 1\n#define HAVE__BUILTIN_BSWAP16 1\n#define HAVE__BUILTIN_BSWAP32 1\n#define HAVE__BUILTIN_BSWAP64 1\n#define HAVE__BUILTIN_CLZ 1\n#define HAVE__BUILTIN_CTZ 1\n#define HAVE__BUILTIN_POPCOUNT 1\n#define HAVE_FSEEKO 1\n#define HAVE_DECL_POSIX_FADVISE 0\n#define HAVE_DECL_FDATASYNC 0\n#define HAVE_DECL_STRLCAT 1\n#define HAVE_DECL_STRLCPY 1\n#define HAVE_DECL_STRNLEN 1\n#define HAVE_DECL_F_FULLFSYNC 1\n#define HAVE_DECL_RTLD_GLOBAL 1\n#define HAVE_DECL_RTLD_NOW 1\n#define HAVE_IPV6 1\n#define HAVE_DLOPEN 1\n#define HAVE_FLS 1\n#define HAVE_GETOPT 1\n#define HAVE_GETPEEREID 1\n#define HAVE_GETRUSAGE 1\n#define HAVE_INET_ATON 1\n#define HAVE_LINK 1\n#define HAVE_MKDTEMP 1\n#define HAVE_PREAD 1\n#define HAVE_PREADV 1\n#define HAVE_PWRITE 1\n/* end confdefs.h. */\n/* Define pwritev to an innocuous variant, in case <limits.h> declares pwritev.\n For example, HP-UX 11i <limits.h> declares gettimeofday. */\n#define pwritev innocuous_pwritev\n\n/* System header to define __stub macros and hopefully few prototypes,\n which can conflict with char pwritev (); below.\n Prefer <limits.h> to <assert.h> if __STDC__ is defined, since\n <limits.h> exists even on freestanding compilers. */\n\n#ifdef __STDC__\n# include <limits.h>\n#else\n# include <assert.h>\n#endif\n\n#undef pwritev\n\n/* Override any GCC internal prototype to avoid an error.\n Use char because int might match the return type of a GCC\n builtin and then its argument prototype would still apply. */\n#ifdef __cplusplus\nextern \"C\"\n#endif\nchar pwritev ();\n/* The GNU C library defines this for functions which it implements\n to always fail with ENOSYS. Some functions are actually named\n something starting with __ and the normal name is an alias. 
*/\n#if defined __stub_pwritev || defined __stub___pwritev\nchoke me\n#endif\n\nint\nmain ()\n{\nreturn pwritev ();\n ;\n return 0;\n}\n\nThe correct declaration for pwritev on OSX is:\nssize_t pwritev(int, const struct iovec *, int, off_t)\n__DARWIN_NOCANCEL(pwritev) __API_AVAILABLE(macos(11.0), ios(14.0),\nwatchos(7.0), tvos(14.0));\nwhile the conftest.c generated by AC_REPLACE_FUNCS declares:\nchar pwritev ();\nwhich results in a broken conftest binary.\n\nOn OSX if the declaration is missing __API_AVAILABLE then the target\ndeployment version will not be checked properly and you might end up\nwith a broken binary.\n\n>\n> >\n> > regards, tom lane\n> >\n> > [1] https://developer.apple.com/xcode/features/\n\n\n", "msg_date": "Fri, 5 Feb 2021 20:53:09 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH 1/1] Fix detection of pwritev support for OSX." } ]
[ { "msg_contents": "Hi all,\n\nI was looking again at the thread that reported a problem when using\nALTER DEFAULT PRIVILEGES with duplicated object names:\nhttps://www.postgresql.org/message-id/ae2a7dc1-9d71-8cba-3bb9-e4cb7eb1f44e@hot.ee\n\nAnd while reviewing the thing, I have spotted that there is a specific\npath for pg_default_acl in RemoveRoleFromObjectACL() that has zero\ncoverage. This can be triggered with DROP OWNED BY, and it is\nactually safe to run as long as this is done in a separate transaction\nto avoid any interactions with parallel regression sessions.\nprivileges.sql already has similar tests, so I'd like to add some\ncoverage as per the attached (the duplicated role name is wanted).\n\nThoughts?\n--\nMichael", "msg_date": "Tue, 19 Jan 2021 21:30:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Some coverage for DROP OWNED BY with pg_default_acl" }, { "msg_contents": "On 2021-Jan-19, Michael Paquier wrote:\n\n> And while reviewing the thing, I have spotted that there is a specific\n> path for pg_default_acl in RemoveRoleFromObjectACL() that has zero\n> coverage. This can be triggered with DROP OWNED BY, and it is\n> actually safe to run as long as this is done in a separate transaction\n> to avoid any interactions with parallel regression sessions.\n> privileges.sql already has similar tests, so I'd like to add some\n> coverage as per the attached (the duplicated role name is wanted).\n\nHeh, interesting case. Added coverage is good, so +1.\nSince the role regress_priv_user2 is \"private\" to the privileges.sql\nscript, there's no danger of a concurrent test getting the added lines\nin trouble AFAICS.\n\n> +SELECT count(*) FROM pg_shdepend\n> + WHERE deptype = 'a' AND\n> + refobjid = 'regress_priv_user2'::regrole AND\n> +\tclassid = 'pg_default_acl'::regclass;\n> + count \n> +-------\n> + 5\n> +(1 row)\n\nShrug. 
Seems sufficient.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Tue, 19 Jan 2021 17:49:03 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Some coverage for DROP OWNED BY with pg_default_acl" }, { "msg_contents": "On Tue, Jan 19, 2021 at 05:49:03PM -0300, Alvaro Herrera wrote:\n> Heh, interesting case. Added coverage is good, so +1.\n\nThanks. I read through it again and applied the test.\n\n> Since the role regress_priv_user2 is \"private\" to the privileges.sql\n> script, there's no danger of a concurrent test getting the added lines\n> in trouble AFAICS.\n\nIt seems to me that it could lead to some trouble if a test running in\nparallel expects a set of ACLs with no extra noise, as this stuff adds\ndata to the catalogs for all objects created while the default\npermissions are visible. Perhaps that's an over-defensive position,\nbut it does not hurt either to be careful similarly to the test run a\ncouple of lines above.\n--\nMichael", "msg_date": "Wed, 20 Jan 2021 13:35:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Some coverage for DROP OWNED BY with pg_default_acl" } ]
[ { "msg_contents": "\nHi,\n\nWhen I review the [1], I find that the tuple's nulls array uses the char type.\nHowever, there are many places that use a boolean array to represent the nulls array,\nso I think we can replace the char type nulls array with a boolean type. This\nchange will break the SPI_xxx API; I'm not sure whether this change causes\nother problems or not. Any thought?\n\n[1] - https://www.postgresql.org/message-id/flat/CA+HiwqGkfJfYdeq5vHPh6eqPKjSbfpDDY+j-kXYFePQedtSLeg@mail.gmail.com\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Tue, 19 Jan 2021 22:06:38 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Use boolean array for nulls parameters" }, { "msg_contents": "I personally don't see any benefit in this change. The focus shouldn't be\non fixing things that aren't broken. Perhaps there is more value in using\na bitmap data type to keep track of NULL values, which is a typical storage\nvs performance debate, and IMHO, it's better to err on using slightly more\nstorage for much better performance. IIRC, the bitmap idea has previously\nbeen discussed and rejected too.\n\nOn Tue, Jan 19, 2021 at 7:07 PM japin <japinli@hotmail.com> wrote:\n\n>\n> Hi,\n>\n> When I review the [1], I find that the tuple's nulls array uses the char type.\n> However, there are many places that use a boolean array to represent the nulls array,\n> so I think we can replace the char type nulls array with a boolean type. This\n> change will break the SPI_xxx API; I'm not sure whether this change causes\n> other problems or not. 
Any thought?\n>\n> [1] -\n> https://www.postgresql.org/message-id/flat/CA+HiwqGkfJfYdeq5vHPh6eqPKjSbfpDDY+j-kXYFePQedtSLeg@mail.gmail.com\n>\n> --\n> Regards,\n> Japin Li.\n> ChengDu WenWu Information Technology Co.,Ltd.\n>\n>\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus", "msg_date": "Tue, 19 Jan 2021 20:01:31 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use boolean array for nulls parameters" }, { "msg_contents": "japin <japinli@hotmail.com> writes:\n> When I review the [1], I find that the tuple's nulls array uses the char type.\n> However, there are many places that use a boolean array to represent the nulls array,\n> so I think we can replace the char type nulls array with a boolean type. This\n> change will break the SPI_xxx API; I'm not sure whether this change causes\n> other problems or not. Any thought?\n\nWe have always considered that changing the APIs of published SPI\ninterfaces is a non-starter. 
The entire reason those calls still\n> exist at all is for the benefit of third-party extensions.\n>\n\nThanks for the clarification. I agree that we should keep the APIs stable; maybe we\ncan change this someday when the APIs must be changed.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 20 Jan 2021 10:26:19 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Use boolean array for nulls parameters" } ]
[ { "msg_contents": "I have a memory of the catalog not being MVCC,\nso maybe this is normal and expected,\nbut I wanted to report it in case it's not.\n\nWhen copying all tables in pg_catalog, to a separate schema with the purpose\nof testing if foreign keys could be added for all oid columns, I got an error for a toast table:\n\nERROR: insert or update on table \"pg_class\" violates foreign key constraint \"pg_class_reltype_fkey\"\nDETAIL: Key (reltype)=(86987582) is not present in table \"pg_type\".\nCONTEXT: SQL statement \"\n ALTER TABLE catalog_fks.pg_class ADD FOREIGN KEY (reltype) REFERENCES catalog_fks.pg_type (oid)\n \"\n\nThe copies of pg_catalog were executed in one and the same transaction,\nbut as separate queries in a PL/pgSQL function using EXECUTE.\n\n/Joel", "msg_date": "Tue, 19 Jan 2021 17:34:36 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "pg_class.reltype -> pg_type.oid missing for pg_toast table" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> When copying all tables in pg_catalog, to a separate schema with the purpose\n> of testing if foreign keys could be added for all oid columns, I got an error for a toast table:\n> ERROR: insert or update 
on table \"pg_class\" violates foreign key constraint \"pg_class_reltype_fkey\"\n> DETAIL: Key (reltype)=(86987582) is not present in table \"pg_type\".\n\nI'm too lazy to check the code right now, but my recollection is that we\ndo not bother to make composite-type entries for toast tables. However,\nthey should have reltype = 0 if so, so I'm not quite sure where the\nabove failure is coming from.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 11:43:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_class.reltype -> pg_type.oid missing for pg_toast table" }, { "msg_contents": "On Tue, Jan 19, 2021, at 17:43, Tom Lane wrote:\n>I'm too lazy to check the code right now, but my recollection is that we\n>do not bother to make composite-type entries for toast tables. However,\n>they should have reltype = 0 if so, so I'm not quite sure where the\n>above failure is coming from.\n\nMy apologies, false alarm.\n\nThe problem turned out to be due to doing\n\n CREATE TABLE catalog_fks.%1$I AS\n SELECT * FROM pg_catalog.%1$I\n\nwhich causes changes to e.g. pg_catalog.pg_class while the command is running.\n\nSolved by instead using COPY ... TO to first copy catalogs to files on disk,\nwhich doesn't cause changes to the catalogs,\nand then using COPY .. FROM to copy the data into the replicated table structures.\n\n/Joel", "msg_date": "Wed, 20 Jan 2021 07:00:37 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: pg_class.reltype -> pg_type.oid missing for pg_toast table" } ]
[ { "msg_contents": "Fixes:\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2 -I../../../../src/include -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk -c -o fd.o fd.c\nfd.c:3661:10: warning: 'pwritev' is only available on macOS 11.0 or newer [-Wunguarded-availability-new]\n part = pg_pwritev(fd, iov, iovcnt, offset);\n ^~~~~~~~~~\n../../../../src/include/port/pg_iovec.h:49:20: note: expanded from macro 'pg_pwritev'\n ^~~~~~~\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk/usr/include/sys/uio.h:104:9: note: 'pwritev' has been marked as being introduced in macOS 11.0\n here, but the deployment target is macOS 10.15.0\nssize_t pwritev(int, const struct iovec *, int, off_t) __DARWIN_NOCANCEL(pwritev) __API_AVAILABLE(macos(11.0), ios(14.0), watchos(7.0), tvos(14.0));\n ^\nfd.c:3661:10: note: enclose 'pwritev' in a __builtin_available check to silence this warning\n part = pg_pwritev(fd, iov, iovcnt, offset);\n ^~~~~~~~~~\n../../../../src/include/port/pg_iovec.h:49:20: note: expanded from macro 'pg_pwritev'\n ^~~~~~~\n1 warning generated.\n\nThis results in a runtime error:\nrunning bootstrap script ... 
dyld: lazy symbol binding failed: Symbol not found: _pwritev\n Referenced from: /usr/local/pgsql/bin/postgres\n Expected in: /usr/lib/libSystem.B.dylib\n\ndyld: Symbol not found: _pwritev\n Referenced from: /usr/local/pgsql/bin/postgres\n Expected in: /usr/lib/libSystem.B.dylib\n\nchild process was terminated by signal 6: Abort trap: 6\n\nTo fix this we set -Werror=unguarded-availability-new so that a compile\ntest for pwritev will fail if the symbol is unavailable on the requested\nSDK version.\n---\nChanges v1 -> v2:\n - Add AC_LIBOBJ(pwritev) when pwritev not available\n - set -Werror=unguarded-availability-new for CXX flags as well\n---\n configure | 145 ++++++++++++++++++++++++++++++++++++++++++++++-----\n configure.ac | 21 +++++++-\n 2 files changed, 152 insertions(+), 14 deletions(-)\n\ndiff --git a/configure b/configure\nindex 8af4b99021..662b0ae9ce 100755\n--- a/configure\n+++ b/configure\n@@ -5373,6 +5373,98 @@ if test x\"$pgac_cv_prog_CC_cflags__Werror_vla\" = x\"yes\"; then\n fi\n \n \n+ # Prevent usage of symbols marked as newer than our target.\n+\n+{ $as_echo \"$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports -Werror=unguarded-availability-new, for CFLAGS\" >&5\n+$as_echo_n \"checking whether ${CC} supports -Werror=unguarded-availability-new, for CFLAGS... \" >&6; }\n+if ${pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new+:} false; then :\n+ $as_echo_n \"(cached) \" >&6\n+else\n+ pgac_save_CFLAGS=$CFLAGS\n+pgac_save_CC=$CC\n+CC=${CC}\n+CFLAGS=\"${CFLAGS} -Werror=unguarded-availability-new\"\n+ac_save_c_werror_flag=$ac_c_werror_flag\n+ac_c_werror_flag=yes\n+cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n+/* end confdefs.h. 
*/\n+\n+int\n+main ()\n+{\n+\n+ ;\n+ return 0;\n+}\n+_ACEOF\n+if ac_fn_c_try_compile \"$LINENO\"; then :\n+ pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new=yes\n+else\n+ pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new=no\n+fi\n+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext\n+ac_c_werror_flag=$ac_save_c_werror_flag\n+CFLAGS=\"$pgac_save_CFLAGS\"\n+CC=\"$pgac_save_CC\"\n+fi\n+{ $as_echo \"$as_me:${as_lineno-$LINENO}: result: $pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new\" >&5\n+$as_echo \"$pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new\" >&6; }\n+if test x\"$pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new\" = x\"yes\"; then\n+ CFLAGS=\"${CFLAGS} -Werror=unguarded-availability-new\"\n+fi\n+\n+\n+ { $as_echo \"$as_me:${as_lineno-$LINENO}: checking whether ${CXX} supports -Werror=unguarded-availability-new, for CXXFLAGS\" >&5\n+$as_echo_n \"checking whether ${CXX} supports -Werror=unguarded-availability-new, for CXXFLAGS... \" >&6; }\n+if ${pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new+:} false; then :\n+ $as_echo_n \"(cached) \" >&6\n+else\n+ pgac_save_CXXFLAGS=$CXXFLAGS\n+pgac_save_CXX=$CXX\n+CXX=${CXX}\n+CXXFLAGS=\"${CXXFLAGS} -Werror=unguarded-availability-new\"\n+ac_save_cxx_werror_flag=$ac_cxx_werror_flag\n+ac_cxx_werror_flag=yes\n+ac_ext=cpp\n+ac_cpp='$CXXCPP $CPPFLAGS'\n+ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\n+ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\n+ac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n+\n+cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n+/* end confdefs.h. 
*/\n+\n+int\n+main ()\n+{\n+\n+ ;\n+ return 0;\n+}\n+_ACEOF\n+if ac_fn_cxx_try_compile \"$LINENO\"; then :\n+ pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new=yes\n+else\n+ pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new=no\n+fi\n+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext\n+ac_ext=c\n+ac_cpp='$CPP $CPPFLAGS'\n+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\n+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\n+ac_compiler_gnu=$ac_cv_c_compiler_gnu\n+\n+ac_cxx_werror_flag=$ac_save_cxx_werror_flag\n+CXXFLAGS=\"$pgac_save_CXXFLAGS\"\n+CXX=\"$pgac_save_CXX\"\n+fi\n+{ $as_echo \"$as_me:${as_lineno-$LINENO}: result: $pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new\" >&5\n+$as_echo \"$pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new\" >&6; }\n+if test x\"$pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new\" = x\"yes\"; then\n+ CXXFLAGS=\"${CXXFLAGS} -Werror=unguarded-availability-new\"\n+fi\n+\n+\n # -Wvla is not applicable for C++\n \n { $as_echo \"$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports -Wendif-labels, for CFLAGS\" >&5\n@@ -15715,6 +15807,46 @@ $as_echo \"#define HAVE_PS_STRINGS 1\" >>confdefs.h\n \n fi\n \n+{ $as_echo \"$as_me:${as_lineno-$LINENO}: checking for pwritev\" >&5\n+$as_echo_n \"checking for pwritev... \" >&6; }\n+cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n+/* end confdefs.h. 
*/\n+#ifdef HAVE_SYS_TYPES_H\n+#include <sys/types.h>\n+#endif\n+#ifdef HAVE_SYS_UIO_H\n+#include <sys/uio.h>\n+#endif\n+int\n+main ()\n+{\n+struct iovec *iov;\n+off_t offset;\n+offset = 0;\n+pwritev(0, iov, 0, offset);\n+\n+ ;\n+ return 0;\n+}\n+_ACEOF\n+if ac_fn_c_try_compile \"$LINENO\"; then :\n+ { $as_echo \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\n+$as_echo \"yes\" >&6; }\n+\n+$as_echo \"#define HAVE_PWRITEV 1\" >>confdefs.h\n+\n+else\n+ { $as_echo \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\n+$as_echo \"no\" >&6; }\n+case \" $LIBOBJS \" in\n+ *\" pwritev.$ac_objext \"* ) ;;\n+ *) LIBOBJS=\"$LIBOBJS pwritev.$ac_objext\"\n+ ;;\n+esac\n+\n+fi\n+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext\n+\n ac_fn_c_check_func \"$LINENO\" \"dlopen\" \"ac_cv_func_dlopen\"\n if test \"x$ac_cv_func_dlopen\" = xyes; then :\n $as_echo \"#define HAVE_DLOPEN 1\" >>confdefs.h\n@@ -15871,19 +16003,6 @@ esac\n \n fi\n \n-ac_fn_c_check_func \"$LINENO\" \"pwritev\" \"ac_cv_func_pwritev\"\n-if test \"x$ac_cv_func_pwritev\" = xyes; then :\n- $as_echo \"#define HAVE_PWRITEV 1\" >>confdefs.h\n-\n-else\n- case \" $LIBOBJS \" in\n- *\" pwritev.$ac_objext \"* ) ;;\n- *) LIBOBJS=\"$LIBOBJS pwritev.$ac_objext\"\n- ;;\n-esac\n-\n-fi\n-\n ac_fn_c_check_func \"$LINENO\" \"random\" \"ac_cv_func_random\"\n if test \"x$ac_cv_func_random\" = xyes; then :\n $as_echo \"#define HAVE_RANDOM 1\" >>confdefs.h\ndiff --git a/configure.ac b/configure.ac\nindex 868a94c9ba..724881a7f0 100644\n--- a/configure.ac\n+++ b/configure.ac\n@@ -494,6 +494,9 @@ if test \"$GCC\" = yes -a \"$ICC\" = no; then\n AC_SUBST(PERMIT_DECLARATION_AFTER_STATEMENT)\n # Really don't want VLAs to be used in our dialect of C\n PGAC_PROG_CC_CFLAGS_OPT([-Werror=vla])\n+ # Prevent usage of symbols marked as newer than our target.\n+ PGAC_PROG_CC_CFLAGS_OPT([-Werror=unguarded-availability-new])\n+ PGAC_PROG_CXX_CFLAGS_OPT([-Werror=unguarded-availability-new])\n # -Wvla is not applicable for C++\n 
PGAC_PROG_CC_CFLAGS_OPT([-Wendif-labels])\n PGAC_PROG_CXX_CFLAGS_OPT([-Wendif-labels])\n@@ -1726,6 +1729,23 @@ if test \"$pgac_cv_var_PS_STRINGS\" = yes ; then\n AC_DEFINE([HAVE_PS_STRINGS], 1, [Define to 1 if the PS_STRINGS thing exists.])\n fi\n \n+AC_MSG_CHECKING([for pwritev])\n+AC_COMPILE_IFELSE([AC_LANG_PROGRAM(\n+[#ifdef HAVE_SYS_TYPES_H\n+#include <sys/types.h>\n+#endif\n+#ifdef HAVE_SYS_UIO_H\n+#include <sys/uio.h>\n+#endif],\n+[struct iovec *iov;\n+off_t offset;\n+offset = 0;\n+pwritev(0, iov, 0, offset);\n+])], [AC_MSG_RESULT(yes)\n+AC_DEFINE([HAVE_PWRITEV], 1, [Define to 1 if you have the `pwritev' function.])],\n+[AC_MSG_RESULT(no)\n+AC_LIBOBJ(pwritev)])\n+\n AC_REPLACE_FUNCS(m4_normalize([\n \tdlopen\n \texplicit_bzero\n@@ -1739,7 +1759,6 @@ AC_REPLACE_FUNCS(m4_normalize([\n \tpread\n \tpreadv\n \tpwrite\n-\tpwritev\n \trandom\n \tsrandom\n \tstrlcat\n-- \n2.30.0\n\n\n\n", "msg_date": "Tue, 19 Jan 2021 09:54:35 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH v2 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> Fixes:\n> fd.c:3661:10: warning: 'pwritev' is only available on macOS 11.0 or newer [-Wunguarded-availability-new]\n\nIt's still missing preadv, and it still has nonzero chance of breaking\nsuccessful detection of pwritev on platforms other than yours, and it's\nstill really ugly.\n\nBut the main reason I don't want to go this way is that I don't think\nit'll stop with preadv/pwritev. If we make it our job to build\nsuccessfully even when using the wrong SDK version for the target\nplatform, we're going to be in for more and more pain with other\nkernel APIs.\n\nWe could, of course, do what Apple wants us to do and try to build\nexecutables that work across versions. I do not intend to put up\nwith the sort of invasive, error-prone source-code-level runtime test\nthey recommend ... 
but given that there is weak linking involved here,\nI wonder if there is a way to silently sub in src/port/pwritev.c\nwhen executing on a pre-11 macOS, by dint of marking it a weak\nsymbol?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 12:29:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v2 1/1] Fix detection of pwritev support for OSX." }, { "msg_contents": "On Tue, Jan 19, 2021 at 10:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Hilliard <james.hilliard1@gmail.com> writes:\n> > Fixes:\n> > fd.c:3661:10: warning: 'pwritev' is only available on macOS 11.0 or newer [-Wunguarded-availability-new]\n>\n> It's still missing preadv, and it still has nonzero chance of breaking\n> successful detection of pwritev on platforms other than yours, and it's\n> still really ugly.\nSetting -Werror=unguarded-availability-new should in theory always\nensure that configure checks fail if the symbol is unavailable or marked\nas requiring a target newer than the MACOSX_DEPLOYMENT_TARGET.\n>\n> But the main reason I don't want to go this way is that I don't think\n> it'll stop with preadv/pwritev. 
If we make it our job to build\n> successfully even when using the wrong SDK version for the target\n> platform, we're going to be in for more and more pain with other\n> kernel APIs.\nThis issue really has nothing to do with the SDK version at all; it's the\nMACOSX_DEPLOYMENT_TARGET that matters, which must be taken\ninto account during configure in some way. This is what my patch does\nby triggering the pwritev compile test error by setting\n-Werror=unguarded-availability-new.\n\nIt's expected that MACOSX_DEPLOYMENT_TARGET=10.15 with a\nMacOSX11.1.sdk will produce a binary that can run on OSX 10.15.\n\nThe MacOSX11.1.sdk is not the wrong SDK for a 10.15 target and\nis fully capable of producing 10.15 compatible binaries.\n>\n> We could, of course, do what Apple wants us to do and try to build\n> executables that work across versions. I do not intend to put up\n> with the sort of invasive, error-prone source-code-level runtime test\n> they recommend ... but given that there is weak linking involved here,\n> I wonder if there is a way to silently sub in src/port/pwritev.c\n> when executing on a pre-11 macOS, by dint of marking it a weak\n> symbol?\nThe check I added is strictly a compile time check still, not runtime.\n\nI also don't think this is a weak symbol.\n\n From the header file, it does not have __attribute__((weak_import)):\nssize_t pwritev(int, const struct iovec *, int, off_t)\n__DARWIN_NOCANCEL(pwritev) __API_AVAILABLE(macos(11.0), ios(14.0),\nwatchos(7.0), tvos(14.0));\n>\n> regards, tom lane", "msg_date": "Tue, 19 Jan 2021 10:55:46 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH v2 1/1] Fix detection of pwritev support for OSX." 
}, { "msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> I also don't think this is a weak symbol.\n\n> From the header file, it does not have __attribute__((weak_import)):\n> ssize_t pwritev(int, const struct iovec *, int, off_t)\n> __DARWIN_NOCANCEL(pwritev) __API_AVAILABLE(macos(11.0), ios(14.0),\n> watchos(7.0), tvos(14.0));\n\nSee the other thread. I found by looking at the asm output that\nwhat __API_AVAILABLE actually does is cause the compiler to emit\na \".weak_reference\" directive when calling a function it thinks\nmight not be available. So there's some sort of weak linking\ngoing on, though it's certainly possible that it's not shaped\nin a way that'd help us do this the way we'd prefer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 16:00:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v2 1/1] Fix detection of pwritev support for OSX." } ]
[ { "msg_contents": "Do you know if the old travis build environment had liblz4 installed ?\n\nI'm asking regarding Dilip's patch, which was getting to \"check world\" 2 weeks\nago but now failing to even compile, not apparently due to any change in the\npatch. Also, are the historic logs available somewhere ?\nhttp://cfbot.cputube.org/dilip-kumar.html\n\nAlso, what's the process for having new libraries installed in the CI\nenvironment ?\n\nThere's 3 compression patches going around, so I think eventually we'll ask to\nget libzstd-devel (for libpq and pg_dump) and liblz4-devel (for toast and\nlibpq). Maybe all compression methods would be supported in each place - I\nhope the patches will share common code.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 19 Jan 2021 14:56:43 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "compression libraries and CF bot" }, { "msg_contents": "On Wed, Jan 20, 2021 at 9:56 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Do you know if the old travis build environment had liblz4 installed ?\n\nIt sounds like it.\n\n> I'm asking regarding Dilip's patch, which was getting to \"check world\" 2 weeks\n> ago but now failing to even compile, not apparently due to any change in the\n> patch. Also, are the historic logs available somewhere ?\n> http://cfbot.cputube.org/dilip-kumar.html\n\nI can find some of them but not that one, because Travis's \"branches\"\npage truncates well before our ~250 active branches, and that one\nisn't in there.\n\nhttps://travis-ci.org/github/postgresql-cfbot/postgresql/branches\n\n> Also, what's the process for having new libraries installed in the CI\n> environment ?\n\nI have added lz4 to the FreeBSD and Ubuntu build tasks, so we'll see\nif that helps at the next periodic build or when a new patch is\nposted. It's failing on Windows because there is no HAVE_LIBLZ4 in\nSolution.pm, and I don't know how to install that on a Mac. 
Is this\npatch supposed to be adding a new required dependency, or a new\noptional dependency?\n\nIn general, you could ask for changes here, or send me a pull request for eg:\n\nhttps://github.com/macdice/cfbot/blob/master/cirrus/.cirrus.yml\n\nIf we eventually think the CI control file is good enough, and can get\npast the various political discussions required to put CI\nvendor-specific material in our tree, it'd be just a regular patch\nproposal and could even be tweaked as part of a feature submission.\n\n> There's 3 compression patches going around, so I think eventually we'll ask to\n> get libzstd-devel (for libpq and pg_dump) and liblz4-devel (for toast and\n> libpq). Maybe all compression methods would be supported in each place - I\n> hope the patches will share common code.\n\n+1, nice to see modern compression coming to PostgreSQL.\n\n\n", "msg_date": "Wed, 20 Jan 2021 10:29:05 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compression libraries and CF bot" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Jan 20, 2021 at 9:56 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> Also, what's the process for having new libraries installed in the CI\n>> environment ?\n\n> I have added lz4 to the FreeBSD and Ubuntu build tasks, so we'll see\n> if that helps at the next periodic build or when a new patch is\n> posted. It's failing on Windows because there is no HAVE_LIBLZ4 in\n> Solution.pm, and I don't know how to install that on a Mac. 
Is this\n> patch supposed to be adding a new required dependency, or a new\n> optional dependency?\n\nIt had better be optional.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jan 2021 16:33:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compression libraries and CF bot" }, { "msg_contents": "On Wed, Jan 20, 2021 at 10:29:05AM +1300, Thomas Munro wrote:\n> I have added lz4 to the FreeBSD and Ubuntu build tasks, so we'll see\n> if that helps at the next periodic build or when a new patch is\n> posted. It's failing on Windows because there is no HAVE_LIBLZ4 in\n> Solution.pm, and I don't know how to install that on a Mac.\n\nFor mac, does it just need this ?\nbrew install lz4\n\nDilip's TOAST patch is passing on linux and bsd --with-lz4, so I think it's\ndesirable to install on mac now.\n\nlibzstd would be desirable for linux/bsd/mac for Konstantin's libpq patch, and\nmy pg_dump patch.\nhttps://formulae.brew.sh/formula/zstd\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 20 Feb 2021 09:30:27 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: compression libraries and CF bot" }, { "msg_contents": "On Sun, Feb 21, 2021 at 4:30 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Dilip's TOAST patch is passing on linux and bsd --with-lz4, so I think it's\n> desirable to install on mac now.\n\nJustin figured this out, so now this patch is using lz4 and passing on\nLinux, FreeBSD and macOS.\n\n> libzstd would be desirable for linux/bsd/mac for Konstantin's libpq patch, and\n> my pg_dump patch.\n> https://formulae.brew.sh/formula/zstd\n\nI think I have added that too; let's see what happens :-)\n\nNot done for Windows. 
It's probably easy with choco, though I assume\nnone of these patches have the right bits and pieces in our Perl build\nscripting for Windows...\n\n\n", "msg_date": "Fri, 12 Mar 2021 13:44:26 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: compression libraries and CF bot" } ]
[ { "msg_contents": "Hi Thomas,\n\nI am wondering if the cfbot at the moment is building the docs \n(html+pdf), for the patches that it tests. I suppose that it does? If \nso, what happens with the resulting (doc)files? To /dev/null? They are \nnot available as far as I can see. Would it be feasible to make them \navailable, either serving the html, or to make docs html+pdf a \ndownloadable zipfile?\n\n(it would also be useful to be able see at a glance somewhere if the \npatch contains sgml-changes at all...)\n\n\nThanks,\n\nErik Rijkers\n\n\n", "msg_date": "Tue, 19 Jan 2021 22:22:14 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "cfbot building docs - serving results" }, { "msg_contents": "On Wed, Jan 20, 2021 at 10:22 AM Erik Rijkers <er@xs4all.nl> wrote:\n> I am wondering if the cfbot at the moment is building the docs\n> (html+pdf), for the patches that it tests. I suppose that it does? If\n> so, what happens with the resulting (doc)files? To /dev/null? They are\n> not available as far as I can see. Would it be feasible to make them\n> available, either serving the html, or to make docs html+pdf a\n> downloadable zipfile?\n\nIt does build the docs as part of the Linux build. 
I picked that\nbecause Cirrus has more Linux horsepower available than the other\nOSes, and there's no benefit to doing that on all the OSes.\n\nThat's a good idea, and I suspect it could be handled as an\n\"artifact\", though I haven't looked into that:\n\nhttps://cirrus-ci.org/guide/writing-tasks/#artifacts-instruction\n\nIt'd also be nice to (somehow) know which .html pages changed so you\ncould go straight to the new stuff without the intermediate step of\nwondering where .sgml changes come out!\n\nAnother good use for artifacts that I used once or twice is the\nability to allow the results of the Windows build to be downloaded in\na .zip file and tested by non-developers without the build tool chain.\n\n> (it would also be useful to be able see at a glance somewhere if the\n> patch contains sgml-changes at all...)\n\nTrue. Basically you want to be able to find the diffstat output quickly.\n\n\n", "msg_date": "Wed, 20 Jan 2021 10:38:47 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cfbot building docs - serving results" } ]
[ { "msg_contents": "JSON parsing reports the line number and relevant context info\nincorrectly when the JSON contains newlines. Current code mostly just\nsays \"LINE 1\" and is misleading for error correction. There were no\ntests for this previously.\n\nProposed changes mean a JSON error such as this\n{\n \"one\": 1,\n \"two\":,\"two\", <-- extra comma\n \"three\": true\n}\n\nwas previously reported as\n\nCONTEXT: JSON data, line 1: {\n\"one\": 1,\n\"two\":,...\n\nshould be reported as\n\nCONTEXT: JSON data, line 3: \"two\":,...\n\nAttached patches:\nHEAD: json_error_context.v3.patch - applies cleanly, passes make check\nPG13: json_error_context.v3.patch - applies w minor fuzz, passes make check\nPG12: json_error_context.v3.PG12.patch - applies cleanly, passes make check\nPG11: json_error_context.v3.PG12.patch - applies cleanly, not tested\nPG10: json_error_context.v3.PG12.patch - applies cleanly, not tested\nPG9.6: json_error_context.v3.PG12.patch - applies cleanly, not tested\nPG9.5: json_error_context.v3.PG12.patch - applies cleanly, not tested\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Wed, 20 Jan 2021 06:58:09 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Bug in error reporting for multi-line JSON" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> JSON parsing reports the line number and relevant context info\n> incorrectly when the JSON contains newlines. Current code mostly just\n> says \"LINE 1\" and is misleading for error correction. There were no\n> tests for this previously.\n\nCouple thoughts:\n\n* I think you are wrong to have removed the line number bump that\nhappened when report_json_context advances context_start over a\nnewline. The case is likely harder to get to now, but it can still\nhappen can't it? 
If it can't, we should remove that whole stanza.\n\n* I'd suggest naming the new JsonLexContext field \"pos_last_newline\";\n\"linefeed\" is not usually the word we use for this concept. (Although\nactually, it might work better if you make that point to the char\n*after* the newline, in which case \"last_linestart\" might be the\nright name.)\n\n* I'm not enthused about back-patching. This behavior seems like an\nimprovement, but that doesn't mean people will appreciate changing it\nin stable branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Jan 2021 13:08:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in error reporting for multi-line JSON" }, { "msg_contents": "On Thu, Jan 21, 2021 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > JSON parsing reports the line number and relevant context info\n> > incorrectly when the JSON contains newlines. Current code mostly just\n> > says \"LINE 1\" and is misleading for error correction. There were no\n> > tests for this previously.\n>\n> Couple thoughts:\n>\n> * I think you are wrong to have removed the line number bump that\n> happened when report_json_context advances context_start over a\n> newline. The case is likely harder to get to now, but it can still\n> happen can't it? If it can't, we should remove that whole stanza.\n\nOK, I'm playing around with this to see what is needed.\n\n> * I'd suggest naming the new JsonLexContext field \"pos_last_newline\";\n> \"linefeed\" is not usually the word we use for this concept. (Although\n> actually, it might work better if you make that point to the char\n> *after* the newline, in which case \"last_linestart\" might be the\n> right name.)\n\nYes, OK\n\n> * I'm not enthused about back-patching. 
This behavior seems like an\n> improvement, but that doesn't mean people will appreciate changing it\n> in stable branches.\n\nOK, as you wish.\n\nThanks for the review, will post again soon with an updated patch.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 25 Jan 2021 13:08:15 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Bug in error reporting for multi-line JSON" }, { "msg_contents": "On Mon, Jan 25, 2021 at 6:08 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Thu, Jan 21, 2021 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > > JSON parsing reports the line number and relevant context info\n> > > incorrectly when the JSON contains newlines. Current code mostly just\n> > > says \"LINE 1\" and is misleading for error correction. There were no\n> > > tests for this previously.\n> >\n> > Couple thoughts:\n> >\n> > * I think you are wrong to have removed the line number bump that\n> > happened when report_json_context advances context_start over a\n> > newline. The case is likely harder to get to now, but it can still\n> > happen can't it? If it can't, we should remove that whole stanza.\n>\n> OK, I'm playing around with this to see what is needed.\n>\n> > * I'd suggest naming the new JsonLexContext field \"pos_last_newline\";\n> > \"linefeed\" is not usually the word we use for this concept. (Although\n> > actually, it might work better if you make that point to the char\n> > *after* the newline, in which case \"last_linestart\" might be the\n> > right name.)\n>\n> Yes, OK\n>\n> > * I'm not enthused about back-patching. This behavior seems like an\n> > improvement, but that doesn't mean people will appreciate changing it\n> > in stable branches.\n>\n> OK, as you wish.\n>\n> Thanks for the review, will post again soon with an updated patch.\n>\n\nI agree with Tom's feedback.. 
Whether you change pos_last_linefeed to point\nto the character after the linefeed or not, we can still simplify the for\nloop within the \"report_json_context\" function to:\n\n=================\ncontext_start = lex->input + lex->pos_last_linefeed;\ncontext_start += (*context_start == '\\n'); /* Let's move beyond the\nlinefeed */\ncontext_end = lex->token_terminator;\nline_start = context_start;\nwhile (context_end - context_start >= 50 && context_start < context_end)\n{\n/* Advance to next multibyte character */\nif (IS_HIGHBIT_SET(*context_start))\ncontext_start += pg_mblen(context_start);\nelse\ncontext_start++;\n}\n=================\n\nIMHO, this should work as pos_last_linefeed points to the position of the\nlast linefeed before the error occurred, hence we can safely skip it and\nmove the start_context forward.\n\n\n\n>\n> --\n> Simon Riggs http://www.EnterpriseDB.com/\n>\n>\n>\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus\n\nOn Mon, Jan 25, 2021 at 6:08 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:On Thu, Jan 21, 2021 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > JSON parsing reports the line number and relevant context info\n> > incorrectly when the JSON contains newlines. Current code mostly just\n> > says \"LINE 1\" and is misleading for error correction. There were no\n> > tests for this previously.\n>\n> Couple thoughts:\n>\n> * I think you are wrong to have removed the line number bump that\n> happened when report_json_context advances context_start over a\n> newline.  The case is likely harder to get to now, but it can still\n> happen can't it?  
If it can't, we should remove that whole stanza.\n\nOK, I'm playing around with this to see what is needed.\n\n> * I'd suggest naming the new JsonLexContext field \"pos_last_newline\";\n> \"linefeed\" is not usually the word we use for this concept.  (Although\n> actually, it might work better if you make that point to the char\n> *after* the newline, in which case \"last_linestart\" might be the\n> right name.)\n\nYes, OK\n\n> * I'm not enthused about back-patching.  This behavior seems like an\n> improvement, but that doesn't mean people will appreciate changing it\n> in stable branches.\n\nOK, as you wish.\n\nThanks for the review, will post again soon with an updated patch.I agree with Tom's feedback.. Whether you change pos_last_linefeed to point to the character after the linefeed or not, we can still simplify the for loop within the \"report_json_context\" function to:=================\tcontext_start = lex->input + lex->pos_last_linefeed;\tcontext_start += (*context_start == '\\n'); /* Let's move beyond the linefeed */\tcontext_end = lex->token_terminator;\tline_start = context_start;\twhile (context_end - context_start >= 50 && context_start < context_end)\t{\t\t/* Advance to next multibyte character */\t\tif (IS_HIGHBIT_SET(*context_start))\t\t\tcontext_start += pg_mblen(context_start);\t\telse\t\t\tcontext_start++;\t}=================IMHO, this should work as pos_last_linefeed points to the position of the last linefeed before the error occurred, hence we can safely skip it and move the start_context forward. 
\n\n-- \nSimon Riggs                http://www.EnterpriseDB.com/\n\n\n-- Highgo Software (Canada/China/Pakistan)URL : www.highgo.caADDR: 10318 WHALLEY BLVD, Surrey, BCCELL:+923335449950  EMAIL: mailto:hamid.akhtar@highgo.caSKYPE: engineeredvirus", "msg_date": "Tue, 26 Jan 2021 14:07:09 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in error reporting for multi-line JSON" }, { "msg_contents": "On Tue, Jan 26, 2021 at 2:07 PM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n\n>\n>\n> On Mon, Jan 25, 2021 at 6:08 PM Simon Riggs <simon.riggs@enterprisedb.com>\n> wrote:\n>\n>> On Thu, Jan 21, 2021 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >\n>> > Simon Riggs <simon.riggs@enterprisedb.com> writes:\n>> > > JSON parsing reports the line number and relevant context info\n>> > > incorrectly when the JSON contains newlines. Current code mostly just\n>> > > says \"LINE 1\" and is misleading for error correction. There were no\n>> > > tests for this previously.\n>> >\n>> > Couple thoughts:\n>> >\n>> > * I think you are wrong to have removed the line number bump that\n>> > happened when report_json_context advances context_start over a\n>> > newline. The case is likely harder to get to now, but it can still\n>> > happen can't it? If it can't, we should remove that whole stanza.\n>>\n>> OK, I'm playing around with this to see what is needed.\n>>\n>> > * I'd suggest naming the new JsonLexContext field \"pos_last_newline\";\n>> > \"linefeed\" is not usually the word we use for this concept. (Although\n>> > actually, it might work better if you make that point to the char\n>> > *after* the newline, in which case \"last_linestart\" might be the\n>> > right name.)\n>>\n>> Yes, OK\n>>\n>> > * I'm not enthused about back-patching. 
This behavior seems like an\n>> > improvement, but that doesn't mean people will appreciate changing it\n>> > in stable branches.\n>>\n>> OK, as you wish.\n>>\n>> Thanks for the review, will post again soon with an updated patch.\n>>\n>\n> I agree with Tom's feedback.. Whether you change pos_last_linefeed to\n> point to the character after the linefeed or not, we can still simplify the\n> for loop within the \"report_json_context\" function to:\n>\n> =================\n> context_start = lex->input + lex->pos_last_linefeed;\n> context_start += (*context_start == '\\n'); /* Let's move beyond the\n> linefeed */\n> context_end = lex->token_terminator;\n> line_start = context_start;\n> while (context_end - context_start >= 50 && context_start < context_end)\n> {\n> /* Advance to next multibyte character */\n> if (IS_HIGHBIT_SET(*context_start))\n> context_start += pg_mblen(context_start);\n> else\n> context_start++;\n> }\n> =================\n>\n> IMHO, this should work as pos_last_linefeed points to the position of the\n> last linefeed before the error occurred, hence we can safely skip it and\n> move the start_context forward.\n>\n>\nThis thread has been inactive for more than a month now.\n\nSo, I have reworked Simon's patch and incorporated Tom's feedback. 
The\nchanges include:\n- Changing the variable name from \"pos_last_linefeed\" to \"last_linestart\"\nas it now points to the character after the newline character,\n- The \"for\" loop in report_json_context function has been significantly\nsimplified and uses a while loop.\n\nThe attached patch is created against the current master branch.\n\n\n>\n>>\n>> --\n>> Simon Riggs http://www.EnterpriseDB.com/\n>>\n>>\n>>\n>\n> --\n> Highgo Software (Canada/China/Pakistan)\n> URL : www.highgo.ca\n> ADDR: 10318 WHALLEY BLVD, Surrey, BC\n> CELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\n> SKYPE: engineeredvirus\n>", "msg_date": "Sun, 28 Feb 2021 02:19:19 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in error reporting for multi-line JSON" }, { "msg_contents": "Updated the patch based on feedback.\n\nThe new status of this patch is: Needs review\n", "msg_date": "Sat, 27 Feb 2021 21:25:00 +0000", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in error reporting for multi-line JSON" }, { "msg_contents": "> On 27 Feb 2021, at 22:19, Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n\n> This thread has been inactive for more than a month now. \n> \n> So, I have reworked Simon's patch and incorporated Tom's feedback. The changes include:\n> - Changing the variable name from \"pos_last_linefeed\" to \"last_linestart\" as it now points to the character after the newline character,\n> - The \"for\" loop in report_json_context function has been significantly simplified and uses a while loop.\n> \n> The attached patch is created against the current master branch.\n\nThe updated version addresses the review comments, and passes all tests with no\ndocumentation updates required.
Playing around with it I was also unable to break\nit.\n\nI'm changing the status of this patch to Ready for Committer.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 1 Mar 2021 15:04:40 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Bug in error reporting for multi-line JSON" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> I'm changing the status of this patch to Ready for Committer.\n\nI reviewed this and pushed it, with some changes.\n\nI noted that there was a basically unused \"line_start\" field in\nJsonLexContext, which seems clearly to have been meant to track\nwhat the new field was going to track. So we can fix this without\nany new field by updating that at the right times.\n\nI thought putting jsonb tests into json.sql was a bit poorly\nthought out. I ended up adding parallel tests to both json.sql\nand jsonb.sql ... maybe that's overkill, but a lot of the rest\nof those scripts is duplicative too. The tests weren't exercising\nthe dots-at-start-of-line behavior, either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Mar 2021 16:50:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in error reporting for multi-line JSON" }, { "msg_contents": "On Tuesday, March 2, 2021 6:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Daniel Gustafsson <daniel@yesql.se> writes:\n>> I'm changing the status of this patch to Ready for Committer.\n>\n>I reviewed this and pushed it, with some changes.\n\nI think the while condition \"context_start < context_end\" added in commit ffd3944ab9 is useless.
Thoughts?\n\ncode added in ffd3944ab9\n+ while (context_end - context_start >= 50 && context_start < context_end)\n\nRegards,\nTang\n\n\n\n", "msg_date": "Wed, 25 Aug 2021 08:22:57 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Bug in error reporting for multi-line JSON" }, { "msg_contents": "> On 25 Aug 2021, at 10:22, tanghy.fnst@fujitsu.com wrote:\n> \n> On Tuesday, March 2, 2021 6:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> I'm changing the status of this patch to Ready for Committer.\n>> \n>> I reviewed this and pushed it, with some changes.\n> \n> I think the while condition \"context_start < context_end\" added in commit ffd3944ab9 is useless. Thoughts?\n> \n> code added in ffd3944ab9\n> + while (context_end - context_start >= 50 && context_start < context_end)\n\nJudging by the diff it’s likely a leftover from the previous coding. I don’t\nsee a case for when it would hit, but it also doesn’t seem to do any harm apart\nfrom potentially causing static analyzers to get angry.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 25 Aug 2021 10:56:31 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Bug in error reporting for multi-line JSON" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 25 Aug 2021, at 10:22, tanghy.fnst@fujitsu.com wrote:\n>> I think the while condition \"context_start < context_end\" added in commit ffd3944ab9 is useless. Thoughts?\n\n> Judging by the diff it’s likely a leftover from the previous coding. I don’t\n> see a case for when it would hit, but it also doesn’t seem to do any harm apart\n> from potentially causing static analyzers to get angry.\n\nYeah. 
I think that while reviewing this patch I read the while-condition\nas a range check on context_start, but it isn't --- both inequalities\nare in the same direction. I suppose there could be some quibble\nabout what happens if context_end - context_start is so large as to\noverflow an integer, but that's never gonna happen (and if it did,\nwe'd have other issues, for instance the lack of any check-for-interrupt\nin this loop).\n\nWill fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Aug 2021 10:55:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in error reporting for multi-line JSON" } ]
[ { "msg_contents": "Hi all,\n\nI am interested in figuring out how to get the names and types of the columns from an arbitrary query. Essentially, I want to be able to take a query like:\n\nCREATE TABLE foo(\n bar bigserial,\n baz varchar(256)\n);\n\nSELECT * FROM foo WHERE bar = 42;\n\nand figure out programmatically that the select will return a column \"bar\" of type bigserial, and a column \"foo\" of type varchar(256). I would like this to work for more complex queries as well (joins, CTEs, etc).\n\nI've found https://wiki.postgresql.org/wiki/Query_Parsing, which talks about related ways to hook into postgres, but that seems to only talk about the parse tree — a lot more detail and processing seems to be required in order to figure out the output types. It seems like there should be somewhere I can hook into in postgres that will get me this information, but I have no familiarity with the codebase, so I don't know the best way to get this.\n\nHow would you recommend that I approach this? I'm comfortable patching postgres if needed, although if there's a solution that doesn't require that, I'd prefer that.\n\nThanks,\n\n:w\n\n\n", "msg_date": "Wed, 20 Jan 2021 16:02:16 +0800", "msg_from": "\"Wesley Aptekar-Cassels\" <me@wesleyac.com>", "msg_from_op": true, "msg_subject": "Getting column names/types from select query?" }, { "msg_contents": "\"Wesley Aptekar-Cassels\" <me@wesleyac.com> writes:\n> I am interested in figuring out how to get the names and types of the\n> columns from an arbitrary query.\n\nWhere do you need this information? Usually the easiest way is to\nprepare (plan) the query and then extract metadata, for instance\nPQprepare and PQdescribePrepared if using libpq.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Jan 2021 11:46:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Getting column names/types from select query?" 
}, { "msg_contents": "> Where do you need this information?\n\nI'm writing some code that takes a given query, and generates type-safe bindings for it, so people can write SQL queries and get structs (or vectors of structs) out the other end. So I'm pretty flexible about where I get it, given that it'll be part of my build/codegen process. I hadn't seen libpq yet, I'll look into that — thanks!\n\n\n", "msg_date": "Thu, 21 Jan 2021 01:38:53 +0800", "msg_from": "\"Wesley Aptekar-Cassels\" <me@wesleyac.com>", "msg_from_op": true, "msg_subject": "Re: Getting column names/types from select query?" } ]
[ { "msg_contents": "Hello,\r\n\r\n\r\nWhile I'm investigating problems with parallel DML on another thread, I encountered a fishy behavior of EXPLAIN on HEAD. Is this a bug?\r\n\r\n\r\nAs follows, the rows and width values of Update node is 0. These were 1 and 10 respectively in versions 9.4.26 and 10.12 at hand.\r\n\r\n\r\npostgres=# create table a (c int);\r\nCREATE TABLE\r\npostgres=# insert into a values(1);\r\nINSERT 0 1\r\npostgres=# analyze a;\r\nANALYZE\r\npostgres=# begin;\r\nBEGIN\r\npostgres=*# explain analyze update a set c=2;\r\n QUERY PLAN \r\n--------------------------------------------------------------------------------------------------\r\n Update on a (cost=0.00..1.01 rows=0 width=0) (actual time=0.189..0.191 rows=0 loops=1)\r\n -> Seq Scan on a (cost=0.00..1.01 rows=1 width=10) (actual time=0.076..0.079 rows=1 loops=1)\r\n Planning Time: 0.688 ms\r\n Execution Time: 0.494 ms\r\n(4 rows)\r\n\r\n\r\nWith RETURNING, the values are not 0 as follows.\r\n\r\npostgres=*# rollback;\r\nROLLBACK\r\npostgres=# begin;\r\nBEGIN\r\npostgres=# explain analyze update a set c=2 returning *;\r\n QUERY PLAN \r\n--------------------------------------------------------------------------------------------------\r\n Update on a (cost=0.00..1.01 rows=1 width=10) (actual time=0.271..0.278 rows=1 loops=1)\r\n -> Seq Scan on a (cost=0.00..1.01 rows=1 width=10) (actual time=0.080..0.082 rows=1 loops=1)\r\n Planning Time: 0.308 ms\r\n Execution Time: 0.392 ms\r\n(4 rows)\r\n\r\nThe above holds true for Insert and Delete nodes as well.\r\n\r\nIn the manual, they are not 0.\r\n\r\nhttps://www.postgresql.org/docs/devel/using-explain.html\r\n--------------------------------------------------\r\nEXPLAIN ANALYZE UPDATE tenk1 SET hundred = hundred + 1 WHERE unique1 < 100;\r\n\r\n QUERY PLAN\r\n-------------------------------------------------------------------​-------------------------------------------------------------\r\n Update on tenk1 (cost=5.07..229.46 rows=101 width=250) 
(actual time=14.628..14.628 rows=0 loops=1)\r\n -> Bitmap Heap Scan on tenk1 (cost=5.07..229.46 rows=101 width=250) (actual time=0.101..0.439 rows=100 loops=1)\r\n...\r\n--------------------------------------------------\r\n\r\n\r\nThis behavior may possibly be considered as an intended behavior for the reason that Update/Insert/Delete nodes don't output rows without RETURNING. Is this a bug or a correct behavior?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Wed, 20 Jan 2021 08:12:34 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "[bug?] EXPLAIN outputs 0 for rows and width in cost estimate for\n update nodes" }, { "msg_contents": "On Wed, Jan 20, 2021 at 9:12 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> This behavior may possibly be considered as an intended behavior for the reason that Update/Insert/Delete nodes don't output rows without RETURNING. Is this a bug or a correct behavior?\n\nHi Tsunakawa-san,\n\nThis was a change made deliberately. Do you see a problem?\n\ncommit f0f13a3a08b2757997410f3a1c38bdc22973c525\nAuthor: Thomas Munro <tmunro@postgresql.org>\nDate: Mon Oct 12 20:41:16 2020 +1300\n\n Fix estimates for ModifyTable paths without RETURNING.\n\n In the past, we always estimated that a ModifyTable node would emit the\n same number of rows as its subpaths. Without a RETURNING clause, the\n correct estimate is zero. 
Fix, in preparation for a proposed parallel\n write patch that is sensitive to that number.\n\n A remaining problem is that for RETURNING queries, the estimated width\n is based on subpath output rather than the RETURNING tlist.\n\n Reviewed-by: Greg Nancarrow <gregn4422@gmail.com>\n Discussion: https://postgr.es/m/CAJcOf-cXnB5cnMKqWEp2E2z7Mvcd04iLVmV%3DqpFJr\nR3AcrTS3g%40mail.gmail.com\n\n\n", "msg_date": "Wed, 20 Jan 2021 21:20:58 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] EXPLAIN outputs 0 for rows and width in cost estimate for\n update nodes" }, { "msg_contents": "Hi Thomas-san,\r\n\r\nFrom: Thomas Munro <thomas.munro@gmail.com>\r\n> This was a change made deliberately. Do you see a problem?\r\n\r\nThank you, I was surprised at your very quick response. I just wanted to confirm I can believe EXPLAIN output. Then the problem is the sample output in the manual. The fix is attached.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa", "msg_date": "Wed, 20 Jan 2021 08:35:23 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [bug?] EXPLAIN outputs 0 for rows and width in cost estimate for\n update nodes" }, { "msg_contents": "On Wed, 2021-01-20 at 08:35 +0000, tsunakawa.takay@fujitsu.com wrote:\n> > This was a change made deliberately. Do you see a problem?\n> \n> Thank you, I was surprised at your very quick response.\n> I just wanted to confirm I can believe EXPLAIN output.\n> Then the problem is the sample output in the manual.\n> The fix is attached.\n\n+1. That was obviously an oversight.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 20 Jan 2021 09:42:35 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [bug?] EXPLAIN outputs 0 for rows and width in cost estimate\n for update nodes" }, { "msg_contents": "On Wed, Jan 20, 2021 at 9:42 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> On Wed, 2021-01-20 at 08:35 +0000, tsunakawa.takay@fujitsu.com wrote:\n> > > This was a change made deliberately. Do you see a problem?\n> >\n> > Thank you, I was surprised at your very quick response.\n> > I just wanted to confirm I can believe EXPLAIN output.\n> > Then the problem is the sample output in the manual.\n> > The fix is attached.\n>\n> +1. That was obviously an oversight.\n\nPushed. Thanks.\n\n\n", "msg_date": "Wed, 20 Jan 2021 22:49:58 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug?] EXPLAIN outputs 0 for rows and width in cost estimate for\n update nodes" } ]
[ { "msg_contents": "I just made the mistake of trying to run pgbench without first running\ncreatedb and got this:\n\npgbench: error: connection to database \"\" failed: could not connect to\nsocket \"/tmp/.s.PGSQL.5432\": FATAL: database \"rhaas\" does not exist\n\nThis looks pretty bogus because (1) I was not attempting to connect to\na database whose name is the empty string and (2) saying that it\ncouldn't connect to the socket is wrong, else it would not also be\nshowing a server message.\n\nI haven't investigated why this is happening; apologies if this is a\nknown issue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Jan 2021 12:08:41 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "strange error reporting" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I just made the mistake of trying to run pgbench without first running\n> createdb and got this:\n\n> pgbench: error: connection to database \"\" failed: could not connect to\n> socket \"/tmp/.s.PGSQL.5432\": FATAL: database \"rhaas\" does not exist\n\n> This looks pretty bogus because (1) I was not attempting to connect to\n> a database whose name is the empty string and (2) saying that it\n> couldn't connect to the socket is wrong, else it would not also be\n> showing a server message.\n\nI'm not sure about the empty DB name in the first part (presumably\nthat's from pgbench, so what was your pgbench command exactly?).\nBut the 'could not connect to socket' part is a consequence of my\nrecent fiddling with libpq's connection failure reporting, see\n52a10224e. We could discuss exactly how that ought to be spelled,\nbut the idea is to consistently identify the host that we were trying\nto connect to. 
If you have a multi-host connection string, it's\nconceivable that \"rhaas\" exists on some of those hosts and not others,\nso I do not think the info is irrelevant.\n\nJust looking at this, I wonder if we ought to drop pgbench's\ncontribution to the message entirely; it seems like libpq's\nmessage is now fairly freestanding.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Jan 2021 12:19:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "On Wed, Jan 20, 2021 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I just made the mistake of trying to run pgbench without first running\n> > createdb and got this:\n>\n> > pgbench: error: connection to database \"\" failed: could not connect to\n> > socket \"/tmp/.s.PGSQL.5432\": FATAL: database \"rhaas\" does not exist\n>\n> > This looks pretty bogus because (1) I was not attempting to connect to\n> > a database whose name is the empty string and (2) saying that it\n> > couldn't connect to the socket is wrong, else it would not also be\n> > showing a server message.\n>\n> I'm not sure about the empty DB name in the first part (presumably\n> that's from pgbench, so what was your pgbench command exactly?).\n\nI think it was just 'pgbench -i 40'. For sure, I didn't specify a database name.\n\n> But the 'could not connect to socket' part is a consequence of my\n> recent fiddling with libpq's connection failure reporting, see\n> 52a10224e. We could discuss exactly how that ought to be spelled,\n> but the idea is to consistently identify the host that we were trying\n> to connect to. If you have a multi-host connection string, it's\n> conceivable that \"rhaas\" exists on some of those hosts and not others,\n> so I do not think the info is irrelevant.\n\nI'm not saying that which socket I used is totally irrelevant although\nin most cases it's going to be a lot of detail. 
I'm just saying that,\nat least for me, when you say you can't connect to a socket, I at\nleast think about the return value of connect(2), which was clearly 0\nhere.\n\n> Just looking at this, I wonder if we ought to drop pgbench's\n> contribution to the message entirely; it seems like libpq's\n> message is now fairly freestanding.\n\nMaybe it would be better if it said:\n\nconnection to database at socket \"/tmp/.s.PGSQL.5432\" failed: FATAL:\ndatabase \"rhaas\" does not exist\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Jan 2021 12:34:30 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jan 20, 2021 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> But the 'could not connect to socket' part is a consequence of my\n>> recent fiddling with libpq's connection failure reporting, see\n>> 52a10224e. We could discuss exactly how that ought to be spelled,\n>> but the idea is to consistently identify the host that we were trying\n>> to connect to. If you have a multi-host connection string, it's\n>> conceivable that \"rhaas\" exists on some of those hosts and not others,\n>> so I do not think the info is irrelevant.\n\n> I'm not saying that which socket I used is totally irrelevant although\n> in most cases it's going to be a lot of detail. I'm just saying that,\n> at least for me, when you say you can't connect to a socket, I at\n> least think about the return value of connect(2), which was clearly 0\n> here.\n\nFair. 
One possibility, which'd take a few more cycles in libpq but\nlikely not anything significant, is to replace \"could not connect to ...\"\nwith \"while connecting to ...\" once we're past the connect() per se.\n\n> Maybe it would be better if it said:\n\n> connection to database at socket \"/tmp/.s.PGSQL.5432\" failed: FATAL:\n> database \"rhaas\" does not exist\n\nI'd be inclined to spell it \"connection to server at ... failed\",\nbut that sort of wording is surely also possible.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Jan 2021 12:47:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "On Wed, Jan 20, 2021 at 12:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Fair. One possibility, which'd take a few more cycles in libpq but\n> likely not anything significant, is to replace \"could not connect to ...\"\n> with \"while connecting to ...\" once we're past the connect() per se.\n\nYeah. I think this is kind of a client-side version of errcontext(),\nexcept we don't really have that context formally, so we're trying to\nfigure out how to fake it in specific cases.\n\n> > Maybe it would be better if it said:\n>\n> > connection to database at socket \"/tmp/.s.PGSQL.5432\" failed: FATAL:\n> > database \"rhaas\" does not exist\n>\n> I'd be inclined to spell it \"connection to server at ... 
failed\",\n> but that sort of wording is surely also possible.\n\n\"connection to server\" rather than \"connection to database\" works for\nme; in fact, I think I like it slightly better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Jan 2021 12:59:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "On 2021-Jan-20, Robert Haas wrote:\n\n> On Wed, Jan 20, 2021 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > I just made the mistake of trying to run pgbench without first running\n> > > createdb and got this:\n> >\n> > > pgbench: error: connection to database \"\" failed: could not connect to\n> > > socket \"/tmp/.s.PGSQL.5432\": FATAL: database \"rhaas\" does not exist\n> >\n> > > This looks pretty bogus because (1) I was not attempting to connect to\n> > > a database whose name is the empty string [...]\n> >\n> > I'm not sure about the empty DB name in the first part (presumably\n> > that's from pgbench, so what was your pgbench command exactly?).\n> \n> I think it was just 'pgbench -i 40'. For sure, I didn't specify a database name.\n\nThat's because pgbench reports the input argument dbname, but since you\ndidn't specify anything, then PQconnectdbParams() uses the libpq\nbehavior. I think we'd have to use PQdb() instead.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Wed, 20 Jan 2021 15:25:26 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "On Wed, Jan 20, 2021 at 1:25 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> That's because pgbench reports the input argument dbname, but since you\n> didn't specify anything, then PQconnectdbParams() uses the libpq\n> behavior. I think we'd have to use PQdb() instead.\n\nI figured it was something like that. 
I don't know whether the right\nthing is to use something like PQdb() to get the correct database\nname, or whether we should go with Tom's suggestion and omit that\ndetail altogether, but I think showing the empty string when the user\nrelied on the default is too confusing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Jan 2021 13:27:53 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "On 2021-Jan-20, Robert Haas wrote:\n\n> On Wed, Jan 20, 2021 at 1:25 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > That's because pgbench reports the input argument dbname, but since you\n> > didn't specify anything, then PQconnectdbParams() uses the libpq\n> > behavior. I think we'd have to use PQdb() instead.\n> \n> I figured it was something like that. I don't know whether the right\n> thing is to use something like PQdb() to get the correct database\n> name, or whether we should go with Tom's suggestion and omit that\n> detail altogether, but I think showing the empty string when the user\n> relied on the default is too confusing.\n\nWell, the patch seems small enough, and I don't think it'll be in any\nway helpful to omit that detail.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Having your biases confirmed independently is how scientific progress is\nmade, and hence made our great society what it is today\" (Mary Gardiner)", "msg_date": "Wed, 20 Jan 2021 15:44:46 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jan-20, Robert Haas wrote:\n>> I figured it was something like that.
I don't know whether the right\n>> thing is to use something like PQdb() to get the correct database\n>> name, or whether we should go with Tom's suggestion and omit that\n>> detail altogether, but I think showing the empty string when the user\n>> relied on the default is too confusing.\n\n> Well, the patch seems small enough, and I don't think it'll be in any\n> way helpful to omit that detail.\n\nI'm +1 for applying and back-patching that. I still think we might\nwant to just drop the phrase altogether in HEAD, but we wouldn't do\nthat in the back branches, and the message is surely misleading as-is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Jan 2021 13:54:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "On Wed, Jan 20, 2021 at 1:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2021-Jan-20, Robert Haas wrote:\n> >> I figured it was something like that. I don't know whether the right\n> >> thing is to use something like PQdb() to get the correct database\n> >> name, or whether we should go with Tom's suggestion and omit that\n> >> detail altogether, but I think showing the empty string when the user\n> >> relied on the default is too confusing.\n>\n> > Well, the patch seems small enough, and I don't think it'll be in any\n> > way helpful to omit that detail.\n>\n> I'm +1 for applying and back-patching that. 
I still think we might\n> want to just drop the phrase altogether in HEAD, but we wouldn't do\n> that in the back branches, and the message is surely misleading as-is.\n\nSure, that makes sense.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Jan 2021 14:44:11 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n>>> Maybe it would be better if it said:\n>>> connection to database at socket \"/tmp/.s.PGSQL.5432\" failed: FATAL:\n>>> database \"rhaas\" does not exist\n\n>> I'd be inclined to spell it \"connection to server at ... failed\",\n>> but that sort of wording is surely also possible.\n\n> \"connection to server\" rather than \"connection to database\" works for\n> me; in fact, I think I like it slightly better.\n\nIf I don't hear any other opinions, I'll change these messages to\n\n\"connection to server at socket \\\"%s\\\" failed: \"\n\"connection to server at \\\"%s\\\" (%s), port %s failed: \"\n\n(or maybe \"server on socket\"? \"at\" sounds right for the IP address\ncase, but it feels a little off in the socket pathname case.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Jan 2021 20:33:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "I wrote:\n> If I don't hear any other opinions, I'll change these messages to\n> \"connection to server at socket \\\"%s\\\" failed: \"\n> \"connection to server at \\\"%s\\\" (%s), port %s failed: \"\n\nDone. Also, here is a patch to remove the redundant-seeming prefixes\nfrom our reports of connection failures. My feeling that this is the\nright thing was greatly increased when I noticed that psql, as well as\na few other programs, already did it like this. 
(I still favor\nAlvaro's patch for the back branches, though.)\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 21 Jan 2021 17:03:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "On 2021-Jan-20, Robert Haas wrote:\n\n> On Wed, Jan 20, 2021 at 1:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> > > Well, the patch seems small enough, and I don't think it'll be in any\n> > > way helpful to omit that detail.\n> >\n> > I'm +1 for applying and back-patching that. I still think we might\n> > want to just drop the phrase altogether in HEAD, but we wouldn't do\n> > that in the back branches, and the message is surely misleading as-is.\n> \n> Sure, that makes sense.\n\nOK, I pushed it. Thanks,\n\npgbench has one occurrence of the old pattern in master, in line 6043.\nHowever, since doConnect() returns NULL when it gets CONNECTION_BAD,\nthat seems dead code. This patch kills it.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"I can see support will not be a problem. 10 out of 10.\" (Simon Wittber)\n (http://archives.postgresql.org/pgsql-general/2004-12/msg00159.php)", "msg_date": "Tue, 26 Jan 2021 16:52:24 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> pgbench has one occurrence of the old pattern in master, in line 6043.\n> However, since doConnect() returns NULL when it gets CONNECTION_BAD,\n> that seems dead code. This patch kills it.\n\nOh ... I missed that because it wasn't adjacent to the PQconnectdbParams\ncall :-(.
You're right, that's dead code and we should just delete it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Jan 2021 16:45:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "On 2021-Jan-26, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > pgbench has one occurrence of the old pattern in master, in line 6043.\n> > However, since doConnect() returns NULL when it gets CONNECTION_BAD,\n> > that seems dead code. This patch kills it.\n> \n> Oh ... I missed that because it wasn't adjacent to the PQconnectdbParams\n> call :-(. You're right, that's dead code and we should just delete it.\n\nPushed, thanks.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura\" (Perelandra, C.S.Lewis)\n\n\n", "msg_date": "Thu, 28 Jan 2021 12:55:00 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "On 21.01.21 02:33, Tom Lane wrote:\n>>> I'd be inclined to spell it \"connection to server at ... failed\",\n>>> but that sort of wording is surely also possible.\n> \n>> \"connection to server\" rather than \"connection to database\" works for\n>> me; in fact, I think I like it slightly better.\n> \n> If I don't hear any other opinions, I'll change these messages to\n> \n> \"connection to server at socket \\\"%s\\\" failed:\"\n> \"connection to server at \\\"%s\\\" (%s), port %s failed:\"\n> \n> (or maybe \"server on socket\"? \"at\" sounds right for the IP address\n> case, but it feels a little off in the socket pathname case.)\n\nI was just trying some stuff with PG14, which led me to this thread.\n\nI find these new error messages to be more distracting than before in \nsome cases.
For example:\n\nPG13:\n\nclusterdb: error: could not connect to database typo: FATAL: database \n\"typo\" does not exist\n\nPG14:\n\nclusterdb: error: connection to server on socket \"/tmp/.s.PGSQL.65432\" \nfailed: FATAL: database \"typo\" does not exist\n\nThrowing the socket address in there seems a bit distracting and \nmisleading, and it also pushes off the actual information very far to \nthe end. (Also, in some cases the socket path is very long, making the \nactual information even harder to find.) By the time you get to this \nerror, you have already connected, so mentioning the server address \nseems secondary at best.\n\n\n", "msg_date": "Mon, 3 May 2021 12:08:04 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "On Mon, May 3, 2021 at 6:08 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> I find these new error messages to be more distracting than before in\n> some cases. For example:\n>\n> PG13:\n>\n> clusterdb: error: could not connect to database typo: FATAL: database\n> \"typo\" does not exist\n>\n> PG14:\n>\n> clusterdb: error: connection to server on socket \"/tmp/.s.PGSQL.65432\"\n> failed: FATAL: database \"typo\" does not exist\n>\n> Throwing the socket address in there seems a bit distracting and\n> misleading, and it also pushes off the actual information very far to\n> the end. (Also, in some cases the socket path is very long, making the\n> actual information even harder to find.) By the time you get to this\n> error, you have already connected, so mentioning the server address\n> seems secondary at best.\n\nIt feels a little counterintuitive to me too but I am nevertheless\ninclined to believe that it's an improvement. When multi-host\nconnection strings are used, the server address may not be clear. 
In\nfact, even when they're not, it may not be clear to a new user that\nsocket communication is used, and it may not be clear where the socket\nis located. New users may not even realize that there's a socket\ninvolved; I certainly didn't know that for quite a while. It's a lot\nharder for the database name to be unclear, because a particular\nconnection attempt will never try more than one, and also because when\nit's relevant to understanding why the connection failed, the server\nwill hopefully include it in the message string anyway, as here. So\nthe PG13 message is really kind of silly: it tells us the same thing\ntwice, which we must already know, instead of telling us something\nthat we might not know.\n\nIt might be more intuitive in some ways if the socket information were\ndemoted to the end of the message, but I think we'd lose more than we\ngained. The standard way of reporting someone else's error is\nbasically \"what I have to say about the problem: %s\" and that's\nexactly what we're doing here. We could find some way of gluing the\ninformation about the socket onto the end of the server message, but\nit seems unclear how to do that in a way that looks natural, and it\nwould depart from our usual practice. So even though I also find this\nto be a bit distracting, I think we should just live with it, because\neverything else seems worse.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 10:21:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: strange error reporting" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, May 3, 2021 at 6:08 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> Throwing the socket address in there seems a bit distracting and\n>> misleading, and it also pushes off the actual information very far to\n>> the end. 
(Also, in some cases the socket path is very long, making the\n>> actual information even harder to find.) By the time you get to this\n>> error, you have already connected, so mentioning the server address\n>> seems secondary at best.\n\n> It feels a little counterintuitive to me too but I am nevertheless\n> inclined to believe that it's an improvement. When multi-host\n> connection strings are used, the server address may not be clear. In\n> fact, even when they're not, it may not be clear to a new user that\n> socket communication is used, and it may not be clear where the socket\n> is located.\n\nYeah. The specific problem I'm concerned about solving here is\n\"I wasn't connecting to the server I thought I was\", which could be\na contributing factor in almost any connection-time failure. The\nmulti-host-connection-string feature made that issue noticeably worse,\nbut surely we've all seen trouble reports that boiled down to that\neven before that feature came in.\n\nAs you say, we could perhaps redesign the messages to provide this\ninfo in another order. But it'd be difficult, and I think it might\ncome out even more confusing in cases where libpq tried several\nservers on the way to finally failing. The old code's error\nreporting for such cases completely sucked, whereas now you get\na reasonably complete trace of the attempts. 
As a quick example,\nfor a case of bad hostname followed by wrong port:\n\n$ psql -d \"host=foo1,sss2 port=5432,5342\"\npsql: error: could not translate host name \"foo1\" to address: Name or service not known\nconnection to server at \"sss2\" (192.168.1.48), port 5342 failed: Connection refused\n Is the server running on that host and accepting TCP/IP connections?\n\nv13 renders this as\n\n$ psql -d \"host=foo1,sss2 port=5432,5342\"\npsql: error: could not translate host name \"foo1\" to address: Name or service not known\ncould not connect to server: Connection refused\n Is the server running on host \"sss2\" (192.168.1.48) and accepting\n TCP/IP connections on port 5342?\n\nNow, of course the big problem there is the lack of consistency about\nhow the two errors are laid out; but I'd argue that putting the\nserver identity info first is better than putting it later.\n\nAlso, if you experiment with other cases such as some of the servers\ncomplaining about wrong user name, the old behavior is even harder\nto follow about which server said what.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 May 2021 10:47:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strange error reporting" } ]
[ { "msg_contents": "Hi!\n\nWe have a bug report which says that jsonpath ** operator behaves strangely\nin the lax mode [1].\n\nNaturally, the result of this query looks counter-intuitive.\n\n# select jsonb_path_query_array('[{\"a\": 1, \"b\": [{\"a\": 2}]}]', 'lax\n$.**.a');\n jsonb_path_query_array\n------------------------\n [1, 1, 2, 2]\n(1 row)\n\nBut actually, everything works as designed. ** operator reports both\nobjects and wrapping arrays, while object key accessor automatically\nunwraps arrays.\n\n# select x, jsonb_path_query_array(x, '$.a') from jsonb_path_query('[{\"a\":\n1, \"b\": [{\"a\": 2}]}]', 'lax $.**') x;\n x | jsonb_path_query_array\n-----------------------------+------------------------\n [{\"a\": 1, \"b\": [{\"a\": 2}]}] | [1]\n {\"a\": 1, \"b\": [{\"a\": 2}]} | [1]\n 1 | []\n [{\"a\": 2}] | [2]\n {\"a\": 2} | [2]\n 2 | []\n(6 rows)\n\nAt first sight, we may just say that lax mode just sucks and\ncounter-intuitive results are expected. But at the second sight, the lax\nmode is used by default and current behavior may look too surprising.\n\nMy proposal is to make everything after the ** operator use strict mode\n(patch attached). I think this shouldn't be backpatched, just applied to\nthe v14. Other suggestions?\n\nLinks\n1.\nhttps://www.postgresql.org/message-id/16828-2b0229babfad2d8c%40postgresql.org\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Wed, 20 Jan 2021 20:13:05 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Jsonpath ** vs lax mode" }, { "msg_contents": "On 2021-Jan-20, Alexander Korotkov wrote:\n\n> My proposal is to make everything after the ** operator use strict mode\n> (patch attached). I think this shouldn't be backpatched, just applied to\n> the v14. Other suggestions?\n\nI think changing the mode midway through the operation is strange. What\ndo you think of requiring for ** that mode is strict? 
That is, if ** is\nused and the mode is lax, an error is thrown.\n\nThanks\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Wed, 20 Jan 2021 15:16:34 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Jsonpath ** vs lax mode" }, { "msg_contents": "Hi, Alvaro!\n\nThank you for your feedback.\n\nOn Wed, Jan 20, 2021 at 9:16 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Jan-20, Alexander Korotkov wrote:\n>\n> > My proposal is to make everything after the ** operator use strict mode\n> > (patch attached). I think this shouldn't be backpatched, just applied to\n> > the v14. Other suggestions?\n>\n> I think changing the mode midway through the operation is strange. What\n> do you think of requiring for ** that mode is strict? That is, if ** is\n> used and the mode is lax, an error is thrown.\n\nYes, changing mode midway is a bit strange.\n\nRequiring strict mode for ** is a solution, but probably too restrictive...\n\nWhat do you think about making just subsequent accessor after ** not\nto unwrap arrays. That would be a bit tricky to implement, but\nprobably that would better satisfy the user needs.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 21 Jan 2021 12:27:45 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Jsonpath ** vs lax mode" }, { "msg_contents": "Alexander Korotkov schrieb am 20.01.2021 um 18:13:\n> We have a bug report which says that jsonpath ** operator behaves strangely in the lax mode [1].\n\nThat report was from me ;)\n\nThanks for looking into it.\n\n> At first sight, we may just say that lax mode just sucks and\n> counter-intuitive results are expected. 
But at the second sight, the\n> lax mode is used by default and current behavior may look too\n> surprising.\n\nI personally would be fine with the manual stating that the Postgres extension\nto the JSONPath processing that allows a recursive lookup using ** requires strict\nmode to work properly.\n\nIt should probably be documented in chapter 9.16.2 \"The SQL/JSON Path Language\",\nmaybe with a little warning in the description of jsonb_path_query** and in\nchapter 8.14.16 as well (or at least that's where I would expect such a warning)\n\nRegards\nThomas\n\n\n", "msg_date": "Thu, 21 Jan 2021 10:37:58 +0100", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Jsonpath ** vs lax mode" }, { "msg_contents": "On 2021-Jan-21, Alexander Korotkov wrote:\n\n> Requiring strict mode for ** is a solution, but probably too restrictive...\n> \n> What do you think about making just subsequent accessor after ** not\n> to unwrap arrays. That would be a bit tricky to implement, but\n> probably that would better satisfy the user needs.\n\nHmm, why is it too restrictive? If the user needs to further drill into\nthe JSON, can't they chain json_path_query calls, specifying (or\ndefaulting to) lax mode for the part doesn't include the ** expression?\n\nThanks\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Thu, 21 Jan 2021 10:35:10 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Jsonpath ** vs lax mode" }, { "msg_contents": "On Thu, Jan 21, 2021 at 12:38 PM Thomas Kellerer <shammat@gmx.net> wrote:\n> Alexander Korotkov schrieb am 20.01.2021 um 18:13:\n> > We have a bug report which says that jsonpath ** operator behaves strangely in the lax mode [1].\n> That report was from me ;)\n>\n> Thanks for looking into it.\n>\n> > At first sight, we may just say that lax mode just sucks and\n> > counter-intuitive results are expected. 
But at the second sight, the\n> > lax mode is used by default and current behavior may look too\n> > surprising.\n>\n> I personally would be fine with the manual stating that the Postgres extension\n> to the JSONPath processing that allows a recursive lookup using ** requires strict\n> mode to work properly.\n>\n> It should probably be documented in chapter 9.16.2 \"The SQL/JSON Path Language\",\n> maybe with a little warning in the description of jsonb_path_query** and in\n> chapter 8.14.16 as well (or at least that's where I would expect such a warning)\n\nThank you for reporting :)\n\nYeah, documenting the current behavior is something \"must have\". Even\nif we find the appropriate behavior change, I don't think it would\nbe backpatchable. But we need to backpatch the documentation for\nsure. So, let's start by fixing the docs.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 25 Jan 2021 18:31:01 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Jsonpath ** vs lax mode" }, { "msg_contents": "On Thu, Jan 21, 2021 at 4:35 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Jan-21, Alexander Korotkov wrote:\n>\n> > Requiring strict mode for ** is a solution, but probably too restrictive...\n> >\n> > What do you think about making just subsequent accessor after ** not\n> > to unwrap arrays. That would be a bit tricky to implement, but\n> > probably that would better satisfy the user needs.\n>\n> Hmm, why is it too restrictive? If the user needs to further drill into\n> the JSON, can't they chain json_path_query calls, specifying (or\n> defaulting to) lax mode for the part doesn't include the ** expression?\n\nFor sure, there are some workarounds. But I don't think all the\nlax-mode queries involving ** are affected. So, it might happen that\nwe force users to use strict-mode or chain call even if it's not\nnecessary. 
I'm tending to just fix the doc and wait if there are more\ncomplaints :)\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 25 Jan 2021 18:33:50 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Jsonpath ** vs lax mode" }, { "msg_contents": "On Mon, Jan 25, 2021 at 6:33 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Thu, Jan 21, 2021 at 4:35 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > On 2021-Jan-21, Alexander Korotkov wrote:\n> >\n> > > Requiring strict mode for ** is a solution, but probably too restrictive...\n> > >\n> > > What do you think about making just subsequent accessor after ** not\n> > > to unwrap arrays. That would be a bit tricky to implement, but\n> > > probably that would better satisfy the user needs.\n> >\n> > Hmm, why is it too restrictive? If the user needs to further drill into\n> > the JSON, can't they chain json_path_query calls, specifying (or\n> > defaulting to) lax mode for the part doesn't include the ** expression?\n>\n> For sure, there are some workarounds. But I don't think all the\n> lax-mode queries involving ** are affected. So, it might happen that\n> we force users to use strict-mode or chain call even if it's not\n> necessary. I'm tending to just fix the doc and wait if there are more\n> complaints :)\n\nThe patch, which clarifies this situation in the docs is attached.\nI'm going to push it if no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Fri, 29 Jan 2021 00:44:17 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Jsonpath ** vs lax mode" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> The patch, which clarifies this situation in the docs is attached.\n> I'm going to push it if no objections.\n\n+1, but the English in this seems a bit shaky. 
Perhaps more\nlike the attached?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 28 Jan 2021 19:02:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Jsonpath ** vs lax mode" } ]
[ { "msg_contents": "Some catalog tables have references to pg_attribute.attnum.\n\nIn the documentation, it only says \"(references pg_attribute.attnum)\"\nbut not which oid column to include in the two-column \"foreign key\".\n\nThis would not be a problem if there would only be one reference to pg_class.oid,\nbut some catalog tables have multiple columns that references pg_class.oid.\n\nFor instance, pg_constraint has two columns (conkey, confkey) referencing pg_attribute,\nand three columns (conrelid, conindid, confrelid) referencing pg_class.\n\nA user might wonder:\n- Which one of these three columns should be used in combination with the conkey/confkey elements to join pg_attribute?\n\nIf we would have array foreign key support, I would guess the \"foreign keys\" should be:\n\nFOREIGN KEY (confrelid, EACH ELEMENT OF confkey) REFERENCES pg_catalog.pg_attribute (attrelid, attnum)\nFOREIGN KEY (conrelid, EACH ELEMENT OF conkey) REFERENCES pg_catalog.pg_attribute (attrelid, attnum)\n\nIt's of course harder to guess for a machine though, which would need a separate human-produced lookup-table.\n\nCould it be meaningful to clarify these multi-key relations in the documentation?\n\nAs a bonus, machines could then parse the information out of catalogs.sgml.\n\nHere is a list of catalogs referencing pg_attribute and with multiple pg_class references:\n\n table_name | array_agg\n----------------------+---------------------------------------\npg_constraint | {confrelid,conindid,conrelid}\npg_index | {indexrelid,indrelid}\npg_partitioned_table | {partdefid,partrelid}\npg_trigger | {tgconstrindid,tgconstrrelid,tgrelid}\n(4 rows)\n\nProduced using query:\n\nSELECT b.table_name, array_agg(DISTINCT b.column_name)\nFROM pit.oid_joins AS a\nJOIN pit.oid_joins AS b\nON b.table_name = a.table_name\nWHERE a.ref_table_name = 'pg_attribute'\nAND b.ref_table_name = 'pg_class'\nGROUP BY b.table_name\nHAVING cardinality(array_agg(DISTINCT b.column_name)) > 1\n;", "msg_date": "Wed, 20 Jan 2021 19:57:32 +0100", "msg_from": "\"Joel Jacobson\" 
<joel@compiler.org>", "msg_from_op": true, "msg_subject": "catalogs.sgml documentation ambiguity" } ]
[ { "msg_contents": "Hackers,\n\nIt looks like both heapgettup() and heapgettup_pagemode() are coded\nincorrectly when setting the page to start the scan on for a backwards\nscan when heap_setscanlimits() has been used.\n\nIt looks like the code was not updated during 7516f5259.\n\nThe current code is:\n\n/* start from last page of the scan */\nif (scan->rs_startblock > 0)\n page = scan->rs_startblock - 1;\nelse\n page = scan->rs_nblocks - 1;\n\n\nWhere rs_startblock is either the sync scan start location, or the\nstart page set by heap_setscanlimits(). rs_nblocks is the number of\nblocks in the relation.\n\nLet's say we have a 100 block relation and we want to scan blocks 10\nto 30 in a backwards order. We'll do heap_setscanlimits(scan, 10, 21);\nto indicate that we want to scan 21 blocks starting at page 10 and\nfinishing after scanning page 30.\n\nWhat the code above does is wrong. Since rs_startblock is > 0 we'll execute:\n\npage = scan->rs_nblocks - 1;\n\ni.e. 99, and then proceed to scan all blocks down to 78. Oops. Not\nquite the 10 to 30 that we asked for.\n\nNow, it does not appear that there are any live bugs here, in core at\nleast. The only usage I see of heap_setscanlimits() is in\nheapam_index_build_range_scan(), in which I see the scan is a forward\nscan. I only noticed the bug as I'm in the middle of fixing up [1] to\nimplement backwards TID Range scans.\n\nProposed patch attached.\n\nSince this is not a live bug, is it worth a backpatch? 
I guess some\nextensions could suffer from this, I'm just not sure how likely that\nis as if anyone is doing backwards scanning with a setscanlimits set,\nthen they'd surely have noticed this already!?\n\nDavid\n\n[1] https://postgr.es/m/CAMyN-kB-nFTkF=VA_JPwFNo08S0d-Yk0F741S2B7LDmYAi8eyA@mail.gmail.com", "msg_date": "Thu, 21 Jan 2021 13:16:55 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Heap's backwards scan scans the incorrect pages with\n heap_setscanlimits()" }, { "msg_contents": "On Thu, 21 Jan 2021 at 13:16, David Rowley <dgrowleyml@gmail.com> wrote:\n> Proposed patch attached.\n\nI ended up pushing a slightly revised version of this which kept the\ncode the same as before when rs_numblocks had not been changed. I\nbackpatched to 9.5 as it seemed low risk and worthy of stopping some\nhead-scratching and a future report for any extension authors that\nmake use of heap_setscanlimits() with backwards scans at some point in\nthe future.\n\nDavid\n\n\n", "msg_date": "Mon, 25 Jan 2021 20:05:19 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Heap's backwards scan scans the incorrect pages with\n heap_setscanlimits()" } ]
[ { "msg_contents": "While analyzing one of the customer issues, based on the log it\nappeared that there is a race condition in the recovery process.\n\nThe summary of the issue is that one of the standbys is promoted as\nthe new primary (Node2) and another standby (Node3) is restarted and\nset the primary_info and the restore_command so that it can\nstream/restore from Node2 (new primary). The problem is that during\nthe promotion the timeline switch happened in the middle of the\nsegment in Node2 and the Node3 is able to restore the newTLI.history\nfile from the archive but the WAL file with the new TLI is not yet\narchived. Now, we will try to stream the wal file from the primary\nbut if we are fetching the checkpoint that time we will not use the\nlatest timeline instead we will try with the checkpoint timeline and\nwalsender will send the WAL from the new timeline file because\nrequested WAL recptr is before the TLI switch. And once that\nhappened the expectedTLEs will be set based on the oldTLI.history\nfile. Now, whenever we try to restore the required WAL file and\nrecoveryTargetTimeLineGoal is set to the latest we again try to rescan\nthe latest timeline (rescanLatestTimeLine) but the problem is\nrecoveryTargetTLI was already set to the latest timeline. So now the\nproblem is expectedTLEs is set to oldTLI and recoveryTargetTLI is set\nto newTLI and rescanLatestTimeLine will never update the expectedTLEs.\nNow, walsender will fail to stream new WAL using the old TLI and the\narchiver process will also fail because that file doesn't exist\nanymore (converted to .partial). Basically, now we will never try\nwith the newTLI.\n\nI have given the sequence of the events based on my analysis.\n\nRefer to the sequence of events\n-----------------------------------------\nNode1 primary, Node2 standby1, Node3 standby2\n\n1. Node2 got promoted to new primary, and node 2 picked new TL 13 in\nthe middle of the segment.\n2. 
Node3, restarted with new primary info of Node2 and restore command\n3. Node3, found the newest TL13 in validateRecoveryParameters()\nBecause the latest TL was requested in recovery.conf (history file\nrestored from TL13) and set recoveryTargetTLI to 13\nSo the point to note is that recoveryTargetTLI is set to 13 but expectedTLEs is\nnot yet set.\n4. Node3, entered into the standby mode.\n5. Node3, tries to read the checkpoint Record, on Node3 still the\ncheckpoint TL (ControlFile->checkPointCopy.ThisTimeLineID) is 12.\n6. Node3, tries to get the checkpoint record file using new TL13 from\nthe archive, which it should ideally get, but it may not if Node2\nhasn't yet archived it.\n7. Node3, tries to stream from primary but using TL12 because\nControlFile->checkPointCopy.ThisTimeLineID is 12.\n8. Node3, get it because walsender of Node2 read it from TL13 and send\nit and Node2 write in the new WAL file but with TL12.\n\nWalSndSegmentOpen()\n{\n/*-------\n* When reading from a historic timeline, and there is a timeline switch\n* within this segment, read from the WAL segment belonging to the new\n* timeline.\n}\n\n9. Node3, now set the expectedTLEs to 12 because that is what\nwalreceiver has streamed the WAL using.\n\n10. Node3, now recoveryTargetTLI is 13 and expectedTLEs is 12. So\nwhenever it tries to find the latest TLE (rescanLatestTimeLine) it\nfinds it is 13 which is the same as recoveryTargetTLI so nothing to\nchange but expectedTLEs is 12 using which it will try to\nrestore/stream further WAL and fail every time.\n\nrescanLatestTimeLine(void)\n{\n....\nnewtarget = findNewestTimeLine(recoveryTargetTLI);\nif (newtarget == recoveryTargetTLI)\n{\n /* No new timelines found */\n return false;\n}\n\n11. 
So now the situation is that ideally in rescanLatestTimeLine() we\nshould get the latest TLI but it assumes that it is already on the\nlatest TLI because recoveryTargetTLI is on the latest TLI.\n\nOther points to be noted:\n- The actual issue happened on 9.6.11 but based on the code comparison\nit appeared that the same can occur on the latest code as well.\n- After Node3 was shut down, WAL files from its pg_wal/ directory were removed\nso that it could follow the new primary.\n\nBased on the sequence of events, it is quite clear that something is\nwrong in rescanLatestTimeLine, maybe after selecting the latest TLI we\nshould compare it with the head of the expectedTLEs as well instead of\njust comparing it to the recoveryTargetTLI?\n\n\nLog from Node2:\n2020-12-22 04:49:02 UTC [1019]: [9-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: received promote request\n2020-12-22 04:49:02 UTC [1019]: [10-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: redo done at 0/F8000028\n2020-12-22 04:49:02 UTC [1019]: [11-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: last completed transaction was at log time\n2020-12-22 04:48:01.833476+00\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000C00000000000000F8\"\nfailed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous\nerrors) (code 23) at main.c(1179) [sender=3.1.2]\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000D.history\" failed:\nNo such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous\nerrors) (code 23) at main.c(1179) [sender=3.1.2]\n2020-12-22 04:49:02 UTC [1019]: [12-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: selected new timeline ID: 13\n2020-12-22 04:49:02 UTC [1019]: [13-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: archive recovery complete\n\n\nLog From Node3 (with pointwise analysis):\n\n1. 
Node3 restarted, restored 0000000D.history from archive and\nrecoveryTargetTLI is set to 13\n2020-12-22 04:49:40 UTC [2896]: [2-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: database system is shut down\n2020-12-22 04:49:40 UTC [2872]: [6-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: database system is shut down\n2020-12-22 04:49:41 UTC [9082]: [1-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: database system was shut down in recovery at\n2020-12-22 04:49:40 UTC\n2020-12-22 04:49:41 UTC [9082]: [2-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: creating missing WAL directory\n\"pg_xlog/archive_status\"\n2020-12-22 04:49:41 UTC [9082]: [3-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: restored log file \"0000000D.history\" from archive\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000E.history\" failed:\nNo such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous\nerrors) (code 23) at main.c(1179) [sender=3.1.2]\n2020-12-22 04:49:41 UTC [9082]: [4-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: entering standby mode\n2020-12-22 04:49:41 UTC [9082]: [5-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: restored log file \"0000000D.history\" from archive\n\n\n2. 
Tries to get the WAL file with different timelines from the archive\nbut did not get it, so expectedTLEs is not yet set\n\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000D00000000000000F8\"\nfailed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous\nerrors) (code 23) at main.c(1179) [sender=3.1.2]\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000C00000000000000F8\"\nfailed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous\nerrors) (code 23) at main.c(1179) [sender=3.1.2]\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000B00000000000000F8\"\nfailed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous\nerrors) (code 23) at main.c(1179) [sender=3.1.2]\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000100000000000000F8\"\nfailed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous\nerrors) (code 23) at main.c(1179) [sender=3.1.2]\n\n3. Since we are fetching the checkpoint record, we use the checkpoint TLI\nwhich is 12, although the primary doesn't have the 0000000C00000000000000F8\nfile as it renamed it to 0000000C00000000000000F8.partial\n\nBut there is logic in walsender that if the requested WAL is there in\nthe current TLI then send from there, so it will stream from the\n0000000D00000000000000F8 file\n\n2020-12-22 04:49:42 UTC [9105]: [1-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: fetching timeline history file for timeline 12\nfrom primary server\n2020-12-22 04:49:42 UTC [9105]: [2-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: started streaming WAL from primary at 0/F8000000\non timeline 12\n2020-12-22 04:49:42 UTC [9105]: [3-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: replication terminated by primary server\n2020-12-22 04:49:42 UTC [9105]: [4-1] user=; db=; app=; client=;\nSQLcode=00000 DETAIL: End of WAL reached on timeline 12 at\n0/F8000098.\n\n\n\n4. 
Now, since walreceiver assumes that it has restored the WAL from\nTL 12, the receiveTLI is 12 and the expectedTLEs is set based on the\n0000000C.history.\nSee the logic below in WaitForWalToBecomeAvailable\n if (readFile < 0)\n {\n if (!expectedTLEs)\n expectedTLEs = readTimeLineHistory(receiveTLI);\n\n\n2020-12-22 04:49:42 UTC [9082]: [6-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: restored log file \"0000000C.history\" from archive\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000E.history\" failed:\nNo such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous\nerrors) (code 23) at main.c(1179) [sender=3.1.2]\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000C00000000000000F8\"\nfailed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous\nerrors) (code 23) at main.c(1179) [sender=3.1.2]\n2020-12-22 04:49:42 UTC [9082]: [7-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: restored log file \"0000000C.history\" from archive\n2020-12-22 04:49:42 UTC [9082]: [8-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: consistent recovery state reached at 0/F8000098\n2020-12-22 04:49:42 UTC [9082]: [9-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: invalid record length at 0/F8000098: wanted 24,\ngot 0\n2020-12-22 04:49:42 UTC [9074]: [3-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: database system is ready to accept read only\nconnections\n2020-12-22 04:49:42 UTC [9105]: [5-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: restarted WAL streaming at 0/F8000000 on timeline\n12\n2020-12-22 04:49:42 UTC [9105]: [6-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: replication terminated by primary server\n2020-12-22 04:49:42 UTC [9105]: [7-1] user=; db=; app=; client=;\nSQLcode=00000 DETAIL: End of WAL reached on timeline 12 at\n0/F8000098.\n\n\n4. 
Now, the head of expectedTLEs is 12 and recoveryTargetTLI is 13, so in\nrescanLatestTimeLine we always assume we are at the latest TLI, but we\ntry to restore from the archive using expectedTLEs, which points to the old TLI.\n\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000E.history\" failed:\nNo such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous\nerrors) (code 23) at main.c(1179) [sender=3.1.2]\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000C00000000000000F8\"\nfailed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous\nerrors) (code 23) at main.c(1179) [sender=3.1.2]\n2020-12-22 04:49:47 UTC [9105]: [8-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: restarted WAL streaming at 0/F8000000 on timeline\n12\n2020-12-22 04:49:47 UTC [9105]: [9-1] user=; db=; app=; client=;\nSQLcode=00000 LOG: replication terminated by primary server\n2020-12-22 04:49:47 UTC [9105]: [10-1] user=; db=; app=; client=;\nSQLcode=00000 DETAIL: End of WAL reached on timeline 12 at\n0/F8000098.\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000E.history\" failed:\nNo such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous\nerrors) (code 23) at main.c(1179) [sender=3.1.2]\nrsync: link_stat \"/wal_archive/ins30wfm02dbs/0000000C00000000000000F8\"\nfailed: No such file or directory (2)\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Jan 2021 14:30:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Race condition in recovery?" }, { "msg_contents": "On Thu, Jan 21, 2021 at 4:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> 8. 
Node3, get it because walsender of Node2 read it from TL13 and send\nit and Node2 write in the new WAL file but with TL12.\n\nWalSndSegmentOpen()\n{\n/*-------\n* When reading from a historic timeline, and there is a timeline switch\n* within this segment, read from the WAL segment belonging to the new\n* timeline.\n}\n\n9. Node3, now set the expectedTLEs to 12 because that is what\nwalreceiver has streamed the WAL using.\n\nThis seems to be incorrect, because the comment for expectedTLEs says:\n\n * expectedTLEs: a list of TimeLineHistoryEntries for\nrecoveryTargetTLI and the timelines of\n * its known parents, newest first (so recoveryTargetTLI is always the\n * first list member). Only these TLIs are expected to be seen in the WAL\n * segments we read, and indeed only these TLIs will be considered as\n * candidate WAL files to open at all.\n\nBut in your scenario apparently we end up with a situation that\ncontradicts that, because you go on to say:\n\n> 10. Node3, now recoveryTargetTLI is 13 and expectedTLEs is 12. So\n\nAs I understand, expectedTLEs should end up being a list where the\nfirst element is the timeline we want to end up on, and the last\nelement is the timeline where we are now, and every timeline in the\nlist branches off of the next timeline in the list. So here if 13\nbranches off of 12 then the list should be 13,12 not just 12; and if 13\ndoes not branch off of 12 OR if 13 branches off of 12 at an earlier\npoint in the WAL stream than where we are now, then that should be an\nerror that shuts down the standby, because then there is no way for\nreplay to get from where it is now to the correct timeline.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Jan 2021 15:35:40 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "On Fri, Jan 22, 2021 at 2:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jan 21, 2021 at 4:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > 8. Node3, get it because walsender of Node2 read it from TL13 and send\n> > it and Node2 write in the new WAL file but with TL12.\n> >\n> > WalSndSegmentOpen()\n> > {\n> > /*-------\n> > * When reading from a historic timeline, and there is a timeline switch\n> > * within this segment, read from the WAL segment belonging to the new\n> > * timeline.\n> > }\n> >\n> > 9. Node3, now set the expectedTLEs to 12 because that is what\n> > walreceiver has streamed the WAL using.\n>\n> This seems to be incorrect, because the comment for expectedTLEs says:\n>\n> * expectedTLEs: a list of TimeLineHistoryEntries for\n> recoveryTargetTLI and the timelines of\n> * its known parents, newest first (so recoveryTargetTLI is always the\n> * first list member). Only these TLIs are expected to be seen in the WAL\n> * segments we read, and indeed only these TLIs will be considered as\n> * candidate WAL files to open at all.\n>\n> But in your scenario apparently we end up with a situation that\n> contradicts that, because you go on to say:\n>\n> > 10. Node3, now recoveryTargetTLI is 13 and expectedTLEs is 12. So\n>\n> As I understand, expectedTLEs should end up being a list where the\n> first element is the timeline we want to end up on, and the last\n> element is the timeline where we are now, and every timeline in the\n> list branches off of the next timeline in the list. So here if 13\n> branches of 12 then the list should be 13,12 not just 12; and if 13\n> does not branch off of 12 OR if 13 branches off of 12 at an earlier\n> point in the WAL stream than where we are now, then that should be an\n> error that shuts down the standby, because then there is no way for\n> replay to get from where it is now to the correct timeline.\n\nYeah, I agree with this. 
So IMHO the expectedTLEs should be set based\non the recoveryTargetTLI instead of receiveTLI. Based on the\nexpectedTLEs definition it can never be correct to set it based on the\nreceiveTLI.\n\nBasically, the simple fix could be this.\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex b18257c198..465bc7c929 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -12533,7 +12533,8 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n if (readFile < 0)\n {\n     if (!expectedTLEs)\n-        expectedTLEs = readTimeLineHistory(receiveTLI);\n+        expectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n+\n     readFile = XLogFileRead(readSegNo, PANIC,\n                             receiveTLI,\n                             XLOG_FROM_STREAM, false);\n\nBut I am not sure whether this adjustment (setting it based on\nreceiveTLI) was done based on some analysis or as part of some bug fix. I\nwill try to find the history of this and maybe based on that we can\nmake a better decision.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 23 Jan 2021 10:06:56 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Sat, Jan 23, 2021 at 10:06 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> But I am afraid that whether this adjustment (setting based on\n> receiveTLI) is done based on some analysis or part of some bug fix. 
I\n> will try to find the history of this and maybe based on that we can\n> make a better decision.\n\nI have done further analysis of this; this behavior of initializing\nexpectedTLEs from receiveTLI instead of recoveryTargetTLI was\nintroduced in the commit below.\n\n=====\nee994272ca50f70b53074f0febaec97e28f83c4e\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi> 2013-01-03 14:11:58\nCommitter: Heikki Linnakangas <heikki.linnakangas@iki.fi> 2013-01-03 14:11:58\n\n Delay reading timeline history file until it's fetched from master.\n\n Streaming replication can fetch any missing timeline history files from the\n master, but recovery would read the timeline history file for the target\n timeline before reading the checkpoint record, and before walreceiver has\n had a chance to fetch it from the master. Delay reading it, and the sanity\n checks involving timeline history, until after reading the checkpoint\n record.\n\n There is at least one scenario where this makes a difference: if you take\n a base backup from a standby server right after a timeline switch, the\n WAL segment containing the initial checkpoint record will begin with an\n older timeline ID. Without the timeline history file, recovering that file\n will fail as the older timeline ID is not recognized to be an ancestor of\n the target timeline. If you try to recover from such a backup, using only\n streaming replication to fetch the WAL, this patch is required for that to\n work.\n=====\n\nI did not understand one point about this commit message: \"Without the\ntimeline history file, recovering that file will fail as the older\ntimeline ID is not recognized to be an ancestor of the target\ntimeline.\" I mean, in which case will this be true?\n\nNow the problem is that we have initialized the expectedTLEs based on\nthe older timeline history file instead of recoveryTargetTLI, which is\nbreaking the sanity of expectedTLEs. 
So in the function below\n(rescanLatestTimeLine), if we find that the newest TLI is the same as\nrecoveryTargetTLI, then we assume we don't need to do anything, but the\nproblem is that expectedTLEs is set to the wrong target and we never\nupdate it unless the timeline changes again. So I think first we need to\nidentify what the above commit is trying to achieve, and then see whether\nwe can do it in a better way without breaking the sanity of expectedTLEs.\n\nrescanLatestTimeLine(void)\n{\n....\nnewtarget = findNewestTimeLine(recoveryTargetTLI);\nif (newtarget == recoveryTargetTLI)\n{\n/* No new timelines found */\nreturn false;\n}\n...\nnewExpectedTLEs = readTimeLineHistory(newtarget);\n...\nexpectedTLEs = newExpectedTLEs;\n}\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Mar 2021 15:14:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Tue, Mar 2, 2021 at 3:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> =====\n> ee994272ca50f70b53074f0febaec97e28f83c4e\n> Author: Heikki Linnakangas <heikki.linnakangas@iki.fi> 2013-01-03 14:11:58\n> Committer: Heikki Linnakangas <heikki.linnakangas@iki.fi> 2013-01-03 14:11:58\n>\n> Delay reading timeline history file until it's fetched from master.\n>\n> Streaming replication can fetch any missing timeline history files from the\n> master, but recovery would read the timeline history file for the target\n> timeline before reading the checkpoint record, and before walreceiver has\n> had a chance to fetch it from the master. Delay reading it, and the sanity\n> checks involving timeline history, until after reading the checkpoint\n> record.\n>\n> There is at least one scenario where this makes a difference: if you take\n> a base backup from a standby server right after a timeline switch, the\n> WAL segment containing the initial checkpoint record will begin with an\n> older timeline ID. 
Without the timeline history file, recovering that file\n> will fail as the older timeline ID is not recognized to be an ancestor of\n> the target timeline. If you try to recover from such a backup, using only\n> streaming replication to fetch the WAL, this patch is required for that to\n> work.\n> =====\n\nThe above commit avoid initializing the expectedTLEs from the\nrecoveryTargetTLI as shown in below hunk from this commit.\n\n@@ -5279,49 +5299,6 @@ StartupXLOG(void)\n */\n readRecoveryCommandFile();\n\n- /* Now we can determine the list of expected TLIs */\n- expectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n-\n\nI think the fix for the problem will be that, after reading/validating\nthe checkpoint record, we can free the current value of expectedTLEs\nand reinitialize it based on the recoveryTargetTLI as shown in the\nattached patch?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 4 May 2021 17:41:06 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Tue, 4 May 2021 17:41:06 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> I think the fix for the problem will be that, after reading/validating\n> the checkpoint record, we can free the current value of expectedTLEs\n> and reinitialize it based on the recoveryTargetTLI as shown in the\n> attached patch?\n\nI'm not sure I understand the issue here. I think that the attached\nshould reproduce the issue mentioned here, but didn't for me.\n\nThe result of running the attached test script is shown below. TLIs\nare adjusted in your descriptions cited below. The lines prefixed by\nNodeN> are the server log lines written while running the attached\ntest script.\n\n> 1. Node2 got promoted to new primary, and node 2 picked new TL 2 in\n> the middle of the segment 3.\n\nNode2> LOG: selected new timeline ID: 2\n\n> 2. 
Node3, restarted with new primary info of Node2 and restore command\n\nNode2> node_3 LOG: received replication command: IDENTIFY_SYSTEM\n\n> 3. Node3, found the newest TL2 in validateRecoveryParameters() Because\n> the latest TL was requested in recovery.conf (history file restored\n> from TL2) and set recoveryTargetTLI to 2 So point to note is\n> recoveryTargetTLI is set to 2 but expectedTLEs is not yet set.\n\nThis means you specified recovery_target_timeline? Either way,\nexpectedTLEs is not relevant to the behavior here. Even if\nrecovery_target_timeline is set to latest, findNewestTimeLine doesn't\nlook it.\n\nNode3> LOG: restored log file \"00000002.history\" from archive\n \n> 4. Node3, entered into the standby mode.\n\nNode3> LOG: entering standby mode\n\n> 5. Node3, tries to read the checkpoint Record, on Node3 still the\n> checkpoint TL (ControlFile->checkPointCopy.ThisTimeLineID) is 1.\n\nexpectedTLEs is loaded just before fetching the last checkpoint.\n\nReadCheckpointRecord doesn't consider checkPointCopy.ThisTimeLineID.\n\nThe reason for the checkpoint TLI is that the segment file was that of\nthe newest TLI in expectedTLEs found in pg_wal directory. If the\nsegment for TLI=2 containing the last checkpoint had been archived,\ncheckpoint record would be read as TLI=2. Replication starts at TLI=2\nin this case because archive recovery has reached that timeline.\n(Turn on the optional section in the attached test script to see this\nbehavior.) This is the expected behavior since we assume that the\nsegment files for TLI=n and n+1 are identical in the TLI=n part.\n\nAnyway the checkpoint that is read is on TLI=1 in this case and\nreplication starts at TLI=1.\n\nNode3> LOG: Checkpoint record: TLI=1, 0/3014F78\n \n> 6. Node3, tries to get the checkpoint record file using new TL2 from\n> the archive which it should get ideally but it may not if the Node2\n> haven't yet archived it.\n\nThis doesn't happen for me. 
Instead, node3 runs recovery from the\ncheckpoint up to the end of the archived WAL. In this case the end\npoint is 3014FF0@TLI=1.\n\nNode3> LOG: invalid record length at 0/3014FF0: wanted 24, got 0\n\nThen, node3 connects to node2 requesting TLI=1 because the history\nfile (or expectedTLEs) told that the LSN belongs to TLI=1.\n\nNode3> LOG: 0/3014FF0 is on TLI 1\nNode3> LOG: started streaming WAL from primary at 0/3000000 on timeline 1\n\nAfter a while node2 finds a timeline switch and disconnects the\nreplication.\n\nNode3> LOG: replication terminated by primary server\nNode3> DETAIL: End of WAL reached on timeline 1 at 0/3029A68.\n\nAfter scanning the archive and pg_wal ends in failure, node3 correctly\nrequests node2 for TLI=2 because expectedTLEs told that the current\nLSN belongs to TLI=2.\n\nNode3> LOG: 0/3029A68 is on TLI 2\nNode3> LOG: restarted WAL streaming at 0/3000000 on timeline 2\n\nFinally, the items below don't happen for me, because node3 need not\ngo back to the last checkpoint any longer. Perhaps the script is\nfailing to reproduce your issue correctly.\n\n> 7. Node3, tries to stream from primary but using TL1 because\n> ControlFile->checkPointCopy.ThisTimeLineID is 1.\n\nAs mentioned above, the checkPointCopy.ThisTimeLineID on either the\nprimary or the secondary is irrelevant to the timeline the primary\nsends. The primary streams the timeline requested by the secondary.\n\n> 8. Node3, get it because walsender of Node2 read it from TL2 and send\n> it and Node2 write in the new WAL file but with TL1.\n\nWalsender streams the TLI requested by walreceiver, then disconnects\nat the end of the TLI, notifying node3 of the next TLI. Node3\nre-establishes replication with the new TLI. Looking into pg_wal of\nnode3, segment 3 for both TLI=1 and 2 is filled with the correct\ncontent.\n\nSo... I don't understand what you are saying the race condition is.\n\nAn issue that may be slightly relevant to this case has been raised\n[1]. 
But it is about writing end-of-recovery checkpoint into the older\ntimeline.\n\nCould you please fix the test script so that it causes your issue\ncorrectly? And/or elaborate a bit more?\n\nThe attached first file is the debugging aid logging. The second is\nthe test script, to be placed in src/test/recovery/t.\n\n\n1: https://www.postgresql.org/message-id/CAE-ML%2B_EjH_fzfq1F3RJ1%3DXaaNG%3D-Jz-i3JqkNhXiLAsM3z-Ew%40mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/access/transam/timeline.c b/src/backend/access/transam/timeline.c\nindex 8d0903c175..9483fd055c 100644\n--- a/src/backend/access/transam/timeline.c\n+++ b/src/backend/access/transam/timeline.c\n@@ -55,6 +55,7 @@ restoreTimeLineHistoryFiles(TimeLineID begin, TimeLineID end)\n \n \tfor (tli = begin; tli < end; tli++)\n \t{\n+\t\telog(LOG, \"Trying restoring history file for TLI=%d\", tli);\n \t\tif (tli == 1)\n \t\t\tcontinue;\n \n@@ -95,6 +96,7 @@ readTimeLineHistory(TimeLineID targetTLI)\n \n \tif (ArchiveRecoveryRequested)\n \t{\n+\t\telog(LOG, \"Trying reading history file for TLI=%d\", targetTLI);\n \t\tTLHistoryFileName(histfname, targetTLI);\n \t\tfromArchive =\n \t\t\tRestoreArchivedFile(path, histfname, \"RECOVERYHISTORY\", 0, false);\n@@ -231,6 +233,7 @@ existsTimeLineHistory(TimeLineID probeTLI)\n \n \tif (ArchiveRecoveryRequested)\n \t{\n+\t\telog(LOG, \"Probing history file for TLI=%d\", probeTLI);\n \t\tTLHistoryFileName(histfname, probeTLI);\n \t\tRestoreArchivedFile(path, histfname, \"RECOVERYHISTORY\", 0, false);\n \t}\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex adfc6f67e2..e31e1f0ce3 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -3732,6 +3732,7 @@ XLogFileRead(XLogSegNo segno, int emode, TimeLineID tli,\n \t\t\tsnprintf(activitymsg, sizeof(activitymsg), \"waiting for %s\",\n \t\t\t\t\t xlogfname);\n 
\t\t\tset_ps_display(activitymsg);\n+\t\t\telog(LOG, \"Trying fetching history file for TLI=%d\", tli);\n \t\t\trestoredFromArchive = RestoreArchivedFile(path, xlogfname,\n \t\t\t\t\t\t\t\t\t\t\t\t\t \"RECOVERYXLOG\",\n \t\t\t\t\t\t\t\t\t\t\t\t\t wal_segment_size,\n@@ -3825,7 +3826,10 @@ XLogFileReadAnyTLI(XLogSegNo segno, int emode, XLogSource source)\n \tif (expectedTLEs)\n \t\ttles = expectedTLEs;\n \telse\n+\t{\n+\t\telog(LOG, \"Loading history file for TLI=%d (2)\", recoveryTargetTLI);\n \t\ttles = readTimeLineHistory(recoveryTargetTLI);\n+\t}\n \n \tforeach(cell, tles)\n \t{\n@@ -3839,6 +3843,8 @@ XLogFileReadAnyTLI(XLogSegNo segno, int emode, XLogSource source)\n \t\t * Skip scanning the timeline ID that the logfile segment to read\n \t\t * doesn't belong to\n \t\t */\n+\t\telog(LOG, \"scanning segment %lX TLI %d, source %d\", segno, tli, source);\n+\n \t\tif (hent->begin != InvalidXLogRecPtr)\n \t\t{\n \t\t\tXLogSegNo\tbeginseg = 0;\n@@ -3865,6 +3871,7 @@ XLogFileReadAnyTLI(XLogSegNo segno, int emode, XLogSource source)\n \t\t\t\t\t\t\t XLOG_FROM_ARCHIVE, true);\n \t\t\tif (fd != -1)\n \t\t\t{\n+\t\t\t\telog(LOG, \"found segment %lX TLI %d, from archive\", segno, tli);\n \t\t\t\telog(DEBUG1, \"got WAL segment from archive\");\n \t\t\t\tif (!expectedTLEs)\n \t\t\t\t\texpectedTLEs = tles;\n@@ -3878,6 +3885,7 @@ XLogFileReadAnyTLI(XLogSegNo segno, int emode, XLogSource source)\n \t\t\t\t\t\t\t XLOG_FROM_PG_WAL, true);\n \t\t\tif (fd != -1)\n \t\t\t{\n+\t\t\t\telog(LOG, \"found segment %lX TLI %d, from PG_WAL\", segno, tli);\n \t\t\t\tif (!expectedTLEs)\n \t\t\t\t\texpectedTLEs = tles;\n \t\t\t\treturn fd;\n@@ -8421,7 +8429,7 @@ ReadCheckpointRecord(XLogReaderState *xlogreader, XLogRecPtr RecPtr,\n \n \tXLogBeginRead(xlogreader, RecPtr);\n \trecord = ReadRecord(xlogreader, LOG, true);\n-\n+\telog(LOG, \"Checkpoint record: TLI=%d, %X/%X, rectargetTLI=%d, exptles=%p\", xlogreader->seg.ws_tli, LSN_FORMAT_ARGS(xlogreader->ReadRecPtr), recoveryTargetTLI, 
expectedTLEs);\n \tif (record == NULL)\n \t{\n \t\tif (!report)\n@@ -12628,7 +12636,7 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n \t\t\t\t\t\t\t * TLI, rather than the position we're reading.\n \t\t\t\t\t\t\t */\n \t\t\t\t\t\t\ttli = tliOfPointInHistory(tliRecPtr, expectedTLEs);\n-\n+\t\t\t\t\t\t\telog(LOG, \"%X/%X is on TLI %X\", LSN_FORMAT_ARGS(tliRecPtr), tli);\n \t\t\t\t\t\t\tif (curFileTLI > 0 && tli < curFileTLI)\n \t\t\t\t\t\t\t\telog(ERROR, \"according to history file, WAL location %X/%X belongs to timeline %u, but previous recovered WAL file came from timeline %u\",\n \t\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(tliRecPtr),\n@@ -12697,7 +12705,10 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n \t\t\t\t\t\tif (readFile < 0)\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\tif (!expectedTLEs)\n+\t\t\t\t\t\t\t{\n+\t\t\t\t\t\t\t\telog(LOG, \"Loading expectedTLEs for %d\", receiveTLI);\n \t\t\t\t\t\t\t\texpectedTLEs = readTimeLineHistory(receiveTLI);\n+\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\treadFile = XLogFileRead(readSegNo, PANIC,\n \t\t\t\t\t\t\t\t\t\t\t\t\treceiveTLI,\n \t\t\t\t\t\t\t\t\t\t\t\t\tXLOG_FROM_STREAM, false);\n\n# Minimal test testing streaming replication\nuse strict;\nuse warnings;\nuse PostgresNode;\nuse TestLib;\nuse Test::More tests => 2;\n\nmy $primary = get_new_node('primary');\n$primary->init(allows_streaming => 1);\n$primary->start;\nmy $backup_name = 'my_backup';\n\n$primary->backup($backup_name);\n\nmy $node_2 = get_new_node('node_2');\n$node_2->init_from_backup($primary, $backup_name,\n\t has_streaming => 1,\n\t allows_streaming => 1);\n$node_2->append_conf('postgresql.conf', \"archive_mode = always\");\nmy $archdir = $node_2->data_dir . 
\"/../archive\";\n$node_2->append_conf('postgresql.conf', \"archive_command = 'cp %p $archdir/%f'\");\n\nmkdir($archdir);\n\n$node_2->start;\n\n# Create streaming standby linking to primary\nmy $node_3 = get_new_node('node_3');\n$node_3->init_from_backup($primary, $backup_name,\n\thas_streaming => 1);\n$node_3->append_conf('postgresql.conf', \"restore_command = 'cp $archdir/%f %p'\");\n$node_3->start;\n\n$primary->psql('postgres', 'SELECT pg_switch_wal(); CREATE TABLE t(); DROP TABLE t; CHECKPOINT;');\n$primary->wait_for_catchup($node_2, 'write',\n\t\t\t\t\t\t\t\t$primary->lsn('insert'));\n$primary->wait_for_catchup($node_3, 'write',\n\t\t\t\t\t\t\t\t$primary->lsn('insert'));\n$node_3->stop;\n\n# put node3 a bit behind to cause streaming on the old timeline\n$primary->psql('postgres', 'CREATE TABLE t(); DROP TABLE t;');\n$primary->wait_for_catchup($node_2, 'write',\n\t\t\t\t\t\t $primary->lsn('insert'));\n\n$primary->stop;\n\n# promote node2\n$node_2->psql('postgres', 'SELECT pg_promote()');\n\n# optional: archive segment 3 of TLI=2 on node2 and advance one more segment\nif (0)\n{\n\tmy $lastwal = $node_2->safe_psql('postgres', 'select last_archived_wal from pg_stat_archiver');\n\t$node_2->psql('postgres', 'SELECT pg_switch_wal();');\n\t$node_2->psql('postgres', 'CREATE TABLE t(); DROP TABLE t;');\n\t$node_2->poll_query_until('postgres', \"SELECT '$lastwal' <> last_archived_wal from pg_stat_archiver\");\n\n\t$lastwal = $node_2->safe_psql('postgres', 'select last_archived_wal from pg_stat_archiver');\n\t$node_2->psql('postgres', 'SELECT pg_switch_wal();');\n\t$node_2->psql('postgres', 'CREATE TABLE t(); DROP TABLE t;');\n\t$node_2->poll_query_until('postgres', \"SELECT '$lastwal' <> last_archived_wal from pg_stat_archiver\");\n}\n\n# attach node3 as a standby of node2\n$node_3->enable_streaming($node_2);\n$node_3->append_conf('postgresql.conf', \"recovery_target_timeline = 2\");\n\n# *restart# node3, not just reloading to trigger archive 
# recovery\n$node_3->start;\n$node_2->psql('postgres', 'CREATE TABLE t(); DROP TABLE t;');\n$node_2->psql('postgres', 'SELECT pg_switch_wal();');\n\n# XXX: another defect comes out without this X(\n$node_2->psql('postgres', 'CREATE TABLE t(); DROP TABLE t;');\n\n$node_2->wait_for_catchup($node_3, 'write',\n\t\t\t\t\t\t\t\t$node_2->lsn('insert'));\nmy $result = \n $node_2->safe_psql('postgres', 'SELECT pg_current_wal_insert_lsn() = write_lsn FROM pg_stat_replication');\n\nok($result eq 't', 'check');\n\n# set 0 to leave data directories after a successful run\nok(1, 'break');", "msg_date": "Fri, 07 May 2021 11:53:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Fri, May 7, 2021 at 8:23 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 4 May 2021 17:41:06 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> Could you please fix the test script so that it causes your issue\n> correctly? And/or elaborate a bit more?\n>\n> The attached first file is the debugging aid logging. The second is\n> the test script, to be placed in src/test/recovery/t.\n\nI will look into your test case and try to see whether we can\nreproduce the issue. But let me summarise what is the exact issue.\nBasically, the issue is that first in validateRecoveryParameters if\nthe recovery target is the latest then we fetch the latest history\nfile and set the recoveryTargetTLI timeline to the latest available\ntimeline assume it's 2 but we delay updating the expectedTLEs (as per\ncommit ee994272ca50f70b53074f0febaec97e28f83c4e). Now, while reading\nthe checkpoint record if we don't get the required WAL from the\narchive then we try to get from primary, and while getting checkpoint\nfrom primary we use \"ControlFile->checkPointCopy.ThisTimeLineID\"\nsuppose that is older timeline 1. 
Now after reading the checkpoint we\nwill set the expectedTLEs based on the timeline from which we got the\ncheckpoint record.\n\nSee below Logic in WaitForWalToBecomeAvailable\n if (readFile < 0)\n {\n if (!expectedTLEs)\n expectedTLEs = readTimeLineHistory(receiveTLI);\n\nNow, the first problem is we are breaking the sanity of expectedTLEs\nbecause as per the definition it should already start with\nrecoveryTargetTLI but it is starting with the older TLI. Now, in\nrescanLatestTimeLine we are trying to fetch the latest TLI which is\nstill 2, so this logic returns without reinitializing the expectedTLEs\nbecause it assumes that if recoveryTargetTLI is pointing to 2 then\nexpectedTLEs must be correct and need not be changed.\n\nSee below logic:\nrescanLatestTimeLine(void)\n{\n....\nnewtarget = findNewestTimeLine(recoveryTargetTLI);\nif (newtarget == recoveryTargetTLI)\n{\n/* No new timelines found */\nreturn false;\n}\n...\nnewExpectedTLEs = readTimeLineHistory(newtarget);\n...\nexpectedTLEs = newExpectedTLEs;\n\n\nSolution:\n1. Find better way to fix the problem of commit\n(ee994272ca50f70b53074f0febaec97e28f83c4e) which is breaking the\nsanity of expectedTLEs.\n2. Assume, we have to live with fix 1 and we have to initialize\nexpectedTLEs with an older timeline for validating the checkpoint in\nabsence of tl.hostory file (as this commit claims). Then as soon as\nwe read and validate the checkpoint, fix the expectedTLEs and set it\nbased on the history file of recoveryTargetTLI.\n\nDoes this explanation make sense? If not please let me know what part\nis not clear in the explanation so I can point to that code.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 May 2021 11:04:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "Thanks.\n\n\nAt Fri, 7 May 2021 11:04:53 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Fri, May 7, 2021 at 8:23 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 4 May 2021 17:41:06 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > Could you please fix the test script so that it causes your issue\n> > correctly? And/or elaborate a bit more?\n> >\n> > The attached first file is the debugging aid logging. The second is\n> > the test script, to be placed in src/test/recovery/t.\n> \n> I will look into your test case and try to see whether we can\n> reproduce the issue. But let me summarise what is the exact issue.\n> Basically, the issue is that first in validateRecoveryParameters if\n> the recovery target is the latest then we fetch the latest history\n> file and set the recoveryTargetTLI timeline to the latest available\n> timeline assume it's 2 but we delay updating the expectedTLEs (as per\n> commit ee994272ca50f70b53074f0febaec97e28f83c4e). Now, while reading\n\nI think it is right up to here.\n\n> the checkpoint record if we don't get the required WAL from the\n> archive then we try to get from primary, and while getting checkpoint\n> from primary we use \"ControlFile->checkPointCopy.ThisTimeLineID\"\n> suppose that is older timeline 1. Now after reading the checkpoint we\n> will set the expectedTLEs based on the timeline from which we got the\n> checkpoint record.\n\nI doubt this point. ReadCheckpointRecord finally calls\nXLogFileReadAnyTLI and it uses the content of the 00000002.history as\nthe local timeline entry list, since expectedTLEs is NIL and\nrecoveryTargetTLI has been updated to 2 by\nvalidateRecoveryParameters(). But node 3 had only the segment\non TLI=1, so ReadCheckpointRecord() finds the wanted checkpoint record\non TLI=1. 
XLogFileReadAnyTLI() copies the local TLE list based on\nTLI=2 to expectedTLEs just after that because the wanted checkpoint\nrecord was available based on the list.\n\nSo I don't think checkPointCopy.ThisTimeLineID can affect this\nlogic, and I don't think expectedTLEs is left as NIL. It would be\nhelpful if you could show the specific code path that causes that.\n\n> See below Logic in WaitForWalToBecomeAvailable\n> if (readFile < 0)\n> {\n> if (!expectedTLEs)\n> expectedTLEs = readTimeLineHistory(receiveTLI);\n> \n> Now, the first problem is we are breaking the sanity of expectedTLEs\n> because as per the definition it should already start with\n> recoveryTargetTLI but it is starting with the older TLI. Now, in\n\nIf my description above is correct, expectedTLEs has always been\nfilled from TLI=2's history, so readTimeLineHistory is not called there.\n\nAfter that, things work as described in my previous mail. So\nthe following is not an issue if I'm not missing something.\n\n\n> rescanLatestTimeLine we are trying to fetch the latest TLI which is\n> still 2, so this logic returns without reinitializing the expectedTLEs\n> because it assumes that if recoveryTargetTLI is pointing to 2 then\n> expectedTLEs must be correct and need not be changed.\n> \n> See below logic:\n> rescanLatestTimeLine(void)\n> {\n> ....\n> newtarget = findNewestTimeLine(recoveryTargetTLI);\n> if (newtarget == recoveryTargetTLI)\n> {\n> /* No new timelines found */\n> return false;\n> }\n> ...\n> newExpectedTLEs = readTimeLineHistory(newtarget);\n> ...\n> expectedTLEs = newExpectedTLEs;\n> \n> \n> Solution:\n> 1. Find better way to fix the problem of commit\n> (ee994272ca50f70b53074f0febaec97e28f83c4e) which is breaking the\n> sanity of expectedTLEs.\n> 2. Assume, we have to live with fix 1 and we have to initialize\n> expectedTLEs with an older timeline for validating the checkpoint in\n> absence of tl.hostory file (as this commit claims). Then as soon as\n> we read and validate the checkpoint, fix the expectedTLEs and set it\n> based on the history file of recoveryTargetTLI.\n> \n> Does this explanation make sense? If not please let me know what part\n> is not clear in the explanation so I can point to that code.\n\nSo, unfortunately not.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 07 May 2021 18:03:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
Then as soon as\n> we read and validate the checkpoint, fix the expectedTLEs and set it\n> based on the history file of recoveryTargetTLI.\n> \n> Does this explanation make sense? If not please let me know what part\n> is not clear in the explanation so I can point to that code.\n\nSo, unfortunately not.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 07 May 2021 18:03:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": " On Fri, May 7, 2021 at 2:33 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Thanks.\n>\n>\n> At Fri, 7 May 2021 11:04:53 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Fri, May 7, 2021 at 8:23 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Tue, 4 May 2021 17:41:06 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > > Could you please fix the test script so that it causes your issue\n> > > correctly? And/or elaborate a bit more?\n> > >\n> > > The attached first file is the debugging aid logging. The second is\n> > > the test script, to be placed in src/test/recovery/t.\n> >\n> > I will look into your test case and try to see whether we can\n> > reproduce the issue. But let me summarise what is the exact issue.\n> > Basically, the issue is that first in validateRecoveryParameters if\n> > the recovery target is the latest then we fetch the latest history\n> > file and set the recoveryTargetTLI timeline to the latest available\n> > timeline assume it's 2 but we delay updating the expectedTLEs (as per\n> > commit ee994272ca50f70b53074f0febaec97e28f83c4e). 
Now, while reading\n>\n> I think it is right up to here.\n>\n> > the checkpoint record if we don't get the required WAL from the\n> > archive then we try to get from primary, and while getting checkpoint\n> > from primary we use \"ControlFile->checkPointCopy.ThisTimeLineID\"\n> > suppose that is older timeline 1. Now after reading the checkpoint we\n> > will set the expectedTLEs based on the timeline from which we got the\n> > checkpoint record.\n>\n> I doubt this point. ReadCheckpointRecord finally calls\n> XLogFileReadAnyTLI and it uses the content of the 00000002.history as\n> the local timeline entry list, since expectedTLEs is NIL and\n> recoveryTargetTLI has been updated to 2 by\n> validateRecoveryParameters(). But node 3 had only the segment\n> on TLI=1, so ReadCheckPointRecord() finds the wanted checkpoint record\n> on TLI=1. XLogFileReadAnyTLI() copies the local TLE list based on\n> TLI=2 to expectedTLEs just after that because the wanted checkpoint\n> record was available based on the list.\n\nOkay, I got your point. Now, consider the scenario where we are trying\nto get the checkpoint record in XLogFileReadAnyTLI; you are right that\nit returns history file 00000002.history. I think I did not mention\none point: basically, while restarting node 3 after promoting\nnode 2, the tool deletes all the local WAL of node3 (so that node 3 can\nfollow node2). So now node3 doesn't have the checkpoint in the local\nsegment. Suppose the checkpoint record was in segment\n000000010000000000000001, but after the TL switch 000000010000000000000001\nis renamed to 000000010000000000000001.partial on node2, so now\npractically the 000000010000000000000001 file doesn't exist anywhere.\nHowever, if the TL switch happens mid-segment, then we copy that segment\nwith the new TL, so we have 000000020000000000000001, which contains the\ncheckpoint record, but node 2 hasn't yet archived it.\n\nSo now you come out of XLogFileReadAnyTLI, without reading the checkpoint\nrecord and without setting expectedTLEs. 
Because expectedTLEs is only\nset if we are able to read the checkpoint record. Make sense?\n\n> So I don't think checkPointCopy.ThisTimeLineID cannot affect this\n> logic, and don't think expectedTLEs is left with NIL. It's helpful\n> that you could show the specific code path to cause that.\n\nSo now expectedTLEs is still NULL and you go to get the checkpoint\nrecord from primary and use checkPointCopy.ThisTimeLineID.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 May 2021 15:16:03 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Fri, 7 May 2021 15:16:03 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> Okay, I got your point, now, consider the scenario that we are trying\n> to get the checkpoint record in XLogFileReadAnyTLI, you are right that\n> it returns history file 00000002.history. I think I did not mention\n> one point, basically, the tool while restarting node 3 after promoting\n> node 2 is deleting all the local WAL of node3 (so that node 3 can\n> follow node2). So now node3 doesn't have the checkpoint in the local\n> segment. Suppose checkpoint record was in segment\n...\n> So now you come out of XLogFileReadAnyTLI, without reading checkpoint\n> record and without setting expectedTLEs. Because expectedTLEs is only\n> set if we are able to read the checkpoint record. Make sense?\n\nThanks. I understood the case and reproduced. 
Although I don't think\nremoving WAL files from a non-backup cluster is legit, I also think we\ncan safely start archive recovery from a replicated segment.\n\n> So now expectedTLEs is still NULL and you go to get the checkpoint\n> record from primary and use checkPointCopy.ThisTimeLineID.\n\nI don't think erasing expectedTLEs after it is once set is the right fix\nbecause expectedTLEs is supposed to be set just once iff we are sure\nthat we are going to follow the history, until rescan changes it as\nthe only exception.\n\nIt seems to me the issue here is not a race condition but\nWaitForWALToBecomeAvailable initializing expectedTLEs with the history\nof an improper timeline. So using recoveryTargetTLI instead of\nreceiveTLI for the case fixes this issue.\n\n-\t\t\t\t\t\t\tif (!expectedTLEs)\n-\t\t\t\t\t\t\t\texpectedTLEs = readTimeLineHistory(receiveTLI);\n\nI thought that the reason for using receiveTLI instead of\nrecoveryTargetTLI here is that there's a case where receiveTLI is the\nfuture of recoveryTargetTLI, but I haven't successfully had such a\nsituation. If I set recoveryTargetTLI to a TLI that the standby doesn't\nknow but the primary knows, validateRecoveryParameters immediately\ncomplains about that before reaching there. Anyway, the attached\nassumes receiveTLI may be the future of recoveryTargetTLI.\n\nJust inserting if() into the existing code makes the added lines stick\nout of the right side edge of 80 columns, so I refactored there a bit\nto lower indentation.\n\n\nI believe the 004_timeline_switch.pl detects your issue. And the\nattached change fixes it.\n\nAny suggestions are welcome.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 10 May 2021 17:35:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "On Mon, May 10, 2021 at 2:05 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n\n> I thought that the reason using receiveTLI instead of\n> recoveryTargetTLI here is that there's a case where receiveTLI is the\n> future of recoveryTarrgetTLI but I haven't successfully had such a\n> situation. If I set recovoryTargetTLI to a TLI that standby doesn't\n> know but primary knows, validateRecoveryParameters immediately\n> complains about that before reaching there. Anyway the attached\n> assumes receiveTLI may be the future of recoveryTargetTLI.\n\nIf you see the note in this commit. It says without the timeline\nhistory file, so does it trying to say that although receiveTLI is the\nancestor of recovoryTargetTLI, it can not detect that because of the\nabsence of the TL.history file ?\n\nee994272ca50f70b53074f0febaec97e28f83c4e\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi> 2013-01-03 14:11:58\nCommitter: Heikki Linnakangas <heikki.linnakangas@iki.fi> 2013-01-03 14:11:58\n.....\n Without the timeline history file, recovering that file\n will fail as the older timeline ID is not recognized to be an ancestor of\n the target timeline. If you try to recover from such a backup, using only\n streaming replication to fetch the WAL, this patch is required for that to\n work.\n=====\n\n>\n> I believe the 004_timeline_switch.pl detects your issue. And the\n> attached change fixes it.\n\nI think this fix looks better to me, but I will think more about it\nand give my feedback. Thanks for quickly coming up with the\nreproducible test case.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 May 2021 14:27:21 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "At Mon, 10 May 2021 14:27:21 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Mon, May 10, 2021 at 2:05 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> \n> > I thought that the reason using receiveTLI instead of\n> > recoveryTargetTLI here is that there's a case where receiveTLI is the\n> > future of recoveryTarrgetTLI but I haven't successfully had such a\n> > situation. If I set recovoryTargetTLI to a TLI that standby doesn't\n> > know but primary knows, validateRecoveryParameters immediately\n> > complains about that before reaching there. Anyway the attached\n> > assumes receiveTLI may be the future of recoveryTargetTLI.\n> \n> If you see the note in this commit. It says without the timeline\n> history file, so does it trying to say that although receiveTLI is the\n> ancestor of recovoryTargetTLI, it can not detect that because of the\n> absence of the TL.history file ?\n\nYeah, it reads so for me and it works as described. What I don't\nunderstand is that why the patch uses receiveTLI, not\nrecovoryTargetTLI to load timeline hisotry in\nWaitForWALToBecomeAvailable. The only possible reason is that there\ncould be a case where receivedTLI is the future of recoveryTargetTLI.\nHowever, AFAICS it's impossible for that case to happen. At\nreplication start, requsting TLI is that of the last checkpoint, which\nis the same to recoveryTargetTLI, or anywhere in exising expectedTLEs\nwhich must be the past of recoveryTargetTLI. That seems to be already\ntrue at the time replication was made possible to follow a timeline\nswitch (abfd192b1b).\n\nSo I was tempted to just load history for recoveryTargetTLI then\nconfirm that receiveTLI is in the history. Actually that change\ndoesn't harm any of the recovery TAP tests. It is way simpler than\nthe last patch. However, I'm not confident that it is right.. 
;(\n\n> ee994272ca50f70b53074f0febaec97e28f83c4e\n> Author: Heikki Linnakangas <heikki.linnakangas@iki.fi> 2013-01-03 14:11:58\n> Committer: Heikki Linnakangas <heikki.linnakangas@iki.fi> 2013-01-03 14:11:58\n> .....\n> Without the timeline history file, recovering that file\n> will fail as the older timeline ID is not recognized to be an ancestor of\n> the target timeline. If you try to recover from such a backup, using only\n> streaming replication to fetch the WAL, this patch is required for that to\n> work.\n> =====\n> \n> >\n> > I believe the 004_timeline_switch.pl detects your issue. And the\n> > attached change fixes it.\n> \n> I think this fix looks better to me, but I will think more about it\n> and give my feedback. Thanks for quickly coming up with the\n> reproducible test case.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 11 May 2021 17:11:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Tue, May 11, 2021 at 1:42 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 10 May 2021 14:27:21 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Mon, May 10, 2021 at 2:05 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >\n> > > I thought that the reason using receiveTLI instead of\n> > > recoveryTargetTLI here is that there's a case where receiveTLI is the\n> > > future of recoveryTarrgetTLI but I haven't successfully had such a\n> > > situation. If I set recovoryTargetTLI to a TLI that standby doesn't\n> > > know but primary knows, validateRecoveryParameters immediately\n> > > complains about that before reaching there. Anyway the attached\n> > > assumes receiveTLI may be the future of recoveryTargetTLI.\n> >\n> > If you see the note in this commit. 
It says \"without the timeline\n> > history file\", so is it trying to say that although receiveTLI is the\n> > ancestor of recoveryTargetTLI, it cannot detect that because of the\n> > absence of the TL.history file?\n>\n> Yeah, it reads so for me and it works as described. What I don't\n> understand is why the patch uses receiveTLI, not\n> recoveryTargetTLI, to load the timeline history in\n> WaitForWALToBecomeAvailable. The only possible reason is that there\n> could be a case where receiveTLI is the future of recoveryTargetTLI.\n> However, AFAICS it's impossible for that case to happen. At\n> replication start, the requested TLI is that of the last checkpoint, which\n> is the same as recoveryTargetTLI, or anywhere in the existing expectedTLEs,\n> which must be in the past of recoveryTargetTLI. That seems to be already\n> true at the time replication was made possible to follow a timeline\n> switch (abfd192b1b).\n>\n> So I was tempted to just load the history for recoveryTargetTLI and then\n> confirm that receiveTLI is in the history. Actually that change\n> doesn't harm any of the recovery TAP tests. It is way simpler than\n> the last patch. However, I'm not confident that it is right.. ;(\n\nI first thought of fixing it as you describe: instead of loading the\nhistory of receiveTLI, load the history for recoveryTargetTLI. But then,\nthis commit (ee994272ca50f70b53074f0febaec97e28f83c4e) has specifically\nused the history file of receiveTLI to solve a particular issue which\nI did not clearly understand. I am not sure whether it is a good\nidea to directly use recoveryTargetTLI, without exactly\nunderstanding why this commit was using receiveTLI. It doesn't seem\nlike an oversight to me, it seems intentional. 
Maybe Heikki can\ncomment on this?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 May 2021 14:07:19 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Mon, May 10, 2021 at 4:35 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> It seems to me the issue here is not a race condition but\n> WaitForWALToBecomeAvailable initializing expectedTLEs with the history\n> of an improper timeline. So using recoveryTargetTLI instead of\n> receiveTLI for the case fixes this issue.\n\nI agree.\n\n> I believe the 004_timeline_switch.pl detects your issue. And the\n> attached change fixes it.\n\nSo why does this use recoveryTargetTLI instead of receiveTLI only\nconditionally? Why not do it all the time?\n\nThe hard thing about this code is that the assumptions are not very\nclear. If we don't know why something is a certain way, then we might\nbreak things if we change it. Worse yet, if nobody else knows why it's\nlike that either, then who knows what assumptions they might be\nmaking? It's hard to be sure that any change is safe.\n\nBut that being said, we have a clear definition from the comments for\nwhat expectedTLEs is supposed to contain, and it's only going to end\nup with those contents if we initialize it from recoveryTargetTLI. So\nI am inclined to think that we ought to do that always, and if it\nbreaks something, then that's a sign that some other part of the code\nalso needs fixing, because apparently that hypothetical other part of\nthe code doesn't work if expectedTLEs contains what the comments say\nthat it should.\n\nNow maybe that's the wrong idea. But if so, then we're saying that the\ndefinition of expectedTLEs needs to be changed, and we should update\nthe comments with the new definition, whatever it is. 
A lot of the\nconfusion here results from the fact that the code and comments are\ninconsistent and we can't tell whether that's intentional or\ninadvertent. Let's not leave the next person who looks at this code\nwondering the same thing about whatever changes we make.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 May 2021 17:07:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Fri, May 14, 2021 at 2:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> So why does this use recoveryTargetTLI instead of receiveTLI only\n> conditionally? Why not do it all the time?\n>\n> The hard thing about this code is that the assumptions are not very\n> clear. If we don't know why something is a certain way, then we might\n> break things if we change it. Worse yet, if nobody else knows why it's\n> like that either, then who knows what assumptions they might be\n> making? It's hard to be sure that any change is safe.\n>\n> But that being said, we have a clear definition from the comments for\n> what expectedTLEs is supposed to contain, and it's only going to end\n> up with those contents if we initialize it from recoveryTargetTLI. So\n> I am inclined to think that we ought to do that always, and if it\n> breaks something, then that's a sign that some other part of the code\n> also needs fixing, because apparently that hypothetical other part of\n> the code doesn't work if expctedTLEs contains what the comments say\n> that it should.\n>\n> Now maybe that's the wrong idea. But if so, then we're saying that the\n> definition of expectedTLEs needs to be changed, and we should update\n> the comments with the new definition, whatever it is. A lot of the\n> confusion here results from the fact that the code and comments are\n> inconsistent and we can't tell whether that's intentional or\n> inadvertent. 
Let's not leave the next person who looks at this code\n> wondering the same thing about whatever changes we make.\n\nI am not sure whether you have noticed the commit id which changed the\ndefinition of expectedTLEs; Heikki committed that change, so I am adding\nhim to the list to get his opinion.\n\n=====\nee994272ca50f70b53074f0febaec97e28f83c4e\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi> 2013-01-03 14:11:58\nCommitter: Heikki Linnakangas <heikki.linnakangas@iki.fi> 2013-01-03 14:11:58\n\n Delay reading timeline history file until it's fetched from master.\n .....\n Without the timeline history file, recovering that file\n will fail as the older timeline ID is not recognized to be an ancestor of\n the target timeline. If you try to recover from such a backup, using only\n streaming replication to fetch the WAL, this patch is required for that to\n work.\n=====\n\nPart of this commit message says that it will not identify the\nrecoveryTargetTLI as the ancestor of the checkpoint timeline (without\nthe history file). I did not understand what it is trying to say. Is\nit trying to say that even though the recoveryTargetTLI is the\nancestor of the checkpoint timeline, we cannot track that because\nwe don't have a history file? 
So to handle this problem change the\ndefinition of expectedTLEs to directly point to the checkpoint\ntimeline?\n\nBecause before this commit, we were directly initializing expectedTLEs\nwith the history file of recoveryTargetTLI, we were not even waiting\nfor reading the checkpoint, but under this commit, it is changed.\n\nI am referring to the below code which was deleted by this commit:\n\n========\n@@ -5279,49 +5299,6 @@ StartupXLOG(void)\n */\n readRecoveryCommandFile();\n\n- /* Now we can determine the list of expected TLIs */\n- expectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n-\n- /*\n- * If the location of the checkpoint record is not on the expected\n- * timeline in the history of the requested timeline, we cannot proceed:\n- * the backup is not part of the history of the requested timeline.\n- */\n- if (tliOfPointInHistory(ControlFile->checkPoint, expectedTLEs) !=\n- ControlFile->checkPointCopy.ThisTimeLineID)\n- {\n=========\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 10:29:07 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Thu, 13 May 2021 17:07:31 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Mon, May 10, 2021 at 4:35 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > It seems to me the issue here is not a race condition but\n> > WaitForWALToBecomeAvailable initializing expectedTLEs with the history\n> > of a improper timeline. So using recoveryTargetTLI instead of\n> > receiveTLI for the case fixes this issue.\n> \n> I agree.\n> \n> > I believe the 004_timeline_switch.pl detects your issue. And the\n> > attached change fixes it.\n> \n> So why does this use recoveryTargetTLI instead of receiveTLI only\n> conditionally? 
Why not do it all the time?\n\nThe commit ee994272ca apparently says that there's a case where primary \n\n> The hard thing about this code is that the assumptions are not very\n> clear. If we don't know why something is a certain way, then we might\n> break things if we change it. Worse yet, if nobody else knows why it's\n> like that either, then who knows what assumptions they might be\n> making? It's hard to be sure that any change is safe.\n\nThanks for the comment.\n\n> But that being said, we have a clear definition from the comments for\n> what expectedTLEs is supposed to contain, and it's only going to end\n> up with those contents if we initialize it from recoveryTargetTLI. So\n> I am inclined to think that we ought to do that always, and if it\n\nYes, I also found it after that, and agreed. Desynchronization\nbetween recoveryTargetTLI and expectedTLEs prevents\nrescanLatestTimeline from working.\n\n> breaks something, then that's a sign that some other part of the code\n> also needs fixing, because apparently that hypothetical other part of\n> the code doesn't work if expctedTLEs contains what the comments say\n> that it should.\n\nAfter some more inspection, I'm now also sure that it is a\ntypo/thinko. Other than while fetching the first checkpoint,\nreceivedTLI is always in the history of recoveryTargetTLI, otherwise\nrecoveryTargetTLI is equal to receiveTLI.\n\nI read that the commit message as \"waiting for fetching possible\nfuture history files to know if there's the future for the current\ntimeline. However now I read it as \"don't bother expecting for\npossiblly-unavailable history files when we're reading the first\ncheckpoint the timeline for which is already known to us.\". If it is\ncorrect we don't bother considering future history files coming from\nprimary there.\n\n> Now maybe that's the wrong idea. 
But if so, then we're saying that the\n> definition of expectedTLEs needs to be changed, and we should update\n> the comments with the new definition, whatever it is. A lot of the\n> confusion here results from the fact that the code and comments are\n> inconsistent and we can't tell whether that's intentional or\n> inadvertent. Let's not leave the next person who looks at this code\n> wondering the same thing about whatever changes we make.\n\nOk. The reason why we haven't had a complaint about that would be\nthat it is rare that pg_wal is wiped out before a standby connects to\na just-promoted primary. I'm not sure about the tool Dilip is using,\nthough..\n\nSo the result is the attached. This would be back-patchable to 9.3\n(or 9.6?) but I doubt that we should, as we don't seem to have had a\ncomplaint on this issue and we don't have full faith in this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 14 May 2021 14:12:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
Why not do it all the time?\n> \n> The commit ee994272ca apparently says that there's a case where primary \n\nThis is not an incomplete line but just a garbage.\n\n> > The hard thing about this code is that the assumptions are not very\n> > clear. If we don't know why something is a certain way, then we might\n> > break things if we change it. Worse yet, if nobody else knows why it's\n> > like that either, then who knows what assumptions they might be\n> > making? It's hard to be sure that any change is safe.\n> \n> Thanks for the comment.\n> \n> > But that being said, we have a clear definition from the comments for\n> > what expectedTLEs is supposed to contain, and it's only going to end\n> > up with those contents if we initialize it from recoveryTargetTLI. So\n> > I am inclined to think that we ought to do that always, and if it\n> \n> Yes, I also found it after that, and agreed. Desynchronization\n> between recoveryTargetTLI and expectedTLEs prevents\n> rescanLatestTimeline from working.\n> \n> > breaks something, then that's a sign that some other part of the code\n> > also needs fixing, because apparently that hypothetical other part of\n> > the code doesn't work if expctedTLEs contains what the comments say\n> > that it should.\n> \n> After some more inspection, I'm now also sure that it is a\n> typo/thinko. Other than while fetching the first checkpoint,\n> receivedTLI is always in the history of recoveryTargetTLI, otherwise\n> recoveryTargetTLI is equal to receiveTLI.\n> \n> I read that the commit message as \"waiting for fetching possible\n> future history files to know if there's the future for the current\n> timeline. However now I read it as \"don't bother expecting for\n> possiblly-unavailable history files when we're reading the first\n> checkpoint the timeline for which is already known to us.\". If it is\n> correct we don't bother considering future history files coming from\n> primary there.\n> \n> > Now maybe that's the wrong idea. 
But if so, then we're saying that the\n> > definition of expectedTLEs needs to be changed, and we should update\n> > the comments with the new definition, whatever it is. A lot of the\n> > confusion here results from the fact that the code and comments are\n> > inconsistent and we can't tell whether that's intentional or\n> > inadvertent. Let's not leave the next person who looks at this code\n> > wondering the same thing about whatever changes we make.\n> \n> Ok. The reason why we haven't have a complain about that would be\n> that it is rare that pg_wal is wiped out before a standby connects to\n> a just-promoted primary. I'm not sure about the tool Dilip is using,\n> though..\n> \n> So the result is the attached. This would be back-patcheable to 9.3\n> (or 9.6?) but I doubt that we should do as we don't seem to have had a\n> complaint on this issue and we're not full faith on this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 14 May 2021 14:24:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Fri, May 14, 2021 at 12:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I am not sure that have you noticed the commit id which changed the\n> definition of expectedTLEs, Heikki has committed that change so adding\n> him in the list to know his opinion.\n\nI did notice, but keep in mind that this was more than 8 years ago.\nEven if Heikki is reading this thread, he may not remember why he\nchanged 1 line of code one way rather than another in 2013. 
I mean if\nhe does that's great, but it's asking a lot.\n\n> =====\n> ee994272ca50f70b53074f0febaec97e28f83c4e\n> Author: Heikki Linnakangas <heikki.linnakangas@iki.fi> 2013-01-03 14:11:58\n> Committer: Heikki Linnakangas <heikki.linnakangas@iki.fi> 2013-01-03 14:11:58\n>\n> Delay reading timeline history file until it's fetched from master.\n> .....\n> Without the timeline history file, recovering that file\n> will fail as the older timeline ID is not recognized to be an ancestor of\n> the target timeline. If you try to recover from such a backup, using only\n> streaming replication to fetch the WAL, this patch is required for that to\n> work.\n> =====\n>\n> Part of this commit message says that it will not identify the\n> recoveryTargetTLI as the ancestor of the checkpoint timeline (without\n> history file). I did not understand what it is trying to say. Does\n> it is trying to say that even though the recoveryTargetTLI is the\n> ancestor of the checkpoint timeline but we can not track that because\n> we don't have a history file? So to handle this problem change the\n> definition of expectedTLEs to directly point to the checkpoint\n> timeline?\n>\n> Because before this commit, we were directly initializing expectedTLEs\n> with the history file of recoveryTargetTLI, we were not even waiting\n> for reading the checkpoint, but under this commit, it is changed.\n\nWell, I think that is talking about what the commit did in general,\nnot specifically the one line of code that we think may be incorrect.\nAs I understand it, the general issue here was that if\nXLogFileReadAnyTLI() was called before expectedTLEs got set, then\nprior to this commit it would have to fail, because the foreach() loop\nin that function would be iterating over an empty list. 
Heikki tried\nto make it not fail in that case, by setting tles =\nreadTimeLineHistory(recoveryTargetTLI), so that the foreach loop\n*wouldn't* get an empty list.\n\nThinking about this a bit more, I think the idea behind the logic this\ncommit added to XLogFileReadAnyTLI() is that\nXLogFileReadAnyTLI(recoveryTargetTLI) may or may not produce the\ncorrect answer. If the timeline history file exists, it will contain\nall the information that we need and will return a complete list of\nTLEs. But if the file does not exist yet, then it will return a\n1-entry list saying that the TLI in question has no parents. If\nreadTimeLineHistory() actually reads the file, then it's safe to save\nthe return value in expectedTLEs, but if it doesn't, then it may or\nmay not be safe. If XLogFileReadAnyTLI calls XLogFileRead and it\nworks, then the WAL segment we need exists on our target timeline and\nwe don't actually need the timeline history for anything because we\ncan just directly begin replay from the target timeline. But if\nXLogFileRead fails with the 1-entry dummy list, then we need the\ntimeline history and don't have it yet, so we have to retry later,\nwhen the history file will hopefully be present, and then at that\npoint readTimeLineHistory will return a different and better answer\nand hopefully it will all work.\n\nI think this is what the commit message is talking about when it says\nthat \"Without the timeline history file, recovering that file will\nfail as the older timeline ID is not recognized to be an ancestor of\nthe target timeline.\" Without the timeline history file, we can't know\nwhether some other timeline is an ancestor or not. 
But the specific\nway that manifests is that XLogFileReadAnyTLI() returns a 1-entry\ndummy list instead of the real contents of the timeline history file.\nThis commit doesn't prevent that from happening, but it does prevent\nthe 1-entry dummy list from getting stored in the global variable\nexpectedTLEs, except in the case where no timeline switch is occurring\nand the lack of history therefore doesn't matter. Without this commit,\nif the call to readTimeLineHistory(recoveryTargetTLI) happens at a\ntime when the timeline history file is not yet available, the 1-entry\ndummy list ends up in the global variable and there's no way for it to\never be replaced with a real history even if the timeline history file\nshows up in the archive later.\n\nAs I see it, the question on the table here is whether there's any\njustification for the fact that when the second switch in\nWaitForWALToBecomeAvailable takes the\nXLOG_FROM_ARCHIVE/XLOG_FROM_PG_WAL branch, it calls XLogFileReadAnyTLI\nwhich tries to read the history of recoveryTargetTLI, while when that\nsame switch takes the XLOG_FROM_STREAM branch, it tries to read the\nhistory of receiveTLI. I tend to think it doesn't make sense. On\ngeneral principle, archiving and streaming are supposed to work the\nsame way, so the idea that they are getting the timeline from\ndifferent places is inherently suspect. But also and more\nspecifically, AFAICS receiveTLI always has to be the same TLI that we\nrequested from the server, so we're always looking up our own current\nTLI here rather than the target TLI, which seems wrong to me, at least\nof this moment. :-)\n\nBut that having been said, I still don't quite understand the\nconditions required to tickle this problem. I spent a long time poking\nat it today. It seems to me that it ought somehow to be possible to\nrecreate the scenario without trying to reuse the old master as a\nstandby, and also without even needing a WAL archive, but I couldn't\nfigure out how to do it. 
I tried setting up a primary and a standby,\nand then making a backup from the standby, promoting it, and then\nstarting what would have been a cascading standby from the backup. But\nthat doesn't do it. The first mistake I made was creating the standbys\nwith something like 'pg_basebackup -R', but that's not good enough\nbecause then they have WAL, so I added '-Xnone'. But then I realized\nthat when a base backup ends, the primary writes an XLOG_SWITCH\nrecord, which means that when the standby is promoted, the promotion\nis not in the same WAL segment as the checkpoint from which the\nmachine that would have been a cascading standby is trying to start. I\nworked around that by setting recovery_target='immediate' on the\nstandby. With that change, I get a WAL file on the new timeline - 2 in\nthis case - that looks like this:\n\nrmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn:\n0/19000060, prev 0/19000028, desc: CHECKPOINT_ONLINE redo 0/19000028;\ntli 1; prev tli 1; fpw true; xid 0:587; oid 16385; multi 1; offset 0;\noldest xid 579 in DB 1; oldest multi 1 in DB 1; oldest/newest commit\ntimestamp xid: 0/0; oldest running xid 587; online\nrmgr: XLOG len (rec/tot): 34/ 34, tx: 0, lsn:\n0/190000D8, prev 0/19000060, desc: BACKUP_END 0/19000028\nrmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn:\n0/19000100, prev 0/190000D8, desc: CHECKPOINT_SHUTDOWN redo\n0/19000100; tli 2; prev tli 1; fpw true; xid 0:587; oid 16385; multi\n1; offset 0; oldest xid 579 in DB 1; oldest multi 1 in DB 1;\noldest/newest commit timestamp xid: 0/0; oldest running xid 0;\nshutdown\n\nThat sure looks like the right thing to recreate the problem, because\nthe first checkpoint is from the backup, and the\nwoulda-been-a-cascading-standby should be starting there, and the\nsecond checkpoint is in the same segment and shows a timeline switch.\nBut everything worked great:\n\n2021-05-14 17:44:58.684 EDT [5697] DETAIL: End of WAL reached on\ntimeline 1 at 0/19000100.\n2021-05-14 17:44:58.728 EDT [5694] 
LOG: new target timeline is 2\n2021-05-14 17:44:58.746 EDT [5694] LOG: redo starts at 0/19000028\n2021-05-14 17:44:58.749 EDT [5694] LOG: consistent recovery state\nreached at 0/19000100\n\nI don't understand why that works. It feels to me like it ought to run\nsmack into the same problem you saw, but it doesn't.\n\n> I am referring to the below code which was deleted by this commit:\n>\n> ========\n> @@ -5279,49 +5299,6 @@ StartupXLOG(void)\n> */\n> readRecoveryCommandFile();\n>\n> - /* Now we can determine the list of expected TLIs */\n> - expectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n> -\n> - /*\n> - * If the location of the checkpoint record is not on the expected\n> - * timeline in the history of the requested timeline, we cannot proceed:\n> - * the backup is not part of the history of the requested timeline.\n> - */\n> - if (tliOfPointInHistory(ControlFile->checkPoint, expectedTLEs) !=\n> - ControlFile->checkPointCopy.ThisTimeLineID)\n> - {\n> =========\n\nI don't think this code is really deleted. The tliOfPointInHistory\ncheck was just moved later in the function. And expectedTLEs is still\nsupposed to be getting initialized, because just before the new\nlocation of the tliOfPointInHistory check, Heikki added\nAssert(expectedTLEs).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 May 2021 18:28:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Sat, May 15, 2021 at 3:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I did notice, but keep in mind that this was more than 8 years ago.\n> Even if Heikki is reading this thread, he may not remember why he\n> changed 1 line of code one way rather than another in 2013. I mean if\n> he does that's great, but it's asking a lot.\n\nI agree with your point. 
But I think that one line is related to the\npurpose of this commit and I have explained (in 3rd paragraph below)\nwhy I think so.\n\n> As I understand it, the general issue here was that if\n> XLogFileReadAnyTLI() was called before expectedTLEs got set, then\n> prior to this commit it would have to fail, because the foreach() loop\n> in that function would be iterating over an empty list. Heikki tried\n> to make it not fail in that case, by setting tles =\n> readTimeLineHistory(recoveryTargetTLI), so that the foreach loop\n> *wouldn't* get an empty list.\n\nI might be missing something but I don't agree with this logic. If\nyou see, prior to this commit the code flow was like below [1]. So my\npoint is, if we are calling XLogFileReadAnyTLI() somewhere while\nreading the checkpoint record, then note that expectedTLEs was\ninitialized unconditionally before we even try to read that checkpoint\nrecord. So how could expectedTLEs be uninitialized in\nXLogFileReadAnyTLI?\n\n[1]\nStartupXLOG(void)\n{\n....\n\nrecoveryTargetTLI = ControlFile->checkPointCopy.ThisTimeLineID;\n...\nreadRecoveryCommandFile();\n...\nexpectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n...\n..\nrecord = ReadCheckpointRecord(checkPointLoc, 0);\n\n\nAnother point, which I am not sure about, but still I think that one\nline (expectedTLEs = readTimeLineHistory(receiveTLI);) is somehow\nrelated to the purpose of this commit. Let me explain why I think\nso. Basically, before this commit, we were initializing\n\"expectedTLEs\" based on the history file of \"recoveryTargetTLI\", right\nafter calling \"readRecoveryCommandFile()\" (this function will\ninitialize recoveryTargetTLI based on the recovery target, and it\nensures it reads the respective history file). Now, right after this\npoint, there was a check as shown below [2], which is checking whether\nthe checkpoint TLI exists in the \"expectedTLEs\" which is initialized\nbased on recoveryTargetTLI. 
And it appeared that this check was\nfailing in some cases which this commit tried to fix, and all other\ncode is there to support that. Because now, before going to read\nthe checkpoint, we are not initializing \"expectedTLEs\", so after\nmoving this line from here it was possible that \"expectedTLEs\" is not\ninitialized in XLogFileReadAnyTLI(), and the remaining code in\nXLogFileReadAnyTLI() is to handle that part.\n\nNow, coming to my point about why this one line is related: in this\none line (expectedTLEs = readTimeLineHistory(receiveTLI);), we are\ncompletely avoiding recoveryTargetTLI and initializing \"expectedTLEs\"\nbased on the history file of the TL from which we read the checkpoint,\nso now, there is no scope for the below [2] check to fail, because note that\nwe are not initializing \"expectedTLEs\" based on the\n\"recoveryTargetTLI\" but we are initializing from the history from\nwhere we read the checkpoint.\n\nSo I feel if we directly fix this one line to initialize\n\"expectedTLEs\" from \"recoveryTargetTLI\" then it will be exposed to the\nsame problem this commit tried to fix.\n\n[2]\nif (tliOfPointInHistory(ControlFile->checkPoint, expectedTLEs) !=\nControlFile->checkPointCopy.ThisTimeLineID)\n{\nerror()\n}\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 15 May 2021 10:55:05 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Sat, 15 May 2021 10:55:05 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Sat, May 15, 2021 at 3:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > I did notice, but keep in mind that this was more than 8 years ago.\n> > Even if Heikki is reading this thread, he may not remember why he\n> > changed 1 line of code one way rather than another in 2013. I mean if\n> > he does that's great, but it's asking a lot.\n> \n> I agree with your point. 
But I think that one line is related to the\n> purpose of this commit and I have explained (in 3rd paragraph below)\n> why do I think so.\n> \n> As I understand it, the general issue here was that if\n> > XLogFileReadAnyTLI() was called before expectedTLEs got set, then\n> > prior to this commit it would have to fail, because the foreach() loop\n> > in that function would be iterating over an empty list. Heikki tried\n> > to make it not fail in that case, by setting tles =\n> > readTimeLineHistory(recoveryTargetTLI), so that the foreach loop\n> > *wouldn't* get an empty list.\n> \n> I might be missing something but I don't agree with this logic. If\n> you see prior to this commit the code flow was like below[1]. So my\n> point is if we are calling XLogFileReadAnyTLI() somewhere while\n> reading the checkpoint record then note that expectedTLEs were\n> initialized unconditionally before even try to read that checkpoint\n> record. So how expectedTLEs could be uninitialized in\n> LogFileReadAnyTLI?\n\nMmm. I think both of you are right. Before the commit,\nXLogFileReadAnyTLI expected that expectedTLEs is initialized. After\nthe commit it can no longer expect that, so readTimeLineHistory was\nchanged to try fetching it by itself. *If* an appropriate history file\nis found, it *initializes* expectedTLEs with the content.\n\n> [1]\n> StartupXLOG(void)\n> {\n> ....\n> \n> recoveryTargetTLI = ControlFile->checkPointCopy.ThisTimeLineID;\n> ...\n> readRecoveryCommandFile();\n> ...\n> expectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n> ...\n> ..\n> record = ReadCheckpointRecord(checkPointLoc, 0);\n> \n> \n> Another point which I am not sure about but still I think that one\n> line (expectedTLEs = readTimeLineHistory(receiveTLI);), somewhere\n> related to the purpose of this commit. Let me explain why do I think\n> so. 
Basically, before this commit, we were initializing\n> \"expectedTLEs\" based on the history file of \"recoveryTargetTLI\", right\n> after calling \"readRecoveryCommandFile()\" (this function will\n> initialize recoveryTargetTLI based on the recovery target, and it\n> ensures it read the respective history file). Now, right after this\n> point, there was a check as shown below[2], which is checking whether\n> the checkpoint TLI exists in the \"expectedTLEs\" which is initialized\n> based on recoveryTargetTLI. And it appeared that this check was\n> failing in some cases which this commit tried to fix and all other\n> code is there to support that. Because now before going for reading\n> the checkpoint we are not initializing \"expectedTLEs\" so now after\n> moving this line from here it was possible that \"expectedTLEs\" is not\n> initialized in XLogFileReadAnyTLI() and the remaining code in\n> XLogFileReadAnyTLI() is to handle that part.\n\nBefore the commit, expectedTLEs is always initialized with just one\nentry for the TLI of the last checkpoint record.\n\n(1) If XLogFileReadAnyTLI() found the segment but no history file was\nfound, that is, using the dummy TLE-list, expectedTLEs is initialized\nwith the dummy one-entry list. So there's no behavioral change in this\naspect.\n\n(2) If we didn't find the segment for the checkpoint record, it starts\nreplication and fetches history files and WAL records then revisits\nXLogFileReadAnyTLI. Now we have both the history file and segments,\nand it successfully reads the record. The difference of expectedTLEs made\nby the patch is having just one entry or all the entries for the past.\n\nAssuming that we keep expectedTLEs synced with recoveryTargetTLI,\nrescanLatestTimeLine updates the list properly so no need to worry\nabout the future. So the issue would be in the past timelines. 
After\nreading the checkpoint record, if we need to rewind to the previous\ntimeline for the REDO point, the dummy list is inconvenient.\n\nSo there is a possibility that the patch fixed the case (2), where the\nstandby has neither the segment for the checkpoint record nor\nthe history file for the checkpoint, and the REDO point is on the last\nTLI. If it is correct, the patch still fails for the case (1), that\nis, the issue raised here. 
}, { "msg_contents": "At Mon, 17 May 2021 12:20:12 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Assuming that we keep expectedTLEs synced with recoveryTargetTLI,\n> rescanLatestTimeLine updates the list properly so no need to worry\n> about the future. So the issue would be in the past timelines. After\n> reading the checkpoint record, if we need to rewind to the previous\n> timeline for the REDO point, the dummy list is inconvenient.\n\nBy the way, I tried reproducing this situation, but ended in finding\nit a kind of impossible because pg_basebackup (or pg_stop_backup())\nwaits for the promotion checkpoint to end.\n\nIf we make a backup in a somewhat broken steps, that could be done but\nI didn't try.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 17 May 2021 13:01:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Mon, 17 May 2021 13:01:04 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 17 May 2021 12:20:12 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Assuming that we keep expectedTLEs synced with recoveryTargetTLI,\n> > rescanLatestTimeLine updates the list properly so no need to worry\n> > about the future. So the issue would be in the past timelines. After\n> > reading the checkpoint record, if we need to rewind to the previous\n> > timeline for the REDO point, the dummy list is inconvenient.\n> \n> By the way, I tried reproducing this situation, but ended in finding\n> it a kind of impossible because pg_basebackup (or pg_stop_backup())\n> waits for the promotion checkpoint to end.\n\nMmm. That's wrong. What the tool waits is not a promotion checkpoint,\nbut a backup-checkpoint, maybe. 
(I don't remember cleary about that,\nsorry.)\n\n> If we make a backup in a somewhat broken steps, that could be done but\n> I didn't try.\n\nSo there might still be a way to reproduce that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 17 May 2021 13:05:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Mon, May 17, 2021 at 8:50 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Before the commit expectedTLEs is always initialized with just one\n> entry for the TLI of the last checkpoint record.\n\nRight\n\n> (1) If XLogFileReadAnyTLI() found the segment but no history file\n> found, that is, using the dummy TLE-list, expectedTLEs is initialized\n> with the dummy one-entry list. So there's no behavioral change in this\n> aspect.\n\nYeah, you are right.\n\n> (2) If we didn't find the segment for the checkpoint record, it starts\n> replication and fetches history files and WAL records then revisits\n> XLogFileReadAnyTLI. Now we have both the history file and segments,\n> it successfully reads the recood. The difference of expectedTLEs made\n> by the patch is having just one entry or the all entries for the past.\n\nCorrect.\n\n> Assuming that we keep expectedTLEs synced with recoveryTargetTLI,\n> rescanLatestTimeLine updates the list properly so no need to worry\n> about the future. So the issue would be in the past timelines. After\n> reading the checkpoint record, if we need to rewind to the previous\n> timeline for the REDO point, the dummy list is inconvenient.\n>\n> So there is a possibility that the patch fixed the case (2), where the\n> standby doesn't have both the segment for the checkpoint record and\n> the history file for the checkpoint, and the REDO point is on the last\n> TLI. If it is correct, the patch still fails for the case (1), that\n> is, the issue raised here. 
Anyway it would be useless (and rahter\n> harmful) to initialize expectedTLEs based on receiveTLI there.\n>\n> So my resul there is:\n>\n> The commit fixed the case (2)\n\nYes, by maintaining the entire history instead of one entry if history\nwas missing.\n\n> The fix caused the issue for the case (1).\n\nBasically, before this commit expectedTLEs and recoveryTargetTLI were\nin always in sync which this patch broke.\n\n> The proposed fix fixes the case (1), caused by the commit.\n\nRight, I agree with the fix. So fix should be just to change that one\nline and initialize expectedTLEs from recoveryTargetTLI\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 May 2021 10:09:50 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Mon, May 17, 2021 at 10:09 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, May 17, 2021 at 8:50 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > Before the commit expectedTLEs is always initialized with just one\n> > entry for the TLI of the last checkpoint record.\n>\n> Right\n>\n> > (1) If XLogFileReadAnyTLI() found the segment but no history file\n> > found, that is, using the dummy TLE-list, expectedTLEs is initialized\n> > with the dummy one-entry list. So there's no behavioral change in this\n> > aspect.\n>\n> Yeah, you are right.\n\nBut do you agree that one line entry will always be a checkpoint\ntimeline entry? Because if you notice below code[1] in function\n\"readRecoveryCommandFile();\", then you will realize that once we come\nout of this function either the \"recoveryTargetTLI\" is checkpoint TL\nwherever it was before calling this function or we must have the\nhistory file. 
That means after exiting this function if we execute\nthis line (expectedTLEs = readTimeLineHistory(recoveryTargetTLI);)\nthat means either \"expectedTLEs\" could point to one dummy entry which\nwill be nothing but the checkpoint TL entry or it will be holding\ncomplete history.\n\nThe patch is trying to say that without the history file the\ncheckpoint TL will not be found in \"expectedTLEs\" because the older TL\n(checkpoint TL) is not the ancestor of the target\ntimeline(recoveryTargetTLI). But ideally, either the target timeline\nshould be the same as the checkpoint timeline or we must have the\nhistory file as I stated in the above paragraph. Am I missing\nsomething?\n\n[1]\nif (rtli)\n{\n /* Timeline 1 does not have a history file, all else should */\n if (rtli != 1 && !existsTimeLineHistory(rtli))\n ereport(FATAL,\n (errmsg(\"recovery target timeline %u does not exist\",\n rtli)));\n recoveryTargetTLI = rtli;\n recoveryTargetIsLatest = false;\n}\nelse\n{\n /* We start the \"latest\" search from pg_control's timeline */\n recoveryTargetTLI = findNewestTimeLine(recoveryTargetTLI);\n recoveryTargetIsLatest = true;\n}\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 May 2021 10:46:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Sat, May 15, 2021 at 1:25 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > As I understand it, the general issue here was that if\n> > XLogFileReadAnyTLI() was called before expectedTLEs got set, then\n> > prior to this commit it would have to fail, because the foreach() loop\n> > in that function would be iterating over an empty list. Heikki tried\n> > to make it not fail in that case, by setting tles =\n> > readTimeLineHistory(recoveryTargetTLI), so that the foreach loop\n> > *wouldn't* get an empty list.\n>\n> I might be missing something but I don't agree with this logic. 
If\n> you see prior to this commit the code flow was like below[1]. So my\n> point is if we are calling XLogFileReadAnyTLI() somewhere while\n> reading the checkpoint record then note that expectedTLEs were\n> initialized unconditionally before even try to read that checkpoint\n> record. So how expectedTLEs could be uninitialized in\n> LogFileReadAnyTLI?\n\nSorry, you're right. It couldn't be uninitialized, but it could be a\nfake 1-element list saying there are no ancestors rather than the real\nvalue. So I think the point was to avoid that.\n\n> Another point which I am not sure about but still I think that one\n> line (expectedTLEs = readTimeLineHistory(receiveTLI);), somewhere\n> related to the purpose of this commit. Let me explain why do I think\n> so. Basically, before this commit, we were initializing\n> \"expectedTLEs\" based on the history file of \"recoveryTargetTLI\", right\n> after calling \"readRecoveryCommandFile()\" (this function will\n> initialize recoveryTargetTLI based on the recovery target, and it\n> ensures it read the respective history file). Now, right after this\n> point, there was a check as shown below[2], which is checking whether\n> the checkpoint TLI exists in the \"expectedTLEs\" which is initialized\n> based on recoveryTargetTLI. And it appeared that this check was\n> failing in some cases which this commit tried to fix and all other\n> code is there to support that. Because now before going for reading\n> the checkpoint we are not initializing \"expectedTLEs\" so now after\n> moving this line from here it was possible that \"expectedTLEs\" is not\n> initialized in XLogFileReadAnyTLI() and the remaining code in\n> XLogFileReadAnyTLI() is to handle that part.\n\nI think the issue here is: If expectedTLEs was initialized before the\nhistory file was available, and contained a dummy 1-element list, then\ntliOfPointInHistory() is going to say that every LSN is on that\ntimeline rather than any previous timeline. 
And if we are supposed to\nbe switching timelines then that will lead to this sanity check\nfailing.\n\n> Now, coming to my point that why this one line is related, In this\n> one line (expectedTLEs = readTimeLineHistory(receiveTLI);), we\n> completely avoiding recoveryTargetTLI and initializing \"expectedTLEs\"\n> based on the history file of the TL from which we read the checkpoint,\n> so now, there is no scope of below[2] check to fail because note that\n> we are not initializing \"expectedTLEs\" based on the\n> \"recoveryTargetTLI\" but we are initializing from the history from\n> where we read checkpoint.\n\nI agree, but that's actually bad, isn't it? I mean if we want the\nsanity check to never fail we can just take it out. What we want to\nhappen is that the sanity check should pass if the TLI of the\nstartup checkpoint is in the history of the recovery\ntarget timeline, but fail if it isn't. The only way to achieve that\nbehavior is if expectedTLEs is initialized from the recovery target\ntimeline.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 May 2021 15:58:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Tue, May 18, 2021 at 1:28 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Sorry, you're right. It couldn't be uninitialized, but it could be a\n> fake 1-element list saying there are no ancestors rather than the real\n> value. So I think the point was to avoid that.\n\nYeah, it will be a fake 1-element list. But just to be clear, that\none element can only be \"ControlFile->checkPointCopy.ThisTimeLineID\" and\nnothing else; do you agree with this? 
Because we initialize\nrecoveryTargetTLI to this value and we might change it in\nreadRecoveryCommandFile(), but for that, we must get the history file,\nso if we are talking about the case where we don't have the history\nfile then \"recoveryTargetTLI\" will still be\n\"ControlFile->checkPointCopy.ThisTimeLineID\".\n\n>\n> I think the issue here is: If expectedTLEs was initialized before the\n> history file was available, and contained a dummy 1-element list, then\n> tliOfPointInHistory() is going to say that every LSN is on that\n> timeline rather than any previous timeline. And if we are supposed to\n> be switching timelines then that will lead to this sanity check\n> failing.\n\nYou are talking about the sanity check of validating the timeline of\nthe checkpoint record, right? But as I mentioned earlier the only\nentry in expectedTLEs will be the TLE of the checkpoint record, so how\nwill the sanity check fail?\n\n>\n> I agree, but that's actually bad, isn't it?\n\nYes, it is bad.\n\n> I mean if we want the\n> sanity check to never fail we can just take it out. What we want to\n> happen is that the sanity check should pass if the TLI of the startup\n> checkpoint is in the history of the recovery\n> target timeline, but fail if it isn't. The only way to achieve that\n> behavior is if expectedTLEs is initialized from the recovery target\n> timeline.\n\nYes, I agree with this. 
So initializing expectedTLEs with the\nrecovery target timeline is the right fix.\n\nConclusion:\n- I think now we agree on the point that initializing expectedTLEs\nwith the recovery target timeline is the right fix.\n- We still have some differences of opinion about what was the\noriginal problem in the base code which was fixed by the commit\n(ee994272ca50f70b53074f0febaec97e28f83c4e).\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 May 2021 11:03:30 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Mon, 17 May 2021 10:46:24 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Mon, May 17, 2021 at 10:09 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, May 17, 2021 at 8:50 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > Before the commit expectedTLEs is always initialized with just one\n> > > entry for the TLI of the last checkpoint record.\n> >\n> > Right\n> >\n> > > (1) If XLogFileReadAnyTLI() found the segment but no history file\n> > > found, that is, using the dummy TLE-list, expectedTLEs is initialized\n> > > with the dummy one-entry list. So there's no behavioral change in this\n> > > aspect.\n> >\n> > Yeah, you are right.\n> \n> But do you agree that one line entry will always be a checkpoint\n> timeline entry? Because if you notice below code[1] in function\n> \"readRecoveryCommandFile();\", then you will realize that once we come\n> out of this function either the \"recoveryTargetTLI\" is checkpoint TL\n> wherever it was before calling this function or we must have the\n> history file. 
That means after exiting this function if we execute\n> this line (expectedTLEs = readTimeLineHistory(recoveryTargetTLI);)\n> that means either \"expectedTLEs\" could point to one dummy entry which\n> will be nothing but the checkpoint TL entry or it will be holding\n> complete history.\n\nRight.\n\n> The patch is trying to say that without the history file the\n> checkpoint TL will not be found in \"expectedTLEs\" because the older TL\n> (checkpoint TL) is not the ancestor of the target\n> timeline (recoveryTargetTLI). But ideally, either the target timeline\n> should be the same as the checkpoint timeline or we must have the\n> history file as I stated in the above paragraph. Am I missing\n> something?\n\nYeah, that has been the most mysterious point here. So I searched for\na situation where the one-entry expectedTLEs does not work.\n\nI vaguely believed that there's a case where the REDO point of a\ncheckpoint is on the timeline previous to that of the\ncheckpoint record. The previous discussion is based on this case, but that\ndoesn't seem to happen. The last replayed checkpoint (that causes a\nrestartpoint) record is found before promotion and the first\ncheckpoint starts after promotion.\n\nA little while ago I tried to make a situation where a checkpoint\nrecord is placed on the previous timeline of the TLI written in the\ncontrol file. But the control file is always written after the checkpoint\nrecord is flushed.\n\n\nI restarted my investigation from this:\n\nee994272ca:\n> There is at least one scenario where this makes a difference: if you take\n> a base backup from a standby server right after a timeline switch, the\n> WAL segment containing the initial checkpoint record will begin with an\n> older timeline ID. 
Without the timeline history file, recovering that file\n\nAnd finally I think I could reach the situation that the commit wanted to fix.\n\nI took a basebackup from a standby just before replaying the first\ncheckpoint of the new timeline (by using a debugger), without copying\npg_wal. In this backup, the control file contains checkPointCopy of\nthe previous timeline.\n\nI modified StartupXLOG so that expectedTLEs is set just after first\ndetermining recoveryTargetTLI, then started the grandchild node. I\ngot the following error and the server fails to continue replication.\n\n[postmaster] LOG: starting PostgreSQL 14beta1 on x86_64-pc-linux-gnu...\n[startup] LOG: database system was interrupted while in recovery at log...\n[startup] LOG: set expectedtles tli=6, length=1\n[startup] LOG: Probing history file for TLI=7\n[startup] LOG: entering standby mode\n[startup] LOG: scanning segment 3 TLI 6, source 0\n[startup] LOG: Trying fetching history file for TLI=6\n[walreceiver] LOG: fetching timeline history file for timeline 5 from pri...\n[walreceiver] LOG: fetching timeline history file for timeline 6 from pri...\n[walreceiver] LOG: started streaming ... primary at 0/3000000 on timeline 5\n[walreceiver] DETAIL: End of WAL reached on timeline 5 at 0/30006E0.\n[startup] LOG: unexpected timeline ID 1 in log segment 000000050000000000000003, offset 0\n[startup] LOG: Probing history file for TLI=7\n[startup] LOG: scanning segment 3 TLI 6, source 0\n(repeats forever)\n\nThis seems like the behavior the patch wanted to fix. (I'm not sure\nprecisely what happened at the time of the \"unexpected timeline ID\n1..\", though. The line is seen only just after the first connection.)\n\n> will fail as the older timeline ID is not recognized to be an ancestor of\n> the target timeline. 
If you try to recover from such a backup, using only\n> streaming replication to fetch the WAL, this patch is required for that to\n> work.\n\nAfter I reverted the modification, I got the following behavior\ninstead from the same backup.\n\n[postmaster] LOG: starting PostgreSQL 14beta1 on x86_64-...\n[startup] JST LOG: database system was interrupted while in recovery at log time 2021-05-18 13:45:59 JST\n[startup] JST LOG: Probing history file for TLI=7\n[startup] JST LOG: entering standby mode\n[startup] JST LOG: Loading history file for TLI=6 (2)\n[startup] JST LOG: Trying reading history file for TLI=6\n[startup] JST LOG: scanning segment 3 TLI 6, source 0\n[startup] JST LOG: Trying fetching history file for TLI=6\n[walreceiver] JST LOG: fetching timeline history file for timeline 5 fro...\n[walreceiver] JST LOG: fetching timeline history file for timeline 6 fro...\n[walreceiver] JST LOG: started streaming ... primary at 0/3000000 on timeline 5\n[walreceiver] JST LOG: replication terminated by primary server\n[walreceiver] JST DETAIL: End of WAL reached on timeline 5 at 0/30006E0.\n[startup] LOG: Loading expectedTLEs for 5\n[startup] LOG: Trying reading history file for TLI=5\n[startup] LOG: Checkpoint record: TLI=5, 0/3000668, rectargetTLI=6, exptles=0x3322a60\n[startup] FATAL: requested timeline 6 does not contain minimum recovery point 0/30007C0 on timeline 6\n[postmaster] LOG: startup process (PID 76526) exited with exit code 1\n[postmaster] LOG: aborting startup due to startup process failure\n[postmaster] LOG: database system is shut down\n\nIt aborts.. Yeah, this is the same issue as the one raised here. So I'm\nstill not sure I confirmed the case exactly (since the problem is\nstill seen.. but I don't want to bother building that version)...\nAnyway, reading the history file for recoveryTargetTLI instead of\nreceiveTLI fixes that.\n\nFWIW, you can get a problematic base backup by the following steps.\n\n0. (make sure /tmp/hoge is removed)\n1.
apply the attached patch\n2. create a primary then start\n3. create a standby then start\n4. place standby.signal to the primary, then restart it.\n5. place the file /tmp/hoge.\n6. promote the \"primary\".\n7. You will see a log line like this\n LOG: WAIT START: CHECKPOINT_ONLINE: TLI=2\n8. Take a base backup (without copying WAL files)\n\n\n> [1]\n> if (rtli)\n> {\n> /* Timeline 1 does not have a history file, all else should */\n> if (rtli != 1 && !existsTimeLineHistory(rtli))\n> ereport(FATAL,\n> (errmsg(\"recovery target timeline %u does not exist\",\n> rtli)));\n> recoveryTargetTLI = rtli;\n> recoveryTargetIsLatest = false;\n> }\n> else\n> {\n> /* We start the \"latest\" search from pg_control's timeline */\n> recoveryTargetTLI = findNewestTimeLine(recoveryTargetTLI);\n> recoveryTargetIsLatest = true;\n> }\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/access/transam/timeline.c b/src/backend/access/transam/timeline.c\nindex 8d0903c175..9483fd055c 100644\n--- a/src/backend/access/transam/timeline.c\n+++ b/src/backend/access/transam/timeline.c\n@@ -55,6 +55,7 @@ restoreTimeLineHistoryFiles(TimeLineID begin, TimeLineID end)\n \n \tfor (tli = begin; tli < end; tli++)\n \t{\n+\t\telog(LOG, \"Trying restoring history file for TLI=%d\", tli);\n \t\tif (tli == 1)\n \t\t\tcontinue;\n \n@@ -95,6 +96,7 @@ readTimeLineHistory(TimeLineID targetTLI)\n \n \tif (ArchiveRecoveryRequested)\n \t{\n+\t\telog(LOG, \"Trying reading history file for TLI=%d\", targetTLI);\n \t\tTLHistoryFileName(histfname, targetTLI);\n \t\tfromArchive =\n \t\t\tRestoreArchivedFile(path, histfname, \"RECOVERYHISTORY\", 0, false);\n@@ -231,6 +233,7 @@ existsTimeLineHistory(TimeLineID probeTLI)\n \n \tif (ArchiveRecoveryRequested)\n \t{\n+\t\telog(LOG, \"Probing history file for TLI=%d\", probeTLI);\n \t\tTLHistoryFileName(histfname, probeTLI);\n \t\tRestoreArchivedFile(path, histfname, \"RECOVERYHISTORY\", 0, false);\n \t}\ndiff --git 
a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 8d163f190f..afd6a0ce0a 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -3726,7 +3726,7 @@ XLogFileRead(XLogSegNo segno, int emode, TimeLineID tli,\n \t\t\tsnprintf(activitymsg, sizeof(activitymsg), \"waiting for %s\",\n \t\t\t\t\t xlogfname);\n \t\t\tset_ps_display(activitymsg);\n-\n+\t\t\telog(LOG, \"Trying fetching history file for TLI=%d\", tli);\n \t\t\trestoredFromArchive = RestoreArchivedFile(path, xlogfname,\n \t\t\t\t\t\t\t\t\t\t\t\t\t \"RECOVERYXLOG\",\n \t\t\t\t\t\t\t\t\t\t\t\t\t wal_segment_size,\n@@ -3820,7 +3820,10 @@ XLogFileReadAnyTLI(XLogSegNo segno, int emode, XLogSource source)\n \tif (expectedTLEs)\n \t\ttles = expectedTLEs;\n \telse\n+\t{\n+\t\telog(LOG, \"Loading history file for TLI=%d (2)\", recoveryTargetTLI);\n \t\ttles = readTimeLineHistory(recoveryTargetTLI);\n+\t}\n \n \tforeach(cell, tles)\n \t{\n@@ -3834,6 +3837,8 @@ XLogFileReadAnyTLI(XLogSegNo segno, int emode, XLogSource source)\n \t\t * Skip scanning the timeline ID that the logfile segment to read\n \t\t * doesn't belong to\n \t\t */\n+\t\telog(LOG, \"scanning segment %lX TLI %d, source %d\", segno, tli, source);\n+\n \t\tif (hent->begin != InvalidXLogRecPtr)\n \t\t{\n \t\t\tXLogSegNo\tbeginseg = 0;\n@@ -3860,6 +3865,7 @@ XLogFileReadAnyTLI(XLogSegNo segno, int emode, XLogSource source)\n \t\t\t\t\t\t\t XLOG_FROM_ARCHIVE, true);\n \t\t\tif (fd != -1)\n \t\t\t{\n+\t\t\t\telog(LOG, \"found segment %lX TLI %d, from archive\", segno, tli);\n \t\t\t\telog(DEBUG1, \"got WAL segment from archive\");\n \t\t\t\tif (!expectedTLEs)\n \t\t\t\t\texpectedTLEs = tles;\n@@ -3873,6 +3879,7 @@ XLogFileReadAnyTLI(XLogSegNo segno, int emode, XLogSource source)\n \t\t\t\t\t\t\t XLOG_FROM_PG_WAL, true);\n \t\t\tif (fd != -1)\n \t\t\t{\n+\t\t\t\telog(LOG, \"found segment %lX TLI %d, from PG_WAL\", segno, tli);\n \t\t\t\tif (!expectedTLEs)\n \t\t\t\t\texpectedTLEs = tles;\n 
\t\t\t\treturn fd;\n@@ -6577,6 +6584,8 @@ StartupXLOG(void)\n \telse\n \t\trecoveryTargetTLI = ControlFile->checkPointCopy.ThisTimeLineID;\n \n+\texpectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n+\telog(LOG, \"set expectedtles %d, %d\", recoveryTargetTLI, list_length(expectedTLEs));\n \t/*\n \t * Check for signal files, and if so set up state for offline recovery\n \t */\n@@ -6866,11 +6875,19 @@ StartupXLOG(void)\n \tif (!XLogRecPtrIsInvalid(ControlFile->minRecoveryPoint) &&\n \t\ttliOfPointInHistory(ControlFile->minRecoveryPoint - 1, expectedTLEs) !=\n \t\tControlFile->minRecoveryPointTLI)\n+\t{\n+\t\tListCell *lc;\n+\t\tforeach (lc, expectedTLEs)\n+\t\t{\n+\t\t\tTimeLineHistoryEntry *tle = (TimeLineHistoryEntry *) lfirst(lc);\n+\t\t\telog(LOG, \"TLE %d {%X/%X - %X/%X}\", tle->tli, LSN_FORMAT_ARGS(tle->begin), LSN_FORMAT_ARGS(tle->end));\n+\t\t}\n \t\tereport(FATAL,\n \t\t\t\t(errmsg(\"requested timeline %u does not contain minimum recovery point %X/%X on timeline %u\",\n \t\t\t\t\t\trecoveryTargetTLI,\n \t\t\t\t\t\tLSN_FORMAT_ARGS(ControlFile->minRecoveryPoint),\n \t\t\t\t\t\tControlFile->minRecoveryPointTLI)));\n+\t}\n \n \tLastRec = RecPtr = checkPointLoc;\n \n@@ -8396,7 +8413,7 @@ ReadCheckpointRecord(XLogReaderState *xlogreader, XLogRecPtr RecPtr,\n \n \tXLogBeginRead(xlogreader, RecPtr);\n \trecord = ReadRecord(xlogreader, LOG, true);\n-\n+\telog(LOG, \"Checkpoint record: TLI=%d, %X/%X, rectargetTLI=%d, exptles=%p\", xlogreader->seg.ws_tli, LSN_FORMAT_ARGS(xlogreader->ReadRecPtr), recoveryTargetTLI, expectedTLEs);\n \tif (record == NULL)\n \t{\n \t\tif (!report)\n@@ -10211,6 +10228,19 @@ xlog_redo(XLogReaderState *record)\n \t\tCheckPoint\tcheckPoint;\n \n \t\tmemcpy(&checkPoint, XLogRecGetData(record), sizeof(CheckPoint));\n+\t\t{\n+\t\t\tstruct stat b;\n+\t\t\tbool f = true;\n+\t\t\twhile (stat(\"/tmp/hoge\", &b) == 0)\n+\t\t\t{\n+\t\t\t\tif (f)\n+\t\t\t\t\telog(LOG, \"WAIT START: CHECKPOINT_ONLINE: TLI=%d\", checkPoint.ThisTimeLineID);\n+\t\t\t\tf 
= false;\n+\t\t\t\tsleep(1);\n+\t\t\t}\n+\t\t\tif (!f)\n+\t\t\t\telog(LOG, \"WAIT END: CHECKPOINT_ONLINE\");\n+\t\t}\n \t\t/* In an ONLINE checkpoint, treat the XID counter as a minimum */\n \t\tLWLockAcquire(XidGenLock, LW_EXCLUSIVE);\n \t\tif (FullTransactionIdPrecedes(ShmemVariableCache->nextXid,\n@@ -12595,7 +12625,7 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n \t\t\t\t\t\t\t * TLI, rather than the position we're reading.\n \t\t\t\t\t\t\t */\n \t\t\t\t\t\t\ttli = tliOfPointInHistory(tliRecPtr, expectedTLEs);\n-\n+\t\t\t\t\t\t\telog(LOG, \"%X/%X is on TLI %X\", LSN_FORMAT_ARGS(tliRecPtr), tli);\n \t\t\t\t\t\t\tif (curFileTLI > 0 && tli < curFileTLI)\n \t\t\t\t\t\t\t\telog(ERROR, \"according to history file, WAL location %X/%X belongs to timeline %u, but previous recovered WAL file came from timeline %u\",\n \t\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(tliRecPtr),\n@@ -12662,7 +12692,11 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n \t\t\t\t\t\tif (readFile < 0)\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\tif (!expectedTLEs)\n-\t\t\t\t\t\t\t\texpectedTLEs = readTimeLineHistory(receiveTLI);\n+\t\t\t\t\t\t\t{\n+\t\t\t\t\t\t\t\t\n+\t\t\t\t\t\t\t\telog(LOG, \"Loading expectedTLEs for %d (%d)\", recoveryTargetTLI, receiveTLI);\n+\t\t\t\t\t\t\t\texpectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n+\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\treadFile = XLogFileRead(readSegNo, PANIC,\n \t\t\t\t\t\t\t\t\t\t\t\t\treceiveTLI,\n \t\t\t\t\t\t\t\t\t\t\t\t\tXLOG_FROM_STREAM, false);", "msg_date": "Tue, 18 May 2021 15:52:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Tue, 18 May 2021 15:52:07 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> FWIW, you could be get a problematic base backup by the following steps.\n> \n> 0. (make sure /tmp/hoge is removed)\n> 1. apply the attached patch\n> 2. 
create a primary then start\n> 3. create a standby then start\n> 4. place standby.signal to the primary, then restart it.\n> 5. place the file /tmp/hoge.\n> 6. promote the \"primary\".\n> 7. You will see a log line like this\n> LOG: WAIT START: CHECKPOINT_ONLINE: TLI=2\n> 8. Take a base backup (without copying WAL files)\n\nI carelessly have left the \"modification\" uncommented in the diff file.\n\n@@ -6577,6 +6584,8 @@ StartupXLOG(void)\n \telse\n \t\trecoveryTargetTLI = ControlFile->checkPointCopy.ThisTimeLineID;\n \n+\texpectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n+\telog(LOG, \"set expectedtles %d, %d\", recoveryTargetTLI, list_length(expectedTLEs));\n\nDisabling the lines would show the result of the ancient fix.\n\nregards.\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 18 May 2021 15:58:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Tue, May 18, 2021 at 12:22 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n\n> And finally I think I could reach the situation the commit wanted to fix.\n>\n> I took a basebackup from a standby just before replaying the first\n> checkpoint of the new timeline (by using debugger), without copying\n> pg_wal. In this backup, the control file contains checkPointCopy of\n> the previous timeline.\n>\n> I modified StartXLOG so that expectedTLEs is set just after first\n> determining recoveryTargetTLI, then started the grandchild node. 
I\n> have the following error and the server fails to continue replication.\n\n> [postmaster] LOG: starting PostgreSQL 14beta1 on x86_64-pc-linux-gnu...\n> [startup] LOG: database system was interrupted while in recovery at log...\n> [startup] LOG: set expectedtles tli=6, length=1\n> [startup] LOG: Probing history file for TLI=7\n> [startup] LOG: entering standby mode\n> [startup] LOG: scanning segment 3 TLI 6, source 0\n> [startup] LOG: Trying fetching history file for TLI=6\n> [walreceiver] LOG: fetching timeline history file for timeline 5 from pri...\n> [walreceiver] LOG: fetching timeline history file for timeline 6 from pri...\n> [walreceiver] LOG: started streaming ... primary at 0/3000000 on timeline 5\n> [walreceiver] DETAIL: End of WAL reached on timeline 5 at 0/30006E0.\n> [startup] LOG: unexpected timeline ID 1 in log segment 000000050000000000000003, offset 0\n> [startup] LOG: Probing history file for TLI=7\n> [startup] LOG: scanning segment 3 TLI 6, source 0\n> (repeats forever)\n\nSo IIUC, this log shows that\n\"ControlFile->checkPointCopy.ThisTimeLineID\" is 6 but the\n\"ControlFile->checkPoint\" record is on TL 5? I think if you had the\nold version of the code (before the commit) or the code [1] below, right\nafter initializing expectedTLEs, then you would have hit the FATAL error the\npatch had fixed.\n\nWhile debugging did you check what was the \"ControlFile->checkPoint\"\nLSN vs the first LSN of the first segment with TL6?\n\nexpectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n[1]\nif (tliOfPointInHistory(ControlFile->checkPoint, expectedTLEs) !=\nControlFile->checkPointCopy.ThisTimeLineID)\n{\nereport(FATAL..\n}\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 May 2021 17:46:05 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?"
}, { "msg_contents": "On Tue, May 18, 2021 at 1:33 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Yeah, it will be a fake 1-element list. But just to be clear that\n> 1-element can only be \"ControlFile->checkPointCopy.ThisTimeLineID\" and\n> nothing else, do you agree to this? Because we initialize\n> recoveryTargetTLI to this value and we might change it in\n> readRecoveryCommandFile() but for that, we must get the history file,\n> so if we are talking about the case where we don't have the history\n> file then \"recoveryTargetTLI\" will still be\n> \"ControlFile->checkPointCopy.ThisTimeLineID\".\n\nI don't think your conclusion is correct. As I understand it, you're\ntalking about the state before\nee994272ca50f70b53074f0febaec97e28f83c4e, because as of now\nreadRecoveryCommandFile() no longer exists in master. Before that\ncommit, StartupXLOG did this:\n\n recoveryTargetTLI = ControlFile->checkPointCopy.ThisTimeLineID;\n readRecoveryCommandFile();\n expectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n\nNow, readRecoveryCommandFile() can change recoveryTargetTLI. Before\ndoing so, it will call existsTimeLineHistory if\nrecovery_target_timeline was set to an integer, and findNewestTimeLine\nif it was set to latest. In the first case, existsTimeLineHistory()\ncalls RestoreArchivedFile(), but that restores it using a temporary\nname, and KeepFileRestoredFromArchive is not called, so we might have\nthe timeline history in RECOVERYHISTORY but that's not the filename\nwe're actually going to try to read from inside readTimeLineHistory().\nIn the second case, findNewestTimeLine() will call\nexistsTimeLineHistory() which results in the same situation. 
So I\nthink when readRecoveryCommandFile() returns expectedTLI can be\ndifferent but the history file can be absent since it was only ever\nrestored under a temporary name.\n\n> Conclusion:\n> - I think now we agree on the point that initializing expectedTLEs\n> with the recovery target timeline is the right fix.\n> - We still have some differences of opinion about what was the\n> original problem in the base code which was fixed by the commit\n> (ee994272ca50f70b53074f0febaec97e28f83c4e).\n\nI am also still concerned about whether we understand in exactly what\ncases the current logic doesn't work. We seem to more or less agree on\nthe fix, but I don't think we really understand precisely what case we\nare fixing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 May 2021 13:49:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Thu, 20 May 2021 13:49:10 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Tue, May 18, 2021 at 1:33 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Yeah, it will be a fake 1-element list. But just to be clear that\n> > 1-element can only be \"ControlFile->checkPointCopy.ThisTimeLineID\" and\n> > nothing else, do you agree to this? Because we initialize\n> > recoveryTargetTLI to this value and we might change it in\n> > readRecoveryCommandFile() but for that, we must get the history file,\n> > so if we are talking about the case where we don't have the history\n> > file then \"recoveryTargetTLI\" will still be\n> > \"ControlFile->checkPointCopy.ThisTimeLineID\".\n> \n> I don't think your conclusion is correct. As I understand it, you're\n> talking about the state before\n> ee994272ca50f70b53074f0febaec97e28f83c4e, because as of now\n> readRecoveryCommandFile() no longer exists in master. 
Before that\n> commit, StartupXLOG did this:\n> \n> recoveryTargetTLI = ControlFile->checkPointCopy.ThisTimeLineID;\n> readRecoveryCommandFile();\n> expectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n> \n> Now, readRecoveryCommandFile() can change recoveryTargetTLI. Before\n> doing so, it will call existsTimeLineHistory if\n> recovery_target_timeline was set to an integer, and findNewestTimeLine\n> if it was set to latest. In the first case, existsTimeLineHistory()\n> calls RestoreArchivedFile(), but that restores it using a temporary\n> name, and KeepFileRestoredFromArchive is not called, so we might have\n> the timeline history in RECOVERYHISTORY but that's not the filename\n> we're actually going to try to read from inside readTimeLineHistory().\n> In the second case, findNewestTimeLine() will call\n> existsTimeLineHistory() which results in the same situation. So I\n> think when readRecoveryCommandFile() returns expectedTLI can be\n> different but the history file can be absent since it was only ever\n> restored under a temporary name.\n\nAnyway, it seems that the commit tried to fix an issue that happens without\nusing a WAL archive.\n\nhttps://www.postgresql.org/message-id/50E43C57.5050101%40vmware.com\n\n> That leaves one case not covered: If you take a backup with plain \n> \"pg_basebackup\" from a standby, without -X, and the first WAL segment \n> contains a timeline switch (ie. you take the backup right after a \n> failover), and you try to recover from it without a WAL archive, it \n> doesn't work. This is the original issue that started this thread, \n> except that I used \"-x\" in my original test case. The problem here is \n> that even though streaming replication will fetch the timeline history \n> file when it connects, at the very beginning of recovery, we expect that \n> we already have the timeline history file corresponding the initial \n> timeline available, either in pg_xlog or the WAL archive.
By the time \n> streaming replication has connected and fetched the history file, we've \n> already initialized expectedTLEs to contain just the latest TLI. To fix \n> that, we should delay calling readTimeLineHistoryFile() until streaming \n> replication has connected and fetched the file.\n> If the first segment read by recovery contains a timeline switch, the first\n> pages have older timeline than segment timeline. However, if\n> exepectedTLEs contained only the segment timeline, we cannot know if\n> we can use the record. In that case the following error is issued.\n\nIf expectedTLEs is initialized with the pseudo list,\ntliOfPointInHistory always returns the recoveryTargetTLI regardless of\nthe given LSN, so the later timeline checks don't work. And\nlater ReadRecord is surprised to see a page of an unknown timeline.\n\n\"unexpected timeline ID 1 in log segment\"\n\nSo the objective is to initialize expectedTLEs with the right content\nof the history file for the recoveryTargetTLI before ReadRecord fetches\nthe first record. After the fix, things work as follows.\n\n- recoveryTargetTimeLine is initialized with\n ControlFile->checkPointCopy.ThisTimeLineID\n\n- readRecoveryCommandFile():\n\n Move recoveryTargetTLI forward to the specified target timeline if\n the history file for the timeline is found, or in the case of\n latest, move it forward up to the maximum timeline among the history\n files found in either pg_wal or archive.\n\n !!! Anyway recoveryTargetTLI won't go back behind the checkpoint\n TLI.\n\n- ReadRecord...XLogFileReadAnyTLI\n\n Tries to load the history file for recoveryTargetTLI either from\n pg_wal or archive onto a local TLE list; if the history file is not\n found, a generated list with one entry for the\n recoveryTargetTLI is used.\n\n (a) If the specified segment file for any timeline in the TLE list\n is found, expectedTLEs is initialized with the local list.
No need\n to worry about expectedTLEs any longer.\n\n (b) If such a segment is *not* found, expectedTLEs is left\n NIL. Usually recoveryTargetTLI is equal to the last checkpoint\n TLI.\n\n (c) However, in the case where timeline switches happened in the\n segment and the recoveryTargetTLI has been increased, that is, the\n history file for the recoveryTargetTLI is found in pg_wal or\n archive, that is, the issue raised here, recoveryTargetTLI becomes\n the future timeline of the checkpoint TLI.\n\n (d) If the history file for the recoveryTargetTLI is *not* found but\n the segment file is found, expectedTLEs is initialized with the\n generated list, which doesn't contain past timelines. In this\n case, recoveryTargetTLI has not moved from the initial value of\n the checkpoint TLI. If the REDO point is before a timeline switch,\n the page causes FATAL in ReadRecord later. However, I think there\n cannot be a case where the segment file is found before the\n corresponding history file. (Except for TLI=1, which is no problem.)\n\n- WaitForWALToBecomeAvailable\n\n if we have had no segments for the last checkpoint, initiate\n streaming from the REDO point of the last checkpoint. We should have\n all history files by the time segment data is received.\n\n after sufficient WAL data has been received, the only cases where\n expectedTLEs is still NIL are (b) and (c) above.\n\n In the case of (b) recoveryTargetTLI == checkpoint TLI.\n\n In the case of (c) recoveryTargetTLI > checkpoint TLI. In this case\n we expect that the checkpoint TLI is in the history of\n recoveryTargetTLI. Otherwise recovery fails. This case is similar\n to case (a) but the relationship between recoveryTargetTLI and\n the checkpoint TLI is not confirmed yet. ReadRecord barks later if\n they are not compatible, so there's not a serious problem, but it might\n be better to check the relationship there.
My first proposal\n performed mutual check between the two but we need to check only\n unidirectionally.\n\n if (readFile < 0)\n {\n if (!expectedTLEs)\n\t {\n\t expectedTLEs = readTimeLineHistory(receiveTLI);\n+ if (!tliOfPointInHistory(receiveTLI, expectedTLEs))\n+ ereport(ERROR, \"the received timeline %d is not found in the history file for timeline %d\");\n\n\n> > Conclusion:\n> > - I think now we agree on the point that initializing expectedTLEs\n> > with the recovery target timeline is the right fix.\n> > - We still have some differences of opinion about what was the\n> > original problem in the base code which was fixed by the commit\n> > (ee994272ca50f70b53074f0febaec97e28f83c4e).\n> \n> I am also still concerned about whether we understand in exactly what\n> cases the current logic doesn't work. We seem to more or less agree on\n> the fix, but I don't think we really understand precisely what case we\n> are fixing.\n\nDoes the discussion above make sense?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 21 May 2021 11:21:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Fri, 21 May 2021 11:21:05 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 20 May 2021 13:49:10 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> In the case of (c) recoveryTargetTLI > checkpoint TLI. In this case\n> we expecte that checkpint TLI is in the history of\n> recoveryTargetTLI. Otherwise recovery failse. This case is similar\n> to the case (a) but the relationship between recoveryTargetTLI and\n> the checkpoint TLI is not confirmed yet. ReadRecord barks later if\n> they are not compatible so there's not a serious problem but might\n> be better checking the relation ship there. 
My first proposal\n> performed mutual check between the two but we need to check only\n> unidirectionally.\n> \n> if (readFile < 0)\n> {\n> if (!expectedTLEs)\n> \t {\n> \t expectedTLEs = readTimeLineHistory(receiveTLI);\n> + if (!tliOfPointInHistory(receiveTLI, expectedTLEs))\n> + ereport(ERROR, \"the received timeline %d is not found in the history file for timeline %d\");\n> \n> \n> > > Conclusion:\n> > > - I think now we agree on the point that initializing expectedTLEs\n> > > with the recovery target timeline is the right fix.\n> > > - We still have some differences of opinion about what was the\n> > > original problem in the base code which was fixed by the commit\n> > > (ee994272ca50f70b53074f0febaec97e28f83c4e).\n> > \n> > I am also still concerned about whether we understand in exactly what\n> > cases the current logic doesn't work. We seem to more or less agree on\n> > the fix, but I don't think we really understand precisely what case we\n> > are fixing.\n> \n> Does the discussion above make sense?\n\nThis is a revised version.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 21 May 2021 16:49:24 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Thu, May 20, 2021 at 11:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, May 18, 2021 at 1:33 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Yeah, it will be a fake 1-element list. But just to be clear that\n> > 1-element can only be \"ControlFile->checkPointCopy.ThisTimeLineID\" and\n> > nothing else, do you agree to this? 
Because we initialize\n> > recoveryTargetTLI to this value and we might change it in\n> > readRecoveryCommandFile() but for that, we must get the history file,\n> > so if we are talking about the case where we don't have the history\n> > file then \"recoveryTargetTLI\" will still be\n> > \"ControlFile->checkPointCopy.ThisTimeLineID\".\n>\n> I don't think your conclusion is correct. As I understand it, you're\n> talking about the state before\n> ee994272ca50f70b53074f0febaec97e28f83c4e,\n\nRight, I am talking about before this commit.\n\n because as of now\n> readRecoveryCommandFile() no longer exists in master. Before that\n> commit, StartupXLOG did this:\n>\n> recoveryTargetTLI = ControlFile->checkPointCopy.ThisTimeLineID;\n> readRecoveryCommandFile();\n> expectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n>\n> Now, readRecoveryCommandFile() can change recoveryTargetTLI. Before\n> doing so, it will call existsTimeLineHistory if\n> recovery_target_timeline was set to an integer, and findNewestTimeLine\n> if it was set to latest. In the first case, existsTimeLineHistory()\n> calls RestoreArchivedFile(), but that restores it using a temporary\n> name, and KeepFileRestoredFromArchive is not called,\n\nYes, I agree with this.\n\nso we might have\n> the timeline history in RECOVERYHISTORY but that's not the filename\n> we're actually going to try to read from inside readTimeLineHistory().\n> In the second case, findNewestTimeLine() will call\n> existsTimeLineHistory() which results in the same situation. So I\n> think when readRecoveryCommandFile() returns expectedTLI can be\n> different but the history file can be absent since it was only ever\n> restored under a temporary name.\n\nI agree that readTimeLineHistory() will not look for that filename,\nbut it will also try to get the file using (RestoreArchivedFile(path,\nhistfname, \"RECOVERYHISTORY\", 0, false)). 
So after we check the\nhistory file existence in existsTimeLineHistory(), if the file got\nremoved from the archive (not sure how) then it is possible that now\nreadTimeLineHistory() will not find that history file again. Am I\nmissing something?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 May 2021 20:09:20 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Fri, May 21, 2021 at 7:51 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> https://www.postgresql.org/message-id/50E43C57.5050101%40vmware.com\n>\n> > That leaves one case not covered: If you take a backup with plain\n> > \"pg_basebackup\" from a standby, without -X, and the first WAL segment\n> > contains a timeline switch (ie. you take the backup right after a\n> > failover), and you try to recover from it without a WAL archive, it\n> > doesn't work. This is the original issue that started this thread,\n> > except that I used \"-x\" in my original test case. The problem here is\n> > that even though streaming replication will fetch the timeline history\n> > file when it connects, at the very beginning of recovery, we expect that\n> > we already have the timeline history file corresponding the initial\n> > timeline available, either in pg_xlog or the WAL archive. By the time\n> > streaming replication has connected and fetched the history file, we've\n> > already initialized expectedTLEs to contain just the latest TLI. To fix\n> > that, we should delay calling readTimeLineHistoryFile() until streaming\n> > replication has connected and fetched the file.\n> > If the first segment read by recovery contains a timeline switch, the first\n> > pages have older timeline than segment timeline. However, if\n> > exepectedTLEs contained only the segment timeline, we cannot know if\n> > we can use the record. 
In that case the following error is issued.\n>\n> If expectedTLEs is initialized with the pseudo list,\n> tliOfPointInHistory always return the recoveryTargetTLI regardless of\n> the given LSN so the checking about timelines later doesn't work. And\n> later ReadRecord is surprised to see a page of an unknown timeline.\n\n From this whole discussion (on the thread given by you), IIUC the\nissue was that the checkpoint LSN does not exist on the\n\"ControlFile->checkPointCopy.ThisTimeLineID\" timeline. If that is true then I\nagree that we will just initialize expectedTLEs based on the only\nentry (ControlFile->checkPointCopy.ThisTimeLineID) and later we will\nfail to find the checkpoint record on this timeline because the\ncheckpoint LSN is smaller than the start LSN of this timeline. Right?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 May 2021 20:37:01 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Fri, May 21, 2021 at 10:39 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > so we might have\n> > the timeline history in RECOVERYHISTORY but that's not the filename\n> > we're actually going to try to read from inside readTimeLineHistory().\n> > In the second case, findNewestTimeLine() will call\n> > existsTimeLineHistory() which results in the same situation. So I\n> > think when readRecoveryCommandFile() returns expectedTLI can be\n> > different but the history file can be absent since it was only ever\n> > restored under a temporary name.\n>\n> I agree that readTimeLineHistory() will not look for that filename,\n> but it will also try to get the file using (RestoreArchivedFile(path,\n> histfname, \"RECOVERYHISTORY\", 0, false)).
So after we check the\n> history file existence in existsTimeLineHistory(), if the file got\n> removed from the archive (not sure how) then it is possible that now\n> readTimeLineHistory() will not find that history file again. Am I\n> missing something?\n\nThat sounds right.\n\nI've lost the thread of what we're talking about here a bit. I think\nwhat we've established is that, when running a commit prior to\nee994272ca50f70b53074f0febaec97e28f83c4e, if (a) recovery_target_tli\nis set, (b) restore_command works, and (c) nothing's being removed\nfrom the archive concurrently, then by the time StartupXLOG() does\nexpectedTLEs = readTimeLineHistory(recoveryTargetTLI), any timeline\nhistory file that exists in the archive will have been restored, and\nthe scenario ee994272ca50f70b53074f0febaec97e28f83c4e was concerned\nabout won't occur. That's because it was concerned about a scenario\nwhere we failed to restore the history file until after we set\nexpectedTLEs.\n\nConsequently, if we want to try to reproduce the problem fixed by that\ncommit, we should look for a scenario that does not involve setting\nrecovery_target_tli.\n\nIs that the conclusion you were driving towards?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 May 2021 12:14:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Thu, May 20, 2021 at 10:21 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > > Conclusion:\n> > > - I think now we agree on the point that initializing expectedTLEs\n> > > with the recovery target timeline is the right fix.\n> > > - We still have some differences of opinion about what was the\n> > > original problem in the base code which was fixed by the commit\n> > > (ee994272ca50f70b53074f0febaec97e28f83c4e).\n> >\n> > I am also still concerned about whether we understand in exactly what\n> > cases the current logic doesn't work. 
We seem to more or less agree on\n> > the fix, but I don't think we really understand precisely what case we\n> > are fixing.\n>\n> Does the discussion above make sense?\n\nI had trouble following it completely, but I didn't really spot\nanything that seemed definitely wrong. However, I don't understand\nwhat it has to do with where we are now. What I want to understand is:\nunder exactly what circumstances does it matter that\nWaitForWALToBecomeAvailable(), when currentSource == XLOG_FROM_STREAM,\nwill stream from receiveTLI rather than recoveryTargetTLI?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 May 2021 12:52:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Fri, May 21, 2021 at 12:52 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I had trouble following it completely, but I didn't really spot\n> anything that seemed definitely wrong. However, I don't understand\n> what it has to do with where we are now. What I want to understand is:\n> under exactly what circumstances does it matter that\n> WaitForWALToBecomeAvailable(), when currentSource == XLOG_FROM_STREAM,\n> will stream from receiveTLI rather than recoveryTargetTLI?\n\nAh ha! I think I figured it out. To hit this bug, you need to meet the\nfollowing conditions:\n\n1. Both streaming and archiving have to be configured.\n2. You have to promote a new primary.\n3. After promoting the new primary you have to start a new standby\nthat doesn't have local WAL and for which the backup was taken from\nthe previous timeline. In Dilip's original scenario, this new standby\nis actually the old primary, but that's not required.\n4. The new standby has to be able to find the history file it needs in\nthe archive but not the WAL files.\n5. 
The new standby needs to have recovery_target_timeline='latest'\n(which is the default).\n\nWhen you start the new standby, it will fetch the current TLI from its\ncontrol file. Then, since recovery_target_timeline=latest, the system\nwill try to figure out the latest timeline, which only works because\narchiving is configured. There seems to be no provision for detecting\nthe latest timeline via streaming. With archiving enabled, though,\nfindNewestTimeLine() will be able to restore the history file created\nby the promotion of the new primary, which will cause\nvalidateRecoveryParameters() to change recoveryTargetTLI. Then we'll\ntry to read the WAL segment containing the checkpoint record and fail\nbecause, by stipulation, only history files are available from the\narchive. Now, because streaming is also configured, we'll try\nstreaming. That will work, so we'll be able to read the checkpoint\nrecord, but now, because WaitForWALToBecomeAvailable() initialized\nexpectedTLEs using receiveTLI instead of recoveryTargetTLI, we can't\nswitch to the correct timeline and it all goes wrong.\n\nThe attached test script, test.sh, seems to reliably reproduce this.\nPut that file and the recalcitrant_cp script, also attached, into an\nempty directory, cd to that directory, and run test.sh. Afterwards,\nexamine pgcascade.log. Basically, these scripts just set up the\nscenario described above. We set up a primary and a standby that use\nrecalcitrant_cp as the archive command, and because it's recalcitrant,\nit's only willing to copy history files, and always fails for WAL\nfiles. Then we create a cascading standby by taking a base backup from\nthe standby, but before actually starting it, we promote the original\nstandby. So now it meets all the conditions described above. I tried a\ncouple of variants of this test. If I switch the archive command from\nrecalcitrant_cp to just regular cp, then there's no problem. 
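For illustration, the essence of such a recalcitrant archive command can be sketched as follows (the function name and file names here are invented; the actual recalcitrant_cp script attached to this mail may differ):

```shell
#!/bin/sh
# Sketch of a "recalcitrant" archive_command: timeline history files are
# copied into the archive, but archiving a WAL segment always fails, so
# only *.history files ever reach the archive directory.
recalcitrant_cp() {
    src=$1; dst=$2
    case $src in
        *.history) cp "$src" "$dst" ;;   # history files succeed
        *)         return 1 ;;           # WAL segments always fail
    esac
}

# Demo with invented names: only the history file lands in the archive.
mkdir -p /tmp/demo_archive
printf '1\t0/3000060\tno recovery target specified\n' > /tmp/00000002.history
recalcitrant_cp /tmp/00000002.history /tmp/demo_archive/00000002.history
recalcitrant_cp /tmp/000000020000000000000003 \
    /tmp/demo_archive/000000020000000000000003 || echo "WAL archiving failed"
```

With something like this as archive_command, history files are archived normally while every WAL segment fails to archive, which is exactly the asymmetry condition (4) needs.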
And if I\nswitch it to something that always fails, then there's also no\nproblem. That's because, with either of those changes, condition (4)\nabove is no longer met. In the first case, both files end up in the\narchive, and in the second case, neither file.\n\nWhat about hitting this in real life, with a real archive command?\nWell, you'd probably need the archive command to be kind of slow and\nget unlucky on the timing, but there's nothing to prevent it from\nhappening.\n\nBut, it will be WAY more likely if you have Dilip's original scenario,\nwhere you try to repurpose an old primary as a standby. It would\nnormally be unlikely that the backup used to create a new standby\nwould have an older TLI, because you typically wouldn't switch masters\nin between taking a base backup and using it to create a new standby.\nBut the old master always has an older TLI. So (3) is satisfied. For\n(4) to be satisfied, you need the old master to fail to archive all of\nits WAL when it shuts down.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 21 May 2021 15:44:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Sat, May 22, 2021 at 1:14 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, May 21, 2021 at 12:52 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I had trouble following it completely, but I didn't really spot\n> > anything that seemed definitely wrong. However, I don't understand\n> > what it has to do with where we are now. What I want to understand is:\n> > under exactly what circumstances does it matter that\n> > WaitForWALToBecomeAvailable(), when currentSource == XLOG_FROM_STREAM,\n> > will stream from receiveTLI rather than recoveryTargetTLI?\n>\n> Ah ha! I think I figured it out. To hit this bug, you need to meet the\n> following conditions:\n>\n> 1. Both streaming and archiving have to be configured.\n> 2. 
You have to promote a new primary.\n> 3. After promoting the new primary you have to start a new standby\n> that doesn't have local WAL and for which the backup was taken from\n> the previous timeline. In Dilip's original scenario, this new standby\n> is actually the old primary, but that's not required.\n\nNo, in my original scenario also the new standby was not the old\nprimary. I had 3 nodes:\nnode1 -> primary, node2 -> standby1, node3 -> standby2\nnode2 was promoted as the new primary and node3's local WAL was removed (so\nthat it has to stream the checkpoint record from the new primary, and then\neverything else happens as you explained in the remaining steps).\n\n> 4. The new standby has to be able to find the history file it needs in\n> the archive but not the WAL files.\n> 5. 
That will work, so we'll be able to read the checkpoint\n> record, but now, because WaitForWALToBecomeAvailable() initialized\n> expectedTLEs using receiveTLI instead of recoveryTargetTLI, we can't\n> switch to the correct timeline and it all goes wrong.\n\nexactly\n\n> The attached test script, test.sh seems to reliably reproduce this.\n> Put that file and the recalcitrant_cp script, also attached, into an\n> empty directory, cd to that directory, and run test.sh. Afterwards\n> examine pgcascade.log. Basically, these scripts just set up the\n> scenario described above. We set up primary and a standby that use\n> recalcitrant_cp as the archive command, and because it's recalcitrant,\n> it's only willing to copy history files, and always fails for WAL\n> files.Then we create a cascading standby by taking a base backup from\n> the standby, but before actually starting it, we promote the original\n> standby. So now it meets all the conditions described above. I tried a\n> couple variants of this test. If I switch the archive command from\n> recalcitrant_cp to just regular cp, then there's no problem. And if I\n> switch it to something that always fails, then there's also no\n> problem. That's because, with either of those changes, condition (4)\n> above is no longer met. In the first case, both files end up in the\n> archive, and in the second case, neither file.\n\nI haven't tested this, but I will do that. But now we are on the same\npage about the cause of the actual problem I reported.\n\n> What about hitting this in real life, with a real archive command?\n> Well, you'd probably need the archive command to be kind of slow and\n> get unlucky on the timing, but there's nothing to prevent it from\n> happening.\n\nRight\n\n> But, it will be WAY more likely if you have Dilip's original scenario,\n> where you try to repurpose an old primary as a standby. 
It would\n> normally be unlikely that the backup used to create a new standby\n> would have an older TLI, because you typically wouldn't switch masters\n> in between taking a base backup and using it to create a new standby.\n> But the old master always has an older TLI. So (3) is satisfied. For\n> (4) to be satisfied, you need the old master to fail to archive all of\n> its WAL when it shuts down.\n\nFor my original case, both standby1 and standby2 are connected to the\nprimary. Now, standby1 is promoted and standby2 is shut down. And,\nbefore restarting, all the local WAL of the standby2 is removed so\nthat it can follow the new primary. The primary info and restore\ncommand for standby2 are changed as per the new primary(standby1).\n\nNow the scenario is that the standby1 has switched the timeline in the\nmiddle of the segment which contains the checkpoint record, so the\nsegment with old TL is renamed to (.partial) and the same segment with\nnew TL is not yet archived but the history file for the new TL has\nbeen archived.\n\nNow, when standby2 restart the remaining things happened as you\nexplained, basically it restores the history file and changes the\nrecoveryTargetTLI but it doesn't get the WAL file from the archive.\nSo try to stream checkpoint record from the primary using\n\"ControlFile->checkPointCopy.ThisTimeLineID\", which is old timeline.\n\nNow, we may ask that if the WAL segment with old TL on standby1(new\nprimary) which contains the checkpoint is already renamed to\n\".partial\" then how can it stream using the old TL then the answer is\nbelow code[1] in the walsender. Basically, the checkpoint record is\npresent in both new and old TL as TL switched in the middle of the\nsegment, it will send you the record from the new TL even if the\nwalreciever asks to stream with old TL. Now walrecievr is under\nimpression that it has read from the old TL. 
And, we know the rest of\nthe story that we will set the expectedTLEs based on the old history\nfile and never be able to go to the new TL.\n\nAnyways now we understand the issue and there are many ways we can\nreproduce it. Still, I thought of explaining the exact steps how it\nhapped for me because now we understand it well so I think it is easy\nto explain :)\n\n[1]\nWalSndSegmentOpen()\n{\n/*-------\n* When reading from a historic timeline, and there is a timeline switch\n* within this segment, read from the WAL segment belonging to the new\n* timeline.\n}\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 22 May 2021 10:15:02 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Sat, May 22, 2021 at 10:15 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, May 22, 2021 at 1:14 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > The attached test script, test.sh seems to reliably reproduce this.\n> > Put that file and the recalcitrant_cp script, also attached, into an\n>\n> I haven't tested this, but I will do that. But now we are on the same\n> page about the cause of the actual problem I reported.\n\nNow, I have tested. I am able to reproduce the actual problem with your script.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Sat, May 22, 2021 at 10:15 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, May 22, 2021 at 1:14 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Fri, May 21, 2021 at 12:52 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > I had trouble following it completely, but I didn't really spot\n> > > anything that seemed definitely wrong. However, I don't understand\n> > > what it has to do with where we are now. 
What I want to understand is:\n> > > under exactly what circumstances does it matter that\n> > > WaitForWALToBecomeAvailable(), when currentSource == XLOG_FROM_STREAM,\n> > > will stream from receiveTLI rather than recoveryTargetTLI?\n> >\n> > Ah ha! I think I figured it out. To hit this bug, you need to meet the\n> > following conditions:\n> >\n> > 1. Both streaming and archiving have to be configured.\n> > 2. You have to promote a new primary.\n> > 3. After promoting the new primary you have to start a new standby\n> > that doesn't have local WAL and for which the backup was taken from\n> > the previous timeline. In Dilip's original scenario, this new standby\n> > is actually the old primary, but that's not required.\n>\n> No, in my original scenario also the new standby was not old primary,\n> I had 3 nodes\n> node1-> primary, node2 -> standby1, node3-> standby2\n> node2 promoted as a new primary and node3's local WAL was removed (so\n> that it has to stream checkpoint record from new primary and then\n> remaining everything happens as you explain in remaining steps).\n>\n> > 4. The new standby has to be able to find the history file it needs in\n> > the archive but not the WAL files.\n> > 5. The new standby needs to have recovery_target_timeline='latest'\n> > (which is the default)\n> >\n> > When you start the new standby, it will fetch the current TLI from its\n> > control file. Then, since recovery_target_timeline=latest, the system\n> > will try to figure out the latest timeline, which only works because\n> > archiving is configured. There seems to be no provision for detecting\n> > the latest timeline via streaming. 
With archiving enabled, though,\n> > findNewestTimeLine() will be able to restore the history file created\n> > by the promotion of the new primary, which will cause\n> > validateRecoveryParameters() to change recoveryTargetTLI.\n>\n> Right\n>\n> Then we'll\n> > try to read the WAL segment containing the checkpoint record and fail\n> > because, by stipulation, only history files are available from the\n> > archive. Now, because streaming is also configured, we'll try\n> > streaming. That will work, so we'll be able to read the checkpoint\n> > record, but now, because WaitForWALToBecomeAvailable() initialized\n> > expectedTLEs using receiveTLI instead of recoveryTargetTLI, we can't\n> > switch to the correct timeline and it all goes wrong.\n>\n> exactly\n>\n> > The attached test script, test.sh seems to reliably reproduce this.\n> > Put that file and the recalcitrant_cp script, also attached, into an\n> > empty directory, cd to that directory, and run test.sh. Afterwards\n> > examine pgcascade.log. Basically, these scripts just set up the\n> > scenario described above. We set up primary and a standby that use\n> > recalcitrant_cp as the archive command, and because it's recalcitrant,\n> > it's only willing to copy history files, and always fails for WAL\n> > files.Then we create a cascading standby by taking a base backup from\n> > the standby, but before actually starting it, we promote the original\n> > standby. So now it meets all the conditions described above. I tried a\n> > couple variants of this test. If I switch the archive command from\n> > recalcitrant_cp to just regular cp, then there's no problem. And if I\n> > switch it to something that always fails, then there's also no\n> > problem. That's because, with either of those changes, condition (4)\n> > above is no longer met. In the first case, both files end up in the\n> > archive, and in the second case, neither file.\n>\n> I haven't tested this, but I will do that. 
But now we are on the same\n> page about the cause of the actual problem I reported.\n>\n> > What about hitting this in real life, with a real archive command?\n> > Well, you'd probably need the archive command to be kind of slow and\n> > get unlucky on the timing, but there's nothing to prevent it from\n> > happening.\n>\n> Right\n>\n> > But, it will be WAY more likely if you have Dilip's original scenario,\n> > where you try to repurpose an old primary as a standby. It would\n> > normally be unlikely that the backup used to create a new standby\n> > would have an older TLI, because you typically wouldn't switch masters\n> > in between taking a base backup and using it to create a new standby.\n> > But the old master always has an older TLI. So (3) is satisfied. For\n> > (4) to be satisfied, you need the old master to fail to archive all of\n> > its WAL when it shuts down.\n>\n> For my original case, both standby1 and standby2 are connected to the\n> primary. Now, standby1 is promoted and standby2 is shut down. And,\n> before restarting, all the local WAL of the standby2 is removed so\n> that it can follow the new primary. 
The primary info and restore\n> command for standby2 are changed as per the new primary(standby1).\n>\n> Now the scenario is that the standby1 has switched the timeline in the\n> middle of the segment which contains the checkpoint record, so the\n> segment with old TL is renamed to (.partial) and the same segment with\n> new TL is not yet archived but the history file for the new TL has\n> been archived.\n>\n> Now, when standby2 restart the remaining things happened as you\n> explained, basically it restores the history file and changes the\n> recoveryTargetTLI but it doesn't get the WAL file from the archive.\n> So try to stream checkpoint record from the primary using\n> \"ControlFile->checkPointCopy.ThisTimeLineID\", which is old timeline.\n>\n> Now, we may ask that if the WAL segment with old TL on standby1(new\n> primary) which contains the checkpoint is already renamed to\n> \".partial\" then how can it stream using the old TL then the answer is\n> below code[1] in the walsender. Basically, the checkpoint record is\n> present in both new and old TL as TL switched in the middle of the\n> segment, it will send you the record from the new TL even if the\n> walreciever asks to stream with old TL. Now walrecievr is under\n> impression that it has read from the old TL. And, we know the rest of\n> the story that we will set the expectedTLEs based on the old history\n> file and never be able to go to the new TL.\n>\n> Anyways now we understand the issue and there are many ways we can\n> reproduce it. 
Still, I thought of explaining the exact steps how it\n> happed for me because now we understand it well so I think it is easy\n> to explain :)\n>\n> [1]\n> WalSndSegmentOpen()\n> {\n> /*-------\n> * When reading from a historic timeline, and there is a timeline switch\n> * within this segment, read from the WAL segment belonging to the new\n> * timeline.\n> }\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 22 May 2021 12:10:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Sat, May 22, 2021 at 12:45 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> No, in my original scenario also the new standby was not old primary,\n> I had 3 nodes\n> node1-> primary, node2 -> standby1, node3-> standby2\n> node2 promoted as a new primary and node3's local WAL was removed (so\n> that it has to stream checkpoint record from new primary and then\n> remaining everything happens as you explain in remaining steps).\n\nOh, OK. I misunderstood. I think it could happen that way, though.\n\n> I haven't tested this, but I will do that. But now we are on the same\n> page about the cause of the actual problem I reported.\n\nYeah, sorry, I just didn't understand the exact chain of events before.\n\n> For my original case, both standby1 and standby2 are connected to the\n> primary. Now, standby1 is promoted and standby2 is shut down. And,\n> before restarting, all the local WAL of the standby2 is removed so\n> that it can follow the new primary. The primary info and restore\n> command for standby2 are changed as per the new primary(standby1).\n\nOne thing I don't understand is why the final WAL segment from the\noriginal primary didn't end up in the archive in this scenario. 
If it\nhad, then we would not have seen the issue in that case.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 22 May 2021 11:03:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Sat, May 22, 2021 at 8:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> > For my original case, both standby1 and standby2 are connected to the\n> > primary. Now, standby1 is promoted and standby2 is shut down. And,\n> > before restarting, all the local WAL of the standby2 is removed so\n> > that it can follow the new primary. The primary info and restore\n> > command for standby2 are changed as per the new primary (standby1).\n>\n> One thing I don't understand is why the final WAL segment from the\n> original primary didn't end up in the archive in this scenario. If it\n> had, then we would not have seen the issue in that case.\n\nI used different archive folders for the primary and the new\nprimary (standby1). I have modified your test.sh slightly (modified\ntest2.sh attached) so that I can demonstrate my scenario where I was\nseeing the issue, and it gets fixed after applying the fix we\ndiscussed [1].\n\n[1]\n- expectedTLEs = readTimeLineHistory(receiveTLI);\n+ expectedTLEs = readTimeLineHistory(recoveryTargetTLI);\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 23 May 2021 14:19:18 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Sun, May 23, 2021 at 2:19 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, May 22, 2021 at 8:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\nI have created a TAP test based on Robert's test.sh script. It\nreproduces the issue. 
I am new to Perl, so this still needs some\ncleanup/improvement, but at least it shows the idea.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 23 May 2021 21:37:58 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" 
In this case\n we expecte that checkpint TLI is in the history of\n recoveryTargetTLI. Otherwise recovery failse^h. This case is similar\n to the case (a) but the relationship between recoveryTargetTLI and\n the checkpoint TLI is not confirmed yet. ReadRecord barks later if\n they are not compatible so there's not a serious problem but might\n be better checking the relation ship there. My first proposal\n performed mutual check between the two but we need to check only\n unidirectionally.\n\n===\nSo the condition for the Dilip's case is, as you wrote in another mail:\n\n- ControlFile->checkPointCopy.ThisTimeLineID is in the older timeline.\n- Archive or pg_wal offers the history file for the newer timeline.\n- The segment for the checkpoint is not found in pg_wal nor in archive.\n\nThat is,\n\n- A grandchild(c) node is stopped\n- Then the child node(b) is promoted.\n\n- Clear pg_wal directory of (c) then connect it to (b) *before* (b)\n archives the segment for the newer timeline of the\n timeline-switching segments. (if we have switched at segment 3,\n TLI=1, the segment file of the older timeline is renamed to\n .partial, then create the same segment for TLI=2. 
The former is\n archived while promotion is performed but the latter won't be\n archive until the segment ends.)\n\n\nThe orinal case of after the commit ee994272ca,\n\n- recoveryTargetTimeLine is initialized with\n ControlFile->checkPointCopy.ThisTimeLineID\n\n(X) (Before the commit, we created the one-entry expectedTLEs consists\n only of ControlFile->checkPointCopy.ThisTimeLineID.)\n\n- readRecoveryCommandFile():\n\n Move recoveryTargetTLI forward to the specified target timline if\n the history file for the timeline is found, or in the case of\n latest, move it forward up to the maximum timeline among the history\n files found in either pg_wal or archive.\n\n- ReadRecord...XLogFileReadAnyTLI\n\n Tries to load the history file for recoveryTargetTLI either from\n pg_wal or archive onto local TLE list, if the history file is not\n found, use a generateed list with one entry for the\n recoveryTargetTLI.\n\n (b) If such a segment is *not* found, expectedTLEs is left\n NIL. Usually recoveryTargetTLI is equal to the last checkpoint\n TLI.\n\n- WaitForWALToBecomeAvailable\n\n if we have had no segments for the last checkpoint, initiate\n streaming from the REDO point of the last checkpoint. We should have\n all history files until receiving segment data.\n\n after sufficient WAL data has been received, the only cases where\n expectedTLEs is still NIL are the (b) and (c) above.\n\n In the case of (b) recoveryTargetTLI == checkpoint TLI.\n\nSo I thought that the commit fixed this scenario. Even in this case,\nReadRecord fails because the checkpoint segment contains pages for the\nolder timeline which is not in expectedTLEs if we did (X).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 24 May 2021 11:34:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "At Sun, 23 May 2021 21:37:58 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Sun, May 23, 2021 at 2:19 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Sat, May 22, 2021 at 8:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> I have created a tap test based on Robert's test.sh script. It\n> reproduces the issue. I am new with perl so this still needs some\n> cleanup/improvement, but at least it shows the idea.\n\nI'm not sure I'm following the discussion here, however, if we were\ntrying to reproduce Dilip's case using base backup, we would need such\na broken archive command if using pg_basebackup witn -Xnone. Becuase\nthe current version of pg_basebackup waits for all required WAL\nsegments to be archived when connecting to a standby with -Xnone. I\ndon't bother reconfirming the version that fix took place, but just\nusing -X stream instead of \"none\" we successfully miss the first\nsegment of the new timeline in the upstream archive, though we need to\nerase pg_wal in the backup. 
Either the broken archive command or\nerasing pg_wal of the cascade is required for the behavior to occur.\n\nThe attached is how it looks.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n# Copyright (c) 2021, PostgreSQL Global Development Group\n\n# Minimal test testing streaming replication\nuse Cwd;\nuse strict;\nuse warnings;\nuse PostgresNode;\nuse TestLib;\nuse Test::More tests => 1;\n\n# Initialize primary node\nmy $node_primary = get_new_node('primary');\n# A specific role is created to perform some tests related to replication,\n# and it needs proper authentication configuration.\n$node_primary->init(allows_streaming => 1);\n$node_primary->append_conf(\n\t'postgresql.conf', qq(\nwal_keep_size=128MB\n));\n$node_primary->start;\n\nmy $backup_name = 'my_backup';\n\n# Take backup\n$node_primary->backup($backup_name);\n\nmy $node_standby_1 = get_new_node('standby_1');\n$node_standby_1->init_from_backup($node_primary, $backup_name,\n\t\t\t\t\t\t\t\t allows_streaming => 1, has_streaming => 1);\nmy $archivedir_standby_1 = $node_standby_1->archive_dir;\n$node_standby_1->append_conf(\n\t'postgresql.conf', qq(\narchive_mode=always\narchive_command='cp \"%p\" \"$archivedir_standby_1/%f\"'\n));\n$node_standby_1->start;\n\n\n# Take backup of standby 1\n# NB: Use -Xnone so that pg_wal is empty.\n#$node_standby_1->backup($backup_name, backup_options => ['-Xnone']);\n$node_standby_1->backup($backup_name);\n\n# Promote the standby.\n$node_standby_1->psql('postgres', 'SELECT pg_promote()');\n\n# clean up pg_wal from the backup\nmy $pgwaldir = $node_standby_1->backup_dir. \"/\" . $backup_name . 
\"/pg_wal\";\nopendir my $dh, $pgwaldir or die \"failed to open $pgwaldir\";\nwhile (my $f = readdir($dh))\n{\n\tunlink(\"$pgwaldir/$f\") if (-f \"$pgwaldir/$f\");\n}\nclosedir($dh);\n\n# Create cascading standby but don't start it yet.\n# NB: Must set up both streaming and archiving.\nmy $node_cascade = get_new_node('cascade');\n$node_cascade->init_from_backup($node_standby_1, $backup_name,\n\thas_streaming => 1);\n$node_cascade->append_conf(\n\t'postgresql.conf', qq(\nrestore_command = 'cp \"$archivedir_standby_1/%f\" \"%p\"'\nlog_line_prefix = '%m [%p:%b] %q%a '\narchive_mode=off\n));\n\n\n# Start cascade node\n$node_cascade->start;\n\n# Create some content on primary and check its presence in standby 1\n$node_standby_1->safe_psql('postgres',\n\t\"CREATE TABLE tab_int AS SELECT 1 AS a\");\n\n# Wait for standbys to catch up\n$node_standby_1->wait_for_catchup($node_cascade, 'replay',\n\t$node_standby_1->lsn('replay'));\n\nok(1, 'test'); # it's sucess if we come here.", "msg_date": "Mon, 24 May 2021 13:47:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Mon, May 24, 2021 at 10:17 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sun, 23 May 2021 21:37:58 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Sun, May 23, 2021 at 2:19 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Sat, May 22, 2021 at 8:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > I have created a tap test based on Robert's test.sh script. It\n> > reproduces the issue. I am new with perl so this still needs some\n> > cleanup/improvement, but at least it shows the idea.\n>\n> I'm not sure I'm following the discussion here, however, if we were\n> trying to reproduce Dilip's case using base backup, we would need such\n> a broken archive command if using pg_basebackup witn -Xnone. 
Because\n> the current version of pg_basebackup waits for all required WAL\n> segments to be archived when connecting to a standby with -Xnone.\n\nRight, that's the reason: if you look at my patch, I have dynamically\ngenerated such an archive command, which skips everything other than the\nhistory file. See the snippet below from my patch, where I generate a skip_cp\ncommand and then use it as the archive command.\n\n==\n+# Prepare a alternative archive command to skip WAL files\n+my $script = \"#!/usr/bin/perl \\n\n+use File::Copy; \\n\n+my (\\$source, \\$target) = \\@ARGV; \\n\n+if (\\$source =~ /history/) \\n\n+{ \\n\n+ copy(\\$source, \\$target); \\n\n+}\";\n+\n+open my $fh, '>', \"skip_cp\";\n+print {$fh} $script;\n===\n\n I\n> didn't bother reconfirming the version where that fix took place, but just\n> using -X stream instead of \"none\" we successfully miss the first\n> segment of the new timeline in the upstream archive, though we need to\n> erase pg_wal in the backup. Either the broken archive command or\n> erasing pg_wal of the cascade is required for the behavior to occur.\n>\n> Attached is how it looks.\n\nI will test this and let you know. Thanks!\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 May 2021 10:34:36 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Sun, May 23, 2021 at 12:08 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have created a tap test based on Robert's test.sh script. It\n> reproduces the issue. I am new with perl so this still needs some\n> cleanup/improvement, but at least it shows the idea.\n\nThanks. I think this is the right idea but just needs a few adjustments.\n\nI don't think that dynamically writing out a file into the current\nworking directory of the script is the right approach. 
Instead I think\nwe should be planning to check this file into the repository and then\nhave the test script find it. Now the trick is how to do that in a\nportable way. I think we can probably use the same idea that the\npg_rewind tests use to find a perl module located in the test\ndirectory. That is:\n\nuse FindBin;\n\nand then use $FindBin::RealBin to construct a path name to the executable, e.g.\n\n$node_primary->append_conf(\n 'postgresql.conf', qq(\narchive_command = '\"$FindBin::RealBin/skip_cp\" \"%p\" \"$archivedir_primary/%f\"'\n));\n\nThis avoids issues such as: leaving behind files if the script is\nterminated, needing the current working directory to be writable,\npossible permissions issues with the new file under Windows or\nSE-Linux.\n\nThe restore_command needs to be \"cp\" on Linux but \"copy\" on Windows.\nMaybe you can use PostgresNode.pm's enable_restoring? Or if that\ndoesn't work, then you need to mimic the logic, as\nsrc/test/recovery/t/020_archive_status.pl does for archive_command.\n\nWhy do you set log_line_prefix? Is that needed?\n\nWhy are the nodes called standby_1 and cascade? Either use standby and\ncascade or standby_1 and standby_2.\n\nThere is a comment that says \"Create some content on primary and check\nits presence in standby 1\" but it only creates the content, and does\nnot check anything. I think we don't really need to do any of this,\nbut at least the code and the comment have to match.\n\nLet's not call the command skip_cp. It's not very descriptive. If you\ndon't like recalcitrant_cp, then maybe something like cp_history_files\nor so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 May 2021 11:46:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "On Tue, May 25, 2021 at 9:16 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> use FindBin;\n>\n> and then use $FindBin::RealBin to construct a path name to the executable, e.g.\n>\n> $node_primary->append_conf(\n> 'postgresql.conf', qq(\n> archive_command = '\"$FindBin::RealBin/skip_cp\" \"%p\" \"$archivedir_primary/%f\"'\n> ));\n>\n> This avoids issues such as: leaving behind files if the script is\n> terminated, needing the current working directory to be writable,\n> possible permissions issues with the new file under Windows or\n> SE-Linux.\n\nDone\n\n> The restore_command needs to be \"cp\" on Linux but \"copy\" on Windows.\n> Maybe you can use PostgresNode.pm's enable_restoring? Or if that\n> doesn't work, then you need to mimic the logic, as\n> src/test/recovery/t/020_archive_status.pl does for archive_command.\n\nDone\n\n> Why do you set log_line_prefix? Is that needed?\n\nNo, it was not, removed\n\n> Why are the nodes called standby_1 and cascade? Either use standby and\n> cascade or standby_1 and standby_2.\n\nFixed\n\n> There is a comment that says \"Create some content on primary and check\n> its presence in standby 1\" but it only creates the content, and does\n> not check anything. I think we don't really need to do any of this,\n> but at least the code and the comment have to match.\n\nI think we need to create some content on promoted standby and check\nwhether the cascade standby is able to get that or not, that will\nguarantee that it is actually following the promoted standby, I have\nadded the test for that so that it matches the comments.\n\n> Let's not call the command skip_cp. It's not very descriptive. 
If you\n> don't like recalcitrant_cp, then maybe something like cp_history_files\n> or so.\n\nDone\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 26 May 2021 12:14:37 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Wed, May 26, 2021 at 2:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I think we need to create some content on promoted standby and check\n> whether the cascade standby is able to get that or not, that will\n> guarantee that it is actually following the promoted standby, I have\n> added the test for that so that it matches the comments.\n\nOK, but I ran this test against an unpatched server and it passed.\nThat's not so great, because the test should fail without the bug fix.\nIt seems to be because there's only one actual test in this test case.\nLooking at the log file,\nsrc/test/recovery/tmp_check/log/regress_log_025_timeline_issue, the\nonly \"ok\" nor \"not ok\" line is:\n\nok 1 - check streamed content on cascade standby\n\nSo either that test is not right or some other test is needed. 
I think\nthere's something else going wrong here, because when I run my\noriginal test.sh script, I see this:\n\n2021-05-26 11:37:47.794 EDT [57961] LOG: restored log file\n\"00000002.history\" from archive\n...\n2021-05-26 11:37:47.916 EDT [57961] LOG: redo starts at 0/2000028\n...\n2021-05-26 11:37:47.927 EDT [57966] LOG: replication terminated by\nprimary server\n2021-05-26 11:37:47.927 EDT [57966] DETAIL: End of WAL reached on\ntimeline 1 at 0/3000000\n\nBut in the src/test/recovery/tmp_check/log/025_timeline_issue_cascade.log\nfile generated by this test case:\n\ncp: /Users/rhaas/pgsql/src/test/recovery/tmp_check/t_025_timeline_issue_primary_data/archives/00000002.history:\nNo such file or directory\n...\n2021-05-26 11:41:08.149 EDT [63347] LOG: fetching timeline history\nfile for timeline 2 from primary server\n...\n2021-05-26 11:41:08.288 EDT [63344] LOG: new target timeline is 2\n...\n2021-05-26 11:41:08.303 EDT [63344] LOG: redo starts at 0/2000028\n...\n2021-05-26 11:41:08.331 EDT [63347] LOG: restarted WAL streaming at\n0/3000000 on timeline 2\n\nSo it doesn't seem like the test is actually reproducing the problem\ncorrectly. The timeline history file isn't available from the archive,\nso it streams it, and then the problem doesn't occur. I guess that's\nbecause there's nothing to guarantee that the history file reaches the\narchive before 'cascade' is started. The code just does:\n\n# Promote the standby.\n$node_standby->psql('postgres', 'SELECT pg_promote()');\n\n# Start cascade node\n$node_cascade->start;\n\n...which has a clear race condition.\nsrc/test/recovery/t/023_pitr_prepared_xact.pl has logic to wait for a\nWAL file to be archived, so maybe we can steal that logic and use it\nhere.\n\nI suggest we rename the test to something a bit more descriptive. Like\ninstead of 025_timeline_issue.pl, perhaps\n025_stuck_on_old_timeline.pl? 
Or I'm open to other suggestions, but\n\"timeline issue\" is a bit too vague for my taste.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 May 2021 12:10:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Wed, May 26, 2021 at 9:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, May 26, 2021 at 2:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I think we need to create some content on promoted standby and check\n> > whether the cascade standby is able to get that or not, that will\n> > guarantee that it is actually following the promoted standby, I have\n> > added the test for that so that it matches the comments.\n>\n> OK, but I ran this test against an unpatched server and it passed.\n> That's not so great, because the test should fail without the bug fix.\n> It seems to be because there's only one actual test in this test case.\n> Looking at the log file,\n> src/test/recovery/tmp_check/log/regress_log_025_timeline_issue, the\n> only \"ok\" nor \"not ok\" line is:\n>\n> ok 1 - check streamed content on cascade standby\n>\n> So either that test is not right or some other test is needed. 
I think\n> there's something else going wrong here, because when I run my\n> original test.sh script, I see this:\n\nThats strange, when I ran the test I can see below in log of cascade\nnode (which shows that cascade get the history file but not the WAL\nfile and then it select the old timeline and never go to the new\ntimeline)\n\n...\n2021-05-26 21:46:54.412 IST [84080] LOG: restored log file\n\"00000002.history\" from archive\ncp: cannot stat\n‘/home/dilipkumar/work/PG/postgresql/src/test/recovery/tmp_check/t_025_timeline_issue_primary_data/archives/00000003.history’:\nNo such file or directory\n2021-05-26 21:46:54.415 IST [84080] LOG: entering standby mode\n2021-05-26 21:46:54.419 IST [84080] LOG: restored log file\n\"00000002.history\" from archive\n.....\n2021-05-26 21:46:54.429 IST [84085] LOG: started streaming WAL from\nprimary at 0/2000000 on timeline 1 -> stream using previous TL\n2021-05-26 21:46:54.466 IST [84080] LOG: redo starts at 0/2000028\n2021-05-26 21:46:54.466 IST [84080] LOG: consistent recovery state\nreached at 0/3000000\n2021-05-26 21:46:54.467 IST [84079] LOG: database system is ready to\naccept read only connections\n2021-05-26 21:46:54.483 IST [84085] LOG: replication terminated by\nprimary server\n2021-05-26 21:46:54.483 IST [84085] DETAIL: End of WAL reached on\ntimeline 1 at 0/3000000.\ncp: cannot stat\n‘/home/dilipkumar/work/PG/postgresql/src/test/recovery/tmp_check/t_025_timeline_issue_primary_data/archives/00000003.history’:\nNo such file or directory\ncp: cannot stat\n‘/home/dilipkumar/work/PG/postgresql/src/test/recovery/tmp_check/t_025_timeline_issue_primary_data/archives/000000010000000000000003’:\nNo such file or directory\n2021-05-26 21:46:54.498 IST [84085] LOG: primary server contains no\nmore WAL on requested timeline 1\n\n<failure continues as it never go to timeline 2>\n\nAnd finally the test case fails because the cascade can never get the changes.\n\nI will check if there is any timing dependency in the test 
case.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 May 2021 21:56:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Wed, May 26, 2021 at 12:26 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I will check if there is any timing dependency in the test case.\n\nThere is. I explained it in the second part of my email, which you may\nhave failed to notice.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 May 2021 12:35:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Wed, 26 May 2021 at 10:06 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, May 26, 2021 at 12:26 PM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> > I will check if there is any timing dependency in the test case.\n>\n> There is. I explained it in the second part of my email, which you may\n> have failed to notice.\n\n\nSorry, my bad. I got your point now. I will change the test.\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Wed, 26 May 2021 at 10:06 PM, Robert Haas <robertmhaas@gmail.com> wrote:On Wed, May 26, 2021 at 12:26 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I will check if there is any timing dependency in the test case.\n\nThere is. I explained it in the second part of my email, which you may\nhave failed to notice.Sorry, my bad.  I got your point now.  I will change the test.-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 26 May 2021 22:08:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "At Wed, 26 May 2021 22:08:32 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Wed, 26 May 2021 at 10:06 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> > On Wed, May 26, 2021 at 12:26 PM Dilip Kumar <dilipbalaut@gmail.com>\n> > wrote:\n> > > I will check if there is any timing dependency in the test case.\n> >\n> > There is. I explained it in the second part of my email, which you may\n> > have failed to notice.\n> \n> \n> Sorry, my bad. I got your point now. I will change the test.\n\nI didn't noticed that but that is actually possible to happen.\n\n\nBy the way I'm having a hard time understanding what was happening on\nthis thread.\n\nIn the very early in this thread I posted a test script that exactly\nreproduces Dilip's case by starting from two standbys based on his\nexplanation. But *we* didn't understand what the original commit\nee994272ca intended and I understood that we wanted to know it.\n\nSo in the mail [1] and [2] I tried to describe what's going on around\nthe two issues. Although I haven't have a response to [2], can I\nthink that we clarified the intention of ee994272ca? And may I think\nthat we decided that we don't add a test for the commit?\n\nThen it seems to me that Robert refound how to reproduce Dilip's case\nusing basebackup instead of using two standbys. It is using a broken\narchive_command with pg_basebackup -Xnone and I showed that the same\nresulting state is available by pg_basebackup -Xstream/fetch clearing\npg_wal directory of the resulting backup including an explanation of\nwhy.\n\n*I* think that it is better to avoid to have the archive_command since\nit seems to me that just unlinking some files seems simpler tha having\nthe broken archive_command. 
However, since Robert ignored it, I guess\nthat Robert thinks that the broken archive_command is better than\nthat.\n\nIt my understanding above about the current status of this thread is\nright?\n\n\nFWIW, regarding to the name of the test script, putting aside what it\nactually does, I proposed to place it as a part or\n004_timeline_switch.pl because this issue is related to timeline\nswitching.\n\n\n[1] 20210521.112105.27166595366072396.horikyota.ntt@gmail.com\n[2] https://www.postgresql.org/message-id/20210524.113402.1922481024406047229.horikyota.ntt@gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 27 May 2021 09:49:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Thu, May 27, 2021 at 6:19 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 26 May 2021 22:08:32 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Wed, 26 May 2021 at 10:06 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > > On Wed, May 26, 2021 at 12:26 PM Dilip Kumar <dilipbalaut@gmail.com>\n> > > wrote:\n> > > > I will check if there is any timing dependency in the test case.\n> > >\n> > > There is. I explained it in the second part of my email, which you may\n> > > have failed to notice.\n> >\n> >\n> > Sorry, my bad. I got your point now. I will change the test.\n>\n> I didn't noticed that but that is actually possible to happen.\n>\n>\n> By the way I'm having a hard time understanding what was happening on\n> this thread.\n>\n> In the very early in this thread I posted a test script that exactly\n> reproduces Dilip's case by starting from two standbys based on his\n> explanation. 
But *we* didn't understand what the original commit\nee994272ca intended and I understood that we wanted to know it.\n\nSo in the mail [1] and [2] I tried to describe what's going on around\nthe two issues. Although I haven't had a response to [2], can I\nthink that we clarified the intention of ee994272ca? And may I think\nthat we decided that we don't add a test for the commit?\n\nThen it seems to me that Robert refound how to reproduce Dilip's case\nusing basebackup instead of using two standbys. It is using a broken\narchive_command with pg_basebackup -Xnone, and I showed that the same\nresulting state is available by pg_basebackup -Xstream/fetch plus clearing\nthe pg_wal directory of the resulting backup, including an explanation of\nwhy.\n\n*I* think that it is better to avoid having the archive_command, since\nit seems to me that just unlinking some files is simpler than having\nthe broken archive_command. 
Am I missing something?\n\n--standby-1--\n2021-05-27 10:45:35.866 IST [5096] LOG: last completed transaction\nwas at log time 2021-05-27 10:45:35.699316+05:30\n2021-05-27 10:45:35.867 IST [5096] LOG: selected new timeline ID: 2\n2021-05-27 10:45:35.882 IST [5096] LOG: archive recovery complete\n2021-05-27 10:45:35.911 IST [5095] LOG: database system is ready to\naccept connections\n2021-05-27 10:45:36.096 IST [5134] standby_2 LOG: received\nreplication command: IDENTIFY_SYSTEM\n2021-05-27 10:45:36.096 IST [5134] standby_2 STATEMENT: IDENTIFY_SYSTEM\n\n--standby-2--\n2021-05-27 10:45:36.089 IST [5129] LOG: entering standby mode\n2021-05-27 10:45:36.090 IST [5129] LOG: redo starts at 0/2000028\n2021-05-27 10:45:36.092 IST [5129] LOG: consistent recovery state\nreached at 0/3030320\n2021-05-27 10:45:36.092 IST [5129] LOG: invalid record length at\n0/3030320: wanted 24, got 0\n2021-05-27 10:45:36.092 IST [5128] LOG: database system is ready to\naccept read only connections\n2021-05-27 10:45:36.096 IST [5133] LOG: fetching timeline history\nfile for timeline 2 from primary server\n2021-05-27 10:45:36.097 IST [5133] LOG: started streaming WAL from\nprimary at 0/3000000 on timeline 1\n2021-05-27 10:45:36.098 IST [5133] LOG: replication terminated by\nprimary server\n2021-05-27 10:45:36.098 IST [5133] DETAIL: End of WAL reached on\ntimeline 1 at 0/3030320.\n2021-05-27 10:45:36.098 IST [5129] LOG: new target timeline is 2\n2021-05-27 10:45:36.098 IST [5133] LOG: restarted WAL streaming at\n0/3000000 on timeline 2\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 May 2021 11:44:47 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "On Wed, May 26, 2021 at 9:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> ...which has a clear race condition.\n> src/test/recovery/t/023_pitr_prepared_xact.pl has logic to wait for a\n> WAL file to be archived, so maybe we can steal that logic and use it\n> here.\n\nYeah, done that, I think we can use exact same logic for history files\nas well because if wal file is archived then history file must be\nbecause a) history file get created during promotion so created before\nWAL file with new TL is ready for archive b) Archiver archive history\nfiles before archiving any WAL files.\n\nsrc/test/recovery/t/025_stuck_on_old_timeline.pl\n\n> I suggest we rename the test to something a bit more descriptive. Like\n> instead of 025_timeline_issue.pl, perhaps\n> 025_stuck_on_old_timeline.pl? Or I'm open to other suggestions, but\n> \"timeline issue\" is a bit too vague for my taste.\n\nChanged as suggested.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 27 May 2021 11:56:08 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Thu, 27 May 2021 11:44:47 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> Maybe we can somehow achieve that without a broken archive command,\n> but I am not sure how it is enough to just delete WAL from pg_wal? I\n> mean my original case was that\n> 1. Got the new history file from the archive but did not get the WAL\n> file yet which contains the checkpoint after TL switch\n> 2. 
So the standby2 try to stream using new primary using old TL and\n> set the wrong TL in expectedTLEs\n> \n> But if you are not doing anything to stop archiving WAL files or to\n> guarantee that WAL has come to archive and you deleted those then I am\n> not sure how we are reproducing the original problem.\n\nThanks for the reply!\n\nWe're writing at the very beginning of the switching segment at the\npromotion time. So it is guaranteed that the first segment of the\nnewer timline won't be archived until the rest almost 16MB in the\nsegment is consumed or someone explicitly causes a segment switch\n(including archive timeout).\n\n> BTW, I have also tested your script and I found below log, which shows\n> that standby2 is successfully able to select the timeline2 so it is\n> not reproducing the issue. Am I missing something?\n\nstandby_2? My last one 026_timeline_issue_2.pl doesn't use that name\nand uses \"standby_1 and \"cascade\". In the ealier ones, standby_4 and\n5 (or 3 and 4 in the later versions) are used in ths additional tests.\n\nSo I think it shold be something different?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 27 May 2021 15:39:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Thu, May 27, 2021 at 12:09 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 27 May 2021 11:44:47 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > Maybe we can somehow achieve that without a broken archive command,\n> > but I am not sure how it is enough to just delete WAL from pg_wal? I\n> > mean my original case was that\n> > 1. Got the new history file from the archive but did not get the WAL\n> > file yet which contains the checkpoint after TL switch\n> > 2. 
So the standby2 try to stream using new primary using old TL and\n> > set the wrong TL in expectedTLEs\n> >\n> > But if you are not doing anything to stop archiving WAL files or to\n> > guarantee that WAL has come to archive and you deleted those then I am\n> > not sure how we are reproducing the original problem.\n>\n> Thanks for the reply!\n>\n> We're writing at the very beginning of the switching segment at the\n> promotion time. So it is guaranteed that the first segment of the\n> newer timline won't be archived until the rest almost 16MB in the\n> segment is consumed or someone explicitly causes a segment switch\n> (including archive timeout).\n\nI agree\n\n> > BTW, I have also tested your script and I found below log, which shows\n> > that standby2 is successfully able to select the timeline2 so it is\n> > not reproducing the issue. Am I missing something?\n>\n> standby_2? My last one 026_timeline_issue_2.pl doesn't use that name\n> and uses \"standby_1 and \"cascade\". In the ealier ones, standby_4 and\n> 5 (or 3 and 4 in the later versions) are used in ths additional tests.\n>\n> So I think it shold be something different?\n\nYeah, I tested with your patch where you had a different test case,\nwith \"026_timeline_issue_2.pl\", I am able to reproduce the issue.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 May 2021 12:47:30 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Thu, 27 May 2021 12:47:30 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Thu, May 27, 2021 at 12:09 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 27 May 2021 11:44:47 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > We're writing at the very beginning of the switching segment at the\n> > promotion time. 
So it is guaranteed that the first segment of the\n> > newer timline won't be archived until the rest almost 16MB in the\n> > segment is consumed or someone explicitly causes a segment switch\n> > (including archive timeout).\n> \n> I agree\n>\n> > > BTW, I have also tested your script and I found below log, which shows\n> > > that standby2 is successfully able to select the timeline2 so it is\n> > > not reproducing the issue. Am I missing something?\n> >\n> > standby_2? My last one 026_timeline_issue_2.pl doesn't use that name\n> > and uses \"standby_1 and \"cascade\". In the ealier ones, standby_4 and\n> > 5 (or 3 and 4 in the later versions) are used in ths additional tests.\n> >\n> > So I think it shold be something different?\n> \n> Yeah, I tested with your patch where you had a different test case,\n> with \"026_timeline_issue_2.pl\", I am able to reproduce the issue.\n\nThat said, I don't object if we decide to choose the crafted archive\ncommand as far as we consider the trade-offs between the two ways.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 27 May 2021 16:37:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Wed, May 26, 2021 at 8:49 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> So in the mail [1] and [2] I tried to describe what's going on around\n> the two issues. Although I haven't have a response to [2], can I\n> think that we clarified the intention of ee994272ca? And may I think\n> that we decided that we don't add a test for the commit?\n\nRegarding the first question, I feel that the intention of ee994272ca\nis fairly clear at this point. 
Someone else might feel differently so\nI won't presume to speak for anyone but me.\n\nRegarding the second question, I am not opposed to adding a test for\nthat commit, but I think it is a lot more important to fix the bug we\nhave now than to add a test for a bug that was fixed a long time ago.\n\n> Then it seems to me that Robert refound how to reproduce Dilip's case\n> using basebackup instead of using two standbys. It is using a broken\n> archive_command with pg_basebackup -Xnone and I showed that the same\n> resulting state is available by pg_basebackup -Xstream/fetch clearing\n> pg_wal directory of the resulting backup including an explanation of\n> why.\n\nYes, it makes sense that we could get to the same state either by not\nfetching the WAL in the first place, or alternatively by fetching it\nand then removing it.\n\n> *I* think that it is better to avoid to have the archive_command since\n> it seems to me that just unlinking some files seems simpler tha having\n> the broken archive_command. However, since Robert ignored it, I guess\n> that Robert thinks that the broken archive_command is better than\n> that.\n\nWell ... I don't see those things as quite related. As far as I can\nsee, unlinking files from pg_wal is an alternative to using -Xnone. On\nthe other hand, the broken archive_command is there to make sure the\nnew primary doesn't archive its WAL segment too soon.\n\nRegarding the first point, I think using -Xnone is better than using\n-Xfetch/stream and then removing the WAL, because (1) it doesn't seem\nefficient to fetch WAL only to turn around and remove it and (2)\nsomeone might question whether removing the WAL afterward is a\nsupported procedure, whereas using an option built into the tool must\nsurely be supported.\n\nRegarding the second point, I think using the broken archive command\nis superior because we can be sure of the behavior. 
If we just rely on\nnot having crossed a segment boundary, then anything that causes more\nWAL to be generated than we are expecting could break the test. I\ndon't think it's particularly likely in a case like this that\nautovacuum or any other thing would kick in and generate extra WAL,\nbut the broken archive command ensures that even if it does happen,\nthe test will still work as intended. That, to me, seems like a good\nenough reason to do it that way.\n\n> FWIW, regarding to the name of the test script, putting aside what it\n> actually does, I proposed to place it as a part or\n> 004_timeline_switch.pl because this issue is related to timeline\n> switching.\n\nI think it is better to keep it separate. Long test scripts that test\nmultiple things with completely separate tests are hard to read.\n\nKyotaro-san, I hope I have not given any offense. I am doing my best,\nand certainly did not mean to be rude.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 May 2021 15:05:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Hi Horiguchi-san,\n\nIn a project I helped with, I encountered an issue where\nthe archive command kept failing. 
I thought this issue was\nrelated to the problem in this thread, so I'm sharing it here.\nIf I should create a new thread, please let me know.\n\n* Problem\n - The archive_command is failed always.\n\n* Conditions under which the problem occurs (parameters)\n - archive_mode=always\n - Using the test command in archive_command\n \n* Probable cause\n - I guess that is because the .history file already exists,\n and the test command fails.\n (but if we use archive_mode=on, archive_command is successful).\n\n* How to reproduce\n - Attached is a script to reproduce the problem.\n  Note: the script will remove $PGDATA when it started\n\nThe test command is listed as an example of the use of archive_command\nin postgresql.conf, and the project faced this problem because it used\nthe example as is. If this behavior is a specification, it would be\nbetter not to write the test command as a usage example.\nOr maybe there should be a note that the test command should not be used\nwhen archive_mode=always. Maybe, I'm missing something, sorry.\n\nRegards,\nTatsuro Yamada", "msg_date": "Fri, 28 May 2021 12:18:35 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Thanks!\n\nAt Thu, 27 May 2021 15:05:44 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Wed, May 26, 2021 at 8:49 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > So in the mail [1] and [2] I tried to describe what's going on around\n> > the two issues. Although I haven't have a response to [2], can I\n> > think that we clarified the intention of ee994272ca? And may I think\n> > that we decided that we don't add a test for the commit?\n> \n> Regarding the first question, I feel that the intention of ee994272ca\n> is fairly clear at this point. 
Someone else might feel differently so\n> I won't presume to speak for anyone but me.\n\nI completely misunderstood your intention here.\n\n> Regarding the second question, I am not opposed to adding a test for\n> that commit, but I think it is a lot more important to fix the bug we\n> have now than to add a test for a bug that was fixed a long time ago.\n\nYes. I agree to that. Glad to see that.\n\n> > Then it seems to me that Robert refound how to reproduce Dilip's case\n> > using basebackup instead of using two standbys. It is using a broken\n> > archive_command with pg_basebackup -Xnone and I showed that the same\n> > resulting state is available by pg_basebackup -Xstream/fetch clearing\n> > pg_wal directory of the resulting backup including an explanation of\n> > why.\n> \n> Yes, it makes sense that we could get to the same state either by not\n> fetching the WAL in the first place, or alternatively by fetching it\n> and then removing it.\n\nSure. That is an opinion and I can agree to that.\n\n> > *I* think that it is better to avoid to have the archive_command since\n> > it seems to me that just unlinking some files seems simpler than having\n> > the broken archive_command. However, since Robert ignored it, I guess\n> > that Robert thinks that the broken archive_command is better than\n> > that.\n> \n> Well ... I don't see those things as quite related. As far as I can\n> see, unlinking files from pg_wal is an alternative to using -Xnone. 
On\n> the other hand, the broken archive_command is there to make sure the\n> new primary doesn't archive its WAL segment too soon.\n\nI agree to use the archive_command just to create the desired state.\n\n> Regarding the first point, I think using -Xnone is better than using\n> -Xfetch/stream and then removing the WAL, because (1) it doesn't seem\n> efficient to fetch WAL only to turn around and remove it and (2)\n> someone might question whether removing the WAL afterward is a\n> supported procedure, whereas using an option built into the tool must\n> surely be supported.\n\nMmmm. That looks like meaning that we don't intend to support the\nDilip's case, and means that we support the use of\narchive-command-copies-only-other-than-wal-segments?\n\n> Regarding the second point, I think using the broken archive command\n> is superior because we can be sure of the behavior. If we just rely on\n> not having crossed a segment boundary, then anything that causes more\n> WAL to be generated than we are expecting could break the test. I\n> don't think it's particularly likely in a case like this that\n> autovacuum or any other thing would kick in and generate extra WAL,\n> but the broken archive command ensures that even if it does happen,\n> the test will still work as intended. That, to me, seems like a good\n> enough reason to do it that way.\n\nYeah. That is the most convincing reason.\n\n> > FWIW, regarding the name of the test script, putting aside what it\n> > actually does, I proposed to place it as a part of\n> > 004_timeline_switch.pl because this issue is related to timeline\n> > switching.\n> \n> I think it is better to keep it separate. Long test scripts that test\n> multiple things with completely separate tests are hard to read.\n\nAgreed. I often annoyed by a long-lasting TAP script when I wanted to\ndo one of the test items in it. 
However, I was not sure which is our\npolicy here, consolidating all related tests into one script or having\nseparate scripts containing tests up to a \"certain\" number or a set of\ntests that would take a certain time, or limiting by number the of\nlines. I thought that we are on the first way as I have told several\ntimes to put new tests into an existing script.\n\n> Kyotaro-san, I hope I have not given any offense. I am doing my best,\n> and certainly did not mean to be rude.\n\nNo. Thanks for the words, Robert. I might be a bit too naive, but I\nhad an anxious feeling that I might have been totally pointless or my\nwords might have been too cryptic/broken (my fingers are quite fat),\nor I might have done something wrong or anything other. Anyway I\nthought I might have done something wrong here.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 May 2021 15:05:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "(Sorry for being a bit off-topic)\n\nAt Fri, 28 May 2021 12:18:35 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in \n> Hi Horiguchi-san,\n\n(Why me?)\n\n> In a project I helped with, I encountered an issue where\n> the archive command kept failing. I thought this issue was\n> related to the problem in this thread, so I'm sharing it here.\n> If I should create a new thread, please let me know.\n> \n> * Problem\n> - The archive_command always fails.\n\nAlthough I think the configuration is somewhat broken, it can be seen\nas mimicking the case of shared-archive, where primary and\nstandby share the same archive directory.\n\nBasically we need to use an archive command like the following for\nthat case to avoid this kind of failure. The script returns \"success\"\nwhen the target file is found but identical to the source file. 
I\ncan't find such a description in the documentation, and haven't\nbothered digging into the mailing-list archive.\n\n==\n#! /bin/bash\n\nif [ -f $2 ]; then\n\tcmp -s $1 $2\n\tif [ $? != 0 ]; then\n\t\texit 1\n\tfi\n\texit 0\nfi\n\ncp $1 $2\n==\n\nA possibly non-optimal behavior is that both 00000002.history.done and .ready\nfiles are found at once in the archive_status directory, but that doesn't\nmatter in practice. (Or I faintly remember that it is designed to work\neven in that case.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 May 2021 16:40:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Hi Horiguchi-san,\n\n> (Why me?)\n\nBecause the story was also related to PG-REX, which you are\nalso involved in developing. Perhaps off-list instead of\n-hackers would have been better, but I emailed -hackers because\nthe same problem could be encountered by PostgreSQL users who\ndo not use PG-REX.\n\n \n>> In a project I helped with, I encountered an issue where\n>> the archive command kept failing. I thought this issue was\n>> related to the problem in this thread, so I'm sharing it here.\n>> If I should create a new thread, please let me know.\n>>\n>> * Problem\n>> - The archive_command always fails.\n\nAlthough I think the configuration is somewhat broken, it can be seen\nas mimicking the case of shared-archive, where primary and\nstandby share the same archive directory.\n\n\nTo be precise, the environment of this reproduction script is\ndifferent from our actual environment. 
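The idempotent behavior of the cmp-based archive script discussed in this thread can be checked with a small self-contained sketch; the history-file name, the payloads, and the scratch directory below are invented for the demonstration and stand in for a real WAL archive:

```shell
#!/bin/sh
# Sketch of a cmp-based archive_command: succeed when the destination
# is absent (copy it) or already identical (treat as done), and fail
# only when it exists with different content. Demo paths are invented.
archive() {
  if [ -f "$2" ]; then
    if cmp -s "$1" "$2"; then return 0; else return 1; fi
  fi
  cp "$1" "$2"
}

tmp=$(mktemp -d)
printf '1\t0/3000000\n' > "$tmp/00000002.history"

if archive "$tmp/00000002.history" "$tmp/arc"; then first=ok; else first=fail; fi
# Re-archiving the identical file succeeds, unlike 'test ! -f && cp'.
if archive "$tmp/00000002.history" "$tmp/arc"; then second=ok; else second=fail; fi
# A conflicting file under the same name is still rejected.
printf 'something else\n' > "$tmp/other"
if archive "$tmp/other" "$tmp/arc"; then third=ok; else third=fail; fi

echo "$first $second $third"
rm -rf "$tmp"
```

The three attempts print `ok ok fail`, which is the property the thread is after: duplicate archiving of an identical history file is tolerated, while a genuine conflict still stops archiving and protects the archive from corruption.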
I tried to make it as\nsimple as possible to reproduce the problem.\n(In order to make it look like the actual environment, you have\nto build a PG-REX environment.)\n\nA simple replication environment might be enough, so I'll try to\nrecreate a script that is closer to the actual environment later.\n\n \n> Basically we need to use an archive command like the following for\n> that case to avoid this kind of failure. The script returns \"success\"\n> when the target file is found but identical with the source file. I\n> don't find such a description in the documentation, and haven't\n> bothered digging into the mailing-list archive.\n> \n> ==\n> #! /bin/bash\n> \n> if [ -f $2 ]; then\n> \tcmp -s $1 $2\n> \tif [ $? != 0 ]; then\n> \t\texit 1\n> \tfi\n> \texit 0\n> fi\n> \n> cp $1 $2\n> ==\n\nThanks for your reply.\nSince the above behavior is different from the behavior of the\ntest command in the following example in postgresql.conf, I think\nwe should write a note about this example.\n\n# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'\n\nLet me describe the problem we faced.\n- When archive_mode=always, archive_command is (sometimes) executed\n in a situation where the history file already exists on the standby\n side.\n\n- In this case, if \"test ! -f\" is written in the archive_command of\n postgresql.conf on the standby side, the command will keep failing.\n\n Note that this problem does not occur when archive_mode=on.\n\nSo, what should we do for the user? I think we should put some notes\nin postgresql.conf or in the documentation. For example, something\nlike this:\n\n====\nNote: If you use archive_mode=always, the archive_command on the standby side should not be used \"test ! -f\".\n====\n\n\n\nRegards,\nTatsuro Yamada\n\n\n\n\n", "msg_date": "Mon, 31 May 2021 11:52:05 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "So, I started a thread for this topic diverged from the following\nthread.\n\nhttps://www.postgresql.org/message-id/4698027d-5c0d-098f-9a8e-8cf09e36a555@nttcom.co.jp_1\n\nAt Mon, 31 May 2021 11:52:05 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in \n> Since the above behavior is different from the behavior of the\n> test command in the following example in postgresql.conf, I think\n> we should write a note about this example.\n> \n> # e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p\n> # /mnt/server/archivedir/%f'\n>\n> Let me describe the problem we faced.\n> - When archive_mode=always, archive_command is (sometimes) executed\n> in a situation where the history file already exists on the standby\n> side.\n> \n> - In this case, if \"test ! -f\" is written in the archive_command of\n> postgresql.conf on the standby side, the command will keep failing.\n> \n> Note that this problem does not occur when archive_mode=on.\n> \n> So, what should we do for the user? I think we should put some notes\n> in postgresql.conf or in the documentation. For example, something\n> like this:\n\nI'm not sure about the exact configuration you have in mind, but that\nwould happen on the cascaded standby in the case where the upstream\npromotes. In this case, the history file for the new timeline is\narchived twice. walreceiver triggers archiving of the new history\nfile at the time of the promotion, then startup does the same when it\nrestores the file from archive. Is it what you complained about?\n\nThe same workaround using the alternative archive script works for the\ncase.\n\nWe could check pg_wal before fetching archive, however, archiving is\nnot controlled so strictly that duplicate archiving never happens and\nI think we choose possible duplicate archiving than having holes in\narchive. (so we suggest the \"test ! 
-f\" script)\n\n> ====\n> Note: If you use archive_mode=always, the archive_command on the\n> standby side should not be used \"test ! -f\".\n> ====\n\nIt could be one workaround. However, I would suggest not to overwrite\nexisting files (with a file with different content) to protect archive\nfrom corruption.\n\nWe might need to write that in the documentation...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 31 May 2021 16:58:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Duplicate history file?" }, { "msg_contents": "Moved to another thread.\n\nhttps://www.postgresql.org/message-id/20210531.165825.921389284096975508.horikyota.ntt@gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 31 May 2021 17:03:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Hi Horiguchi-san,\n\nOn 2021/05/31 16:58, Kyotaro Horiguchi wrote:\n> So, I started a thread for this topic diverged from the following\n> thread.\n> \n> https://www.postgresql.org/message-id/4698027d-5c0d-098f-9a8e-8cf09e36a555@nttcom.co.jp_1\n> \n>> So, what should we do for the user? I think we should put some notes\n>> in postgresql.conf or in the documentation. For example, something\n>> like this:\n> \n> I'm not sure about the exact configuration you have in mind, but that\n> would happen on the cascaded standby in the case where the upstream\n> promotes. In this case, the history file for the new timeline is\n> archived twice. walreceiver triggers archiving of the new history\n> file at the time of the promotion, then startup does the same when it\n> restores the file from archive. 
Is it what you complained about?\n\n\nThank you for creating a new thread and explaining this.\nWe are not using cascade replication in our environment, but I think\nthe situation is similar. As an overview, when I do a promote,\nthe archive_command fails due to the history file.\n\nI've created a reproduction script that includes building replication,\nand I'll share it with you. (I used Robert's test.sh as a reference\nfor creating the reproduction script. Thanks)\n\nThe scenario (sr_test_historyfile.sh) is as follows.\n\n#1 Start pgprimary as a main\n#2 Create standby\n#3 Start pgstandby as a standby\n#4 Execute archive command\n#5 Shutdown pgprimary\n#6 Start pgprimary as a standby\n#7 Promote pgprimary\n#8 Execute archive_command again, but failed since duplicate history\n file exists (see pgstandby.log)\n\nNote that this may not be appropriate if you consider it as a recovery\nprocedure for replication configuration. However, I'm sharing it as it is\nbecause this seems to be the procedure used in the customer's environment (PG-REX).\n\n \n> The same workaround using the alternative archive script works for the\n> case.\n> \n> We could check pg_wal before fetching archive, however, archiving is\n> not controlled so strictly that duplicate archiving never happens and\n> I think we choose possible duplicate archiving than having holes in\n> archive. (so we suggest the \"test ! -f\" script)\n> \n>> ====\n>> Note: If you use archive_mode=always, the archive_command on the\n>> standby side should not be used \"test ! -f\".\n>> ====\n> \n> It could be one workaround. However, I would suggest not to overwrite\n> existing files (with a file with different content) to protect archive\n> from corruption.\n> \n> We might need to write that in the documentation...\n\nI think you're right, replacing it with an alternative archive script\nthat includes the cmp command will resolve the error. 
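The failure mode described here can be observed without a running PostgreSQL cluster; the sketch below mimics the documented 'test ! -f %f && cp %p %f' style of archive_command against scratch files (all names invented) and shows the second attempt failing even though the contents are identical:

```shell
#!/bin/sh
# Mimic the postgresql.conf example archive_command:
#   test ! -f <dst> && cp <src> <dst>
# Demo only: scratch files stand in for a history file and its archive.
archive_once() {
  test ! -f "$2" && cp "$1" "$2"
}

tmp=$(mktemp -d)
printf '1\t0/3000000\tno recovery target\n' > "$tmp/00000002.history"

if archive_once "$tmp/00000002.history" "$tmp/arc.history"; then first=ok; else first=fail; fi
# Second attempt: the file is already archived with identical content,
# yet the command fails, which is what keeps the archiver stuck.
if archive_once "$tmp/00000002.history" "$tmp/arc.history"; then second=ok; else second=fail; fi

echo "$first $second"
rm -rf "$tmp"
```

This prints `ok fail`: once the history file exists in the archive, every further attempt fails, matching the repeated archive_command failures reported in pgstandby.log.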
The reason is that\nI checked with the diff command that the history files are identical.\n\n=====\n$ diff -s pgprimary/arc/00000002.history pgstandby/arc/00000002.history\nFiles pgprimary/arc/00000002.history and pgstandby/arc/00000002.history are identical\n=====\n\nRegarding \"test ! -f\",\nI am wondering how many people are using the test command for\narchive_command. If I remember correctly, the guide provided by\nNTT OSS Center that we are using does not recommend using the test command.\n\n\nRegards,\nTatsuro Yamada", "msg_date": "Tue, 01 Jun 2021 13:03:22 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Fri, May 28, 2021 at 2:05 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Mmmm. That looks like meaning that we don't intend to support the\n> Dilip's case, and means that we support the use of\n> archive-command-copies-only-other-than-wal-segments?\n\nActually, I think Dilip's case ought to be supported, but I also think\nthat somebody else might disagree, so it's better for me if the test\ndoesn't need to rely on it.\n\n> Agreed. I often annoyed by a long-lasting TAP script when I wanted to\n> do one of the test items in it. However, I was not sure which is our\n> policy here, consolidating all related tests into one script or having\n> separate scripts containing tests up to a \"certain\" number or a set of\n> tests that would take a certain time, or limiting by number the of\n> lines. I thought that we are on the first way as I have told several\n> times to put new tests into an existing script.\n\nDifferent people might have different opinions about this, but my\nopinion is that when it's possible to combine the test cases in a way\nthat feels natural, it's good to do. 
For example if I have two tests\nthat require the same setup and teardown but do different things in\nthe middle, and if those things seem related, then it's great to set\nup once, try both things, and tear down once. However I don't support\ncombining test cases where it's just concatenating them one after\nanother, because that sort of thing seems to have no benefit. Fewer\nfiles in the source tree is not a goal of itself.\n\n> No. Thanks for the words, Robert. I might be a bit too naive, but I\n> had an anxious feeling that I might have been totally pointless or my\n> words might have been too cryptic/broken (my fingers are quite fat),\n> or I might have done something wrong or anything other. Anyway I\n> thought I might have done something wrong here.\n\nNo, I don't think so. I think the difficulty is more that the three of\nus who are mostly involved in this conversation all have different\nnative languages, and we are trying to discuss an issue which is very\nsubtle. Sometimes I am having difficulty understanding precisely what\neither you or Dilip are intending to say, and it would not surprise me\nto learn that there are difficulties in the other direction also. If\nwe seem to be covering the same topics multiple times or if any\nimportant points seem to be getting ignored, that's probably the\nreason.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Jun 2021 16:45:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Tue, Jun 01, 2021 at 04:45:52PM -0400, Robert Haas wrote:\n> On Fri, May 28, 2021 at 2:05 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > Agreed. I often annoyed by a long-lasting TAP script when I wanted to\n> > do one of the test items in it. 
However, I was not sure which is our\n> > policy here, consolidating all related tests into one script or having\n> > separate scripts containing tests up to a \"certain\" number or a set of\n> > tests that would take a certain time, or limiting by number the of\n> > lines. I thought that we are on the first way as I have told several\n> > times to put new tests into an existing script.\n> \n> Different people might have different opinions about this, but my\n> opinion is that when it's possible to combine the test cases in a way\n> that feels natural, it's good to do. For example if I have two tests\n> that require the same setup and teardown but do different things in\n> the middle, and if those things seem related, then it's great to set\n> up once, try both things, and tear down once. However I don't support\n> combining test cases where it's just concatenating them one after\n> another, because that sort of thing seems to have no benefit. Fewer\n> files in the source tree is not a goal of itself.\n\nI agree, particularly for the recovery and subscription TAP suites. When one\nof those tests fails on the buildfarm, it's often not obvious to me which log\nmessages are relevant to the failure. Smaller test files simplify the\ninvestigation somewhat.\n\n\n", "msg_date": "Wed, 2 Jun 2021 06:01:31 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Tue, 1 Jun 2021 16:45:52 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Fri, May 28, 2021 at 2:05 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Mmmm. 
That looks like meaning that we don't intend to support the\n> > Dilip's case, and means that we support the use of\n> > archive-command-copies-only-other-than-wal-segments?\n> \n> Actually, I think Dilip's case ought to be supported, but I also think\n> that somebody else might disagree, so it's better for me if the test\n> doesn't need to rely on it.\n\nUnderstood.\n\n> > Agreed. I often annoyed by a long-lasting TAP script when I wanted to\n> > do one of the test items in it. However, I was not sure which is our\n> > policy here, consolidating all related tests into one script or having\n> > separate scripts containing tests up to a \"certain\" number or a set of\n> > tests that would take a certain time, or limiting by number the of\n> > lines. I thought that we are on the first way as I have told several\n> > times to put new tests into an existing script.\n> \n> Different people might have different opinions about this, but my\n> opinion is that when it's possible to combine the test cases in a way\n> that feels natural, it's good to do. For example if I have two tests\n> that require the same setup and teardown but do different things in\n> the middle, and if those things seem related, then it's great to set\n> up once, try both things, and tear down once. However I don't support\n> combining test cases where it's just concatenating them one after\n> another, because that sort of thing seems to have no benefit. Fewer\n> files in the source tree is not a goal of itself.\n\nSounds like a reasonable criteria.\n\n> > No. Thanks for the words, Robert. I might be a bit too naive, but I\n> > had an anxious feeling that I might have been totally pointless or my\n> > words might have been too cryptic/broken (my fingers are quite fat),\n> > or I might have done something wrong or anything other. Anyway I\n> > thought I might have done something wrong here.\n> \n> No, I don't think so. 
I think the difficulty is more that the three of\n> us who are mostly involved in this conversation all have different\n> native languages, and we are trying to discuss an issue which is very\n> subtle. Sometimes I am having difficulty understanding precisely what\n> either you or Dilip are intending to say, and it would not surprise me\n> to learn that there are difficulties in the other direction also. If\n> we seem to be covering the same topics multiple times or if any\n> important points seem to be getting ignored, that's probably the\n> reason.\n\nThat makes me convinced. Thanks for the thought and sorry for\nbothering you with the complaint.\n\n\nAnyway, now I agree with the overall direction here.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 03 Jun 2021 13:54:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Tue, 01 Jun 2021 13:03:22 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in \n> Hi Horiguchi-san,\n> \n> On 2021/05/31 16:58, Kyotaro Horiguchi wrote:\n> > So, I started a thread for this topic diverged from the following\n> > thread.\n> > https://www.postgresql.org/message-id/4698027d-5c0d-098f-9a8e-8cf09e36a555@nttcom.co.jp_1\n> > \n> >> So, what should we do for the user? I think we should put some notes\n> >> in postgresql.conf or in the documentation. For example, something\n> >> like this:\n> > I'm not sure about the exact configuration you have in mind, but that\n> > would happen on the cascaded standby in the case where the upstream\n> > promotes. In this case, the history file for the new timeline is\n> > archived twice. walreceiver triggers archiving of the new history\n> > file at the time of the promotion, then startup does the same when it\n> > restores the file from archive. 
Is it what you complained about?\n> \n> \n> Thank you for creating a new thread and explaining this.\n> We are not using cascade replication in our environment, but I think\n> the situation is similar. As an overview, when I do a promote,\n> the archive_command fails due to the history file.\n\nAh, I remembered that PG-REX starts a primary as a standby then\npromotes it.\n\n> I've created a reproduction script that includes building replication,\n> and I'll share it with you. (I used Robert's test.sh as a reference\n> for creating the reproduction script. Thanks)\n\nOk, I clearly understood what you meant. (However, it is not a legitimate\nstate where a standby is running without the primary running.)\nAnyway the \"test ! -f\" can be problematic in that case.\n\n> Note that this may not be appropriate if you consider it as a recovery\n> procedure for replication configuration. However, I'm sharing it as it\n> is\n> because this seems to be the procedure used in the customer's\n> environment (PG-REX).\n\nUnderstood.\n\n> Regarding \"test ! -f\",\n> I am wondering how many people are using the test command for\n> archive_command. If I remember correctly, the guide provided by\n> NTT OSS Center that we are using does not recommend using the test\n> command.\n\nI think, as the PG-REX documentation says, the simple cp works well as\nfar as the assumption of PG-REX - no double failure happens, and\nfollowing the instruction - holds.\n\n\nOn the other hand, I found that the behavior happens more generally.\n\nIf a standby with archive_mode=always crashes, it starts recovery from\nthe last checkpoint. 
If the checkpoint were in an archived segment, the\nrestarted standby will fetch the already-archived segment from archive\nand then fail to archive it. (The attached).\n\nSo, your fear stated upthread is applicable to wider situations. The\nfollowing suggestion is rather harmful for the archive_mode=always\nsetting.\n\nhttps://www.postgresql.org/docs/14/continuous-archiving.html\n> The archive command should generally be designed to refuse to\n> overwrite any pre-existing archive file. This is an important safety\n> feature to preserve the integrity of your archive in case of\n> administrator error (such as sending the output of two different\n> servers to the same archive directory).\n\nI'm not sure how we should treat this. Since the archive must store\nfiles actually applied to the server data, just being already archived\ncannot be the reason for omitting archiving. We need to make sure the\nnew file is byte-identical to the already-archived version. We could\ncompare just the *restored* file to the same file in pg_wal, but it might\nbe too much of a penalty for the benefit. (Attached second file.)\n\nOtherwise the documentation would need something like the following if\nwe assume the current behavior.\n\n> The archive command should generally be designed to refuse to\n> overwrite any pre-existing archive file. This is an important safety\n> feature to preserve the integrity of your archive in case of\n> administrator error (such as sending the output of two different\n> servers to the same archive directory).\n+ For standby with the setting archive_mode=always, there's a case where\n+ the same file is archived more than once. 
For safety, it is\n+ recommended that when the destination file exists, the archive_command\n+ returns zero if it is byte-identical to the source file.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n# Copyright (c) 2021, PostgreSQL Global Development Group\n\n#\n# Tests related to WAL archiving and recovery.\n#\nuse strict;\nuse warnings;\nuse PostgresNode;\nuse TestLib;\nuse Test::More tests => 1;\nuse Config;\n\nmy $backup_name='mybackup';\n\nmy $primary = get_new_node('primary');\n$primary->init(\n\thas_archiving => 1,\n\tallows_streaming => 1);\n$primary->append_conf('postgresql.conf', qq[\nwal_keep_size=128MB\narchive_mode=always\nlog_checkpoints=yes\n\n]);\nmy $primary_archive = $primary->archive_dir;\n$primary->start;\n\n$primary->backup($backup_name);\nmy $standby = get_new_node('standby');\nmy $standby_archive = $standby->archive_dir;\n$standby->init_from_backup($primary, $backup_name, has_streaming=>1);\n$standby->append_conf('postgresql.conf', qq[\nrestore_command='cp $primary_archive/%f %p'\narchive_command='test ! 
-f $standby_archive/%f && cp %p $standby_archive/%f'\n]);\n$standby->start;\n\n$primary->psql('postgres', 'CHECKPOINT;SELECT pg_switch_wal();CREATE TABLE t(); pg_switch_wal();');\n$standby->psql('postgres', 'CHECKPOINT');\n$standby->stop('immediate');\n$standby->start;\n\n$primary->psql('postgres', 'CHECKPOINT;SELECT pg_switch_wal();CHECKPOINT');\n$standby->psql('postgres', 'CHECKPOINT');\n\nmy $result;\nwhile (1) {\n\n\t$result = \n\t $standby->safe_psql('postgres',\n\t\t\t\t\t\t \"SELECT last_archived_wal, last_failed_wal FROM pg_stat_archiver\");\n\tsleep(0.1);\n\tlast if ($result ne \"|\");\n}\n\nok($result =~ /^[^|]+\\|$/, 'archive check 1');\n\ndiff --git a/src/backend/access/transam/xlogarchive.c b/src/backend/access/transam/xlogarchive.c\nindex 26b023e754..037da0aa3d 100644\n--- a/src/backend/access/transam/xlogarchive.c\n+++ b/src/backend/access/transam/xlogarchive.c\n@@ -382,6 +382,7 @@ KeepFileRestoredFromArchive(const char *path, const char *xlogfname)\n {\n \tchar\t\txlogfpath[MAXPGPATH];\n \tbool\t\treload = false;\n+\tbool\t\tskip_archive = false;\n \tstruct stat statbuf;\n \n \tsnprintf(xlogfpath, MAXPGPATH, XLOGDIR \"/%s\", xlogfname);\n@@ -416,6 +417,56 @@ KeepFileRestoredFromArchive(const char *path, const char *xlogfname)\n \t\t/* same-size buffers, so this never truncates */\n \t\tstrlcpy(oldpath, xlogfpath, MAXPGPATH);\n #endif\n+\t\t/*\n+\t\t * On a standby with archive_mode=always, there's the case where the\n+\t\t * same file is archived more than once. 
If the archive_command rejects\n+\t\t * overwriting, WAL-archiving won't go further than the file forever.\n+\t\t * Avoid duplicate archiving attempts when the file is known to have\n+\t\t * been archived and the content doesn't change.\n+\t\t */\n+\t\tif (XLogArchiveMode == ARCHIVE_MODE_ALWAYS &&\n+\t\t\tXLogArchiveCheckDone(xlogfname))\n+\t\t{\n+\t\t\tunsigned char srcbuf[XLOG_BLCKSZ];\n+\t\t\tunsigned char dstbuf[XLOG_BLCKSZ];\n+\t\t\tint fd1 = BasicOpenFile(path, O_RDONLY | PG_BINARY);\n+\t\t\tint fd2 = BasicOpenFile(oldpath, O_RDONLY | PG_BINARY);\n+\t\t\tuint32 i;\n+\t\t\tuint32 off = 0;\n+\n+\t\t\t/*\n+\t\t\t * Compare the two files' contents. We don't bother completing if\n+\t\t\t * something's wrong meanwhile.\n+\t\t\t */\n+\t\t\tfor (i = 0 ; i < wal_segment_size / XLOG_BLCKSZ ; i++)\n+\t\t\t{\n+\t\t\t\tif (pg_pread(fd1, srcbuf, XLOG_BLCKSZ, (off_t) off)\n+\t\t\t\t\t!= XLOG_BLCKSZ)\n+\t\t\t\t\tbreak;\n+\t\t\t\t\n+\t\t\t\tif (pg_pread(fd2, dstbuf, XLOG_BLCKSZ, (off_t) off)\n+\t\t\t\t\t!= XLOG_BLCKSZ)\n+\t\t\t\t\tbreak;\n+\n+\t\t\t\tif (memcmp(srcbuf, dstbuf, XLOG_BLCKSZ) != 0)\n+\t\t\t\t\tbreak;\n+\n+\t\t\t\toff += XLOG_BLCKSZ;\n+\t\t\t}\n+\n+\t\t\tclose(fd1);\n+\t\t\tclose(fd2);\n+\t\t\t\n+\t\t\tif (i == wal_segment_size / XLOG_BLCKSZ)\n+\t\t\t{\n+\t\t\t\tskip_archive = true;\n+\n+\t\t\t\tereport(LOG,\n+\t\t\t\t\t\t(errmsg (\"log file \\\"%s\\\" have been already archived, skip archiving\",\n+\t\t\t\t\t\t\t\t xlogfname)));\n+\t\t\t}\n+\t\t}\n+\n \t\tif (unlink(oldpath) != 0)\n \t\t\tereport(FATAL,\n \t\t\t\t\t(errcode_for_file_access(),\n@@ -430,7 +481,7 @@ KeepFileRestoredFromArchive(const char *path, const char *xlogfname)\n \t * Create .done file forcibly to prevent the restored segment from being\n \t * archived again later.\n \t */\n-\tif (XLogArchiveMode != ARCHIVE_MODE_ALWAYS)\n+\tif (XLogArchiveMode != ARCHIVE_MODE_ALWAYS || skip_archive)\n \t\tXLogArchiveForceDone(xlogfname);\n \telse\n \t\tXLogArchiveNotify(xlogfname);", "msg_date": "Thu, 03 Jun 
2021 21:52:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Thu, May 27, 2021 at 2:26 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Changed as suggested.\n\nI don't think the code as written here is going to work on Windows,\nbecause your code doesn't duplicate enable_restoring's call to\nperl2host or its backslash-escaping logic. It would really be better\nif we could use enable_restoring directly. Also, I discovered that the\n'return' in cp_history_files should really say 'exit', because\notherwise it generates a complaint every time it's run. It should also\nhave 'use strict' and 'use warnings' at the top.\n\nHere's a version of your test case patch with the 1-line code fix\nadded, the above issues addressed, and a bunch of cosmetic tweaks.\nUnfortunately, it doesn't pass for me consistently. I'm not sure if\nthat's because I broke something with my changes, or because the test\ncontains an underlying race condition which we need to address.\nAttached also are the log files from a failed run if you want to look\nat them. 
The key lines seem to be:\n\n2021-06-03 16:16:53.984 EDT [47796] LOG: restarted WAL streaming at\n0/3000000 on timeline 2\n2021-06-03 16:16:54.197 EDT [47813] 025_stuck_on_old_timeline.pl LOG:\nstatement: SELECT count(*) FROM tab_int\n2021-06-03 16:16:54.197 EDT [47813] 025_stuck_on_old_timeline.pl\nERROR: relation \"tab_int\" does not exist at character 22\n\nOr from the main log:\n\nWaiting for replication conn cascade's replay_lsn to pass '0/3000000' on standby\ndone\nerror running SQL: 'psql:<stdin>:1: ERROR: relation \"tab_int\" does not exist\nLINE 1: SELECT count(*) FROM tab_int\n ^'\n\nI wonder whether that problem points to an issue with this incantation:\n\n$node_standby->wait_for_catchup($node_cascade, 'replay',\n $node_standby->lsn('replay'));\n\nBut I'm not sure, and I'm out of time to investigate for today.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 3 Jun 2021 16:33:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Thu, 03 Jun 2021 21:52:08 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> \n> https://www.postgresql.org/docs/14/continuous-archiving.html\n> > The archive command should generally be designed to refuse to\n> > overwrite any pre-existing archive file. This is an important safety\n> > feature to preserve the integrity of your archive in case of\n> > administrator error (such as sending the output of two different\n> > servers to the same archive directory).\n> \n> I'm not sure how we should treat this.. Since archive must store\n> files actually applied to the server data, just being already archived\n> cannot be the reason for omitting archiving. We need to make sure the\n> new file is byte-identical to the already-archived version. We could\n> compare just *restored* file to the same file in pg_wal but it might\n> be too much of penalty for the benefit. 
(Attached second file.)\n\n(To recap: In a replication set using archive, startup tries to\nrestore WAL files from archive before checking pg_wal directory for\nthe desired file. The behavior itself is intentionally designed and\nreasonable. However, the restore code notifies of a restored file\nregardless of whether it has been already archived or not. If\narchive_command is written so as to return error for overwriting as we\nsuggest in the documentation, that behavior causes archive failure.)\n\nAfter playing with this, I see the problem just by restarting a\nstandby even in a simple archive-replication set after making\nnot-special prerequisites. So I think this is worth fixing.\n\nWith this patch, KeepFileRestoredFromArchive compares the contents of\njust-restored file and the existing file for the same segment only\nwhen:\n\n - archive_mode = always\n and - the file to restore already exists in pgwal\n and - it has a .done and/or .ready status file.\n\nwhich doesn't happen usually. Then the function skips archive\nnotification if the contents are identical. The included TAP test is\nworking both on Linux and Windows.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 04 Jun 2021 16:21:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Fri, Jun 4, 2021 at 2:03 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 2:26 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Changed as suggested.\n>\n> I don't think the code as written here is going to work on Windows,\n> because your code doesn't duplicate enable_restoring's call to\n> perl2host or its backslash-escaping logic. It would really be better\n> if we could use enable_restoring directly. 
Also, I discovered that the\n> 'return' in cp_history_files should really say 'exit', because\n> otherwise it generates a complaint every time it's run. It should also\n> have 'use strict' and 'use warnings' at the top.\n\nOk\n\n> Here's a version of your test case patch with the 1-line code fix\n> added, the above issues addressed, and a bunch of cosmetic tweaks.\n> Unfortunately, it doesn't pass for me consistently. I'm not sure if\n> that's because I broke something with my changes, or because the test\n> contains an underlying race condition which we need to address.\n> Attached also are the log files from a failed run if you want to look\n> at them. The key lines seem to be:\n\nI could not reproduce this but I think I got the issue, I think I used\nthe wrong target LSN in wait_for_catchup, instead of checking the last\n\"insert LSN\" of the standby I was waiting for last \"replay LSN\" of\nstandby which was wrong. Changed as below in the attached patch.\n\ndiff --git a/src/test/recovery/t/025_stuck_on_old_timeline.pl\nb/src/test/recovery/t/025_stuck_on_old_timeline.pl\nindex 09eb3eb..ee7d78d 100644\n--- a/src/test/recovery/t/025_stuck_on_old_timeline.pl\n+++ b/src/test/recovery/t/025_stuck_on_old_timeline.pl\n@@ -78,7 +78,7 @@ $node_standby->safe_psql('postgres', \"CREATE TABLE\ntab_int AS SELECT 1 AS a\");\n\n # Wait for the replication to catch up\n $node_standby->wait_for_catchup($node_cascade, 'replay',\n- $node_standby->lsn('replay'));\n+ $node_standby->lsn('insert'));\n\n # Check that cascading standby has the new content\n my $result =\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 4 Jun 2021 13:21:08 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "At Fri, 4 Jun 2021 13:21:08 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Fri, Jun 4, 2021 at 2:03 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, May 27, 2021 at 2:26 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > Changed as suggested.\n> >\n> > I don't think the code as written here is going to work on Windows,\n> > because your code doesn't duplicate enable_restoring's call to\n> > perl2host or its backslash-escaping logic. It would really be better\n> > if we could use enable_restoring directly. Also, I discovered that the\n> > 'return' in cp_history_files should really say 'exit', because\n> > otherwise it generates a complaint every time it's run. It should also\n> > have 'use strict' and 'use warnings' at the top.\n> \n> Ok\n> \n> > Here's a version of your test case patch with the 1-line code fix\n> > added, the above issues addressed, and a bunch of cosmetic tweaks.\n> > Unfortunately, it doesn't pass for me consistently. I'm not sure if\n> > that's because I broke something with my changes, or because the test\n> > contains an underlying race condition which we need to address.\n> > Attached also are the log files from a failed run if you want to look\n> > at them. The key lines seem to be:\n> \n> I could not reproduce this but I think I got the issue, I think I used\n> the wrong target LSN in wait_for_catchup, instead of checking the last\n> \"insert LSN\" of the standby I was waiting for last \"replay LSN\" of\n> standby which was wrong. Changed as below in the attached patch.\n\nI think that's right. And the test script detects the issue for me\nboth on Linux but doesn't work for Windows.\n\n'\"C:/../Documents/work/postgresql/src/test/recovery/t/cp_history_files\"' is not recognized as an internal command or external command ..\n\nBecause Windows' cmd.exe doesn't have the shebang feature. 
On Windows,\nmaybe archive_command should be like\n\n'\".../perl\" \"$FindBin../cp_history_files\" \"%p\"...\n\nIf I did this I got another error.\n\n\"couldn't copy pg_wal\\00000002.history to C:/../Documents/work/postgresql/src\test^Mecovery/tmp_check/t_000_a_primary_data/archives/00000002.history: at C:/../Documents/work/postgresql/src/test/recovery/t/cp_history_files line 10.^M\"\n\n(\"^M\" are the replacement for carriage return)\nSo.. I'm not sure what is happening but the error messages, or..\nAnyway I don't have time to investigate it.\n\n\n+ # clean up\n+ $node_primary->teardown_node;\n+ $node_standby->teardown_node;\n+ $node_cascade->teardown_node;\n\nI don't think explicit teardown is useless as the final cleanup.\n\nBy the way the attached patch is named as \"Fix-corner-case...\" but\ndoesn't contain the fix. Is it intentional?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 04 Jun 2021 18:24:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Fri, Jun 4, 2021 at 3:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I could not reproduce this but I think I got the issue, I think I used\n> the wrong target LSN in wait_for_catchup, instead of checking the last\n> \"insert LSN\" of the standby I was waiting for last \"replay LSN\" of\n> standby which was wrong. Changed as below in the attached patch.\n\nYeah, that fixes it for me. Thanks.\n\nWith that change, this test reliably passes for me with the fix, and\nreliably fails for me without the fix. Woohoo!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Jun 2021 10:37:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "On Fri, Jun 4, 2021 at 5:25 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I think that's right. And the test script detects the issue for me\n> both on Linux but doesn't work for Windows.\n>\n> '\"C:/../Documents/work/postgresql/src/test/recovery/t/cp_history_files\"' is not recognized as an internal command or external command ..\n\nHmm, that's a problem. Can you try the attached version?\n\n> + # clean up\n> + $node_primary->teardown_node;\n> + $node_standby->teardown_node;\n> + $node_cascade->teardown_node;\n>\n> I don't think explicit teardown is useless as the final cleanup.\n\nI don't know what you mean by this. If it's not useless, good, because\nwe're doing it. Or do you mean that you think it is useless, and we\nshould remove it?\n\n> By the way the attached patch is named as \"Fix-corner-case...\" but\n> doesn't contain the fix. Is it intentional?\n\nNo, that was a goof.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 4 Jun 2021 10:56:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Fri, 4 Jun 2021 10:56:12 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Fri, Jun 4, 2021 at 5:25 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > I think that's right. And the test script detects the issue for me\n> > both on Linux but doesn't work for Windows.\n> >\n> > '\"C:/../Documents/work/postgresql/src/test/recovery/t/cp_history_files\"' is not recognized as an internal command or external command ..\n> \n> Hmm, that's a problem. Can you try the attached version?\n\nUnfortunately no. The backslashes in the binary path need to be\nescaped. 
(taken from PostgresNode.pm:1008)\n\n> (my $perlbin = $^X) =~ s{\\\\}{\\\\\\\\}g if ($TestLib::windows_os);\n> $node_primary->append_conf(\n> \t'postgresql.conf', qq(\n> archive_command = '$perlbin \"$FindBin::RealBin/cp_history_files\" \"%p\" \"$archivedir_primary/%f\"'\n> ));\n\nThis works for me.\n\n> > + # clean up\n> > + $node_primary->teardown_node;\n> > + $node_standby->teardown_node;\n> > + $node_cascade->teardown_node;\n> >\n> > I don't think explicit teardown is useless as the final cleanup.\n> \n> I don't know what you mean by this. If it's not useless, good, because\n> we're doing it. Or do you mean that you think it is useless, and we\n> should remove it?\n\nUgh! Sorry. I meant \"The explicit teardowns are useless\". That's not\nharmful but it is done by PostgresNode.pm automatically(implicitly)\nand we don't do that in the existing scripts.\n\n> > By the way the attached patch is named as \"Fix-corner-case...\" but\n> > doesn't contain the fix. Is it intentional?\n> \n> No, that was a goof.\n\nAs I said upthread the relationship between receiveTLI and\nrecoveryTargetTLI is not confirmed yet at the point.\nfindNewestTimeLine() simply searches for the history file with the\nlargest timeline id so the returned there's a case where the timeline\nid that the function returns is not a future of the latest checkpoint\nTLI. I think that the fact that rescanLatestTimeLine() checks the\nrelationship is telling us that we need to do the same in the path as\nwell.\n\nIn my previous proposal, it is done just after the line the patch\ntouches but it can be in the if (fetching_ckpt) branch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 07 Jun 2021 13:57:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "Sorry, some extra words are left alone.\n\nAt Mon, 07 Jun 2021 13:57:35 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> As I said upthread the relationship between receiveTLI and\n> recoveryTargetTLI is not confirmed yet at the point.\n- findNewestTimeLine() simply searches for the history file with the\n- largest timeline id so the returned there's a case where the timeline\n+ findNewestTimeLine() simply searches for the history file with the\n+ largest timeline id so there's a case where the timeline\n> id that the function returns is not a future of the latest checkpoint\n> TLI. I think that the fact that rescanLatestTimeLine() checks the\n> relationship is telling us that we need to do the same in the path as\n> well.\n> \n> In my previous proposal, it is done just after the line the patch\n> touches but it can be in the if (fetching_ckpt) branch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 07 Jun 2021 14:01:45 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Hi Horiguchi-san,\n\n>> Regarding \"test ! -f\",\n>> I am wondering how many people are using the test command for\n>> archive_command. If I remember correctly, the guide provided by\n>> NTT OSS Center that we are using does not recommend using the test\n>> command.\n> I think, as the PG-REX documentation says, the simple cp works well as\n> far as the assumption of PG-REX - no double failure happens, and\n> following the instruction - holds.\n\n\nI believe that this assumption started to be wrong after\narchive_mode=always was introduced. As far as I can tell, it doesn't\nhappen when it's archive_mode=on.\n\n\n> On the other hand, I found that the behavior happens more generally.\n> \n> If a standby with archive_mode=always crashes, it starts recovery from\n> the last checkpoint. 
If the checkpoint were in an archived segment, the\n> restarted standby will fetch the already-archived segment from archive\n> then fails to archive it. (The attached).\n> \n> So, your fear stated upthread is applicable for wider situations. The\n> following suggestion is rather harmful for the archive_mode=always\n> setting.\n> \n> https://www.postgresql.org/docs/14/continuous-archiving.html\n>> The archive command should generally be designed to refuse to\n>> overwrite any pre-existing archive file. This is an important safety\n>> feature to preserve the integrity of your archive in case of\n>> administrator error (such as sending the output of two different\n>> servers to the same archive directory).\n> \n> I'm not sure how we should treat this.. Since archive must store\n> files actually applied to the server data, just being already archived\n> cannot be the reason for omitting archiving. We need to make sure the\n> new file is byte-identical to the already-archived version. We could\n> compare just *restored* file to the same file in pg_wal but it might\n> be too much of penalty for the benefit. (Attached second file.)\n\n\nThanks for creating the patch!\n\n \n> Otherwise the documentation would need something like the following if\n> we assume the current behavior.\n> \n>> The archive command should generally be designed to refuse to\n>> overwrite any pre-existing archive file. This is an important safety\n>> feature to preserve the integrity of your archive in case of\n>> administrator error (such as sending the output of two different\n>> servers to the same archive directory).\n> + For standby with the setting archive_mode=always, there's a case where\n> + the same file is archived more than once. 
For safety, it is\n> + recommended that when the destination file exists, the archive_command\n> + returns zero if it is byte-identical to the source file.\n\n\nAgreed.\nThat is the same solution as I mentioned earlier.\nIf possible, it would also be better to write it in postgresql.conf (that might\nbe overkill?!).\n\n\nRegards,\nTatsuro Yamada\n\n\n\n\n", "msg_date": "Mon, 07 Jun 2021 15:57:00 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "Hi Horiguchi-san,\n\n\n> (To recap: In a replication set using archive, startup tries to\n> restore WAL files from archive before checking pg_wal directory for\n> the desired file. The behavior itself is intentionally designed and\n> reasonable. However, the restore code notifies of a restored file\n> regardless of whether it has been already archived or not. If\n> archive_command is written so as to return error for overwriting as we\n> suggest in the documentation, that behavior causes archive failure.)\n> \n> After playing with this, I see the problem just by restarting a\n> standby even in a simple archive-replication set after making\n> not-special prerequisites. So I think this is worth fixing.\n> \n> With this patch, KeepFileRestoredFromArchive compares the contents of\n> just-restored file and the existing file for the same segment only\n> when:\n> \n> - archive_mode = always\n> and - the file to restore already exists in pgwal\n> and - it has a .done and/or .ready status file.\n> \n> which doesn't happen usually. Then the function skips archive\n> notification if the contents are identical. The included TAP test is\n> working both on Linux and Windows.\n\n\nThank you for the analysis and the patch.\nI'll try the patch tomorrow.\n\nI just noticed that this thread is still tied to another thread\n(it's not an independent thread). 
To fix that, it may be better to\ncreate a new thread again.\n\n\nRegards,\nTatsuro Yamada\n\n\n\n\n", "msg_date": "Mon, 07 Jun 2021 16:13:08 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "At Mon, 07 Jun 2021 16:13:08 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in \n> I just noticed that this thread is still tied to another thread\n> (it's not an independent thread). To fix that, it may be better to\n> create a new thread again.\n\nMmm. Maybe my mailer automatically inserted In-Reply-To field for the\ncited message. Do we (the two of us) bother re-launching a new\nthread?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 07 Jun 2021 16:31:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "At Mon, 07 Jun 2021 15:57:00 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in \n> Hi Horiguchi-san,\n> \n> >> Regarding \"test ! -f\",\n> >> I am wondering how many people are using the test command for\n> >> archive_command. If I remember correctly, the guide provided by\n> >> NTT OSS Center that we are using does not recommend using the test\n> >> command.\n> > I think, as the PG-REX documentation says, the simple cp works well as\n> > far as the assumption of PG-REX - no double failure happens, and\n> > following the instruction - holds.\n> \n> \n> I believe that this assumption started to be wrong after\n> archive_mode=always was introduced. As far as I can tell, it doesn't\n> happen when it's archive_mode=on.\n\n?? Doesn't *simple* cp (without \"test\") work for you? 
I meant that\nthe operating assumption of PG-REX ensures that overwriting doesn't\ncause a problem.\n\n> > Otherwise the documentation would need something like the following if\n> > we assume the current behavior.\n> > \n> >> The archive command should generally be designed to refuse to\n> >> overwrite any pre-existing archive file. This is an important safety\n> >> feature to preserve the integrity of your archive in case of\n> >> administrator error (such as sending the output of two different\n> >> servers to the same archive directory).\n> > + For standby with the setting archive_mode=always, there's a case\n> > where\n> > + the same file is archived more than once. For safety, it is\n> > + recommended that when the destination file exists, the\n> > archive_command\n> > + returns zero if it is byte-identical to the source file.\n> \n> \n> Agreed.\n> That is the same solution as I mentioned earlier.\n> If possible, it would also be better to write it in postgresql.conf (that\n> might\n> be overkill?!).\n\nMmmm, I didn't notice that. I don't think such a complex caveat fits\nthe configuration file. And if we need such a caveat there, it might\nbe the sign that we need to fix the causal behavior...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 07 Jun 2021 16:38:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On 2021/06/07 16:31, Kyotaro Horiguchi wrote:\n> At Mon, 07 Jun 2021 16:13:08 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in\n>> I just noticed that this thread is still tied to another thread\n>> (it's not an independent thread). To fix that, it may be better to\n>> create a new thread again.\n> \n> Mmm. Maybe my mailer automatically inserted In-Reply-To field for the\n> cited message. 
Do we (the two of us) bother re-launching a new\n> thread?\n\n\nThe reason I suggested it was because I thought it might be\nconfusing if the threads were not independent when registered in\na commitfest. If that is not a problem, then I'm fine with it as is. :-D\n\nRegards,\nTatsuro Yamada\n\n\n\n\n", "msg_date": "Mon, 07 Jun 2021 16:54:49 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "So, this is the new new thread.\n\nThis thread should have been started here:\n\nhttps://www.postgresql.org/message-id/20210531.165825.921389284096975508.horikyota.ntt%40gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 07 Jun 2021 17:31:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Duplicate history file?" }, { "msg_contents": "(Sorry for the noise on the old thread..)\n\nAt Mon, 07 Jun 2021 16:54:49 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in \n> On 2021/06/07 16:31, Kyotaro Horiguchi wrote:\n> > At Mon, 07 Jun 2021 16:13:08 +0900, Tatsuro Yamada\n> > <tatsuro.yamada.tf@nttcom.co.jp> wrote in\n> >> I just noticed that this thread is still tied to another thread\n> >> (it's not an independent thread). To fix that, it may be better to\n> >> create a new thread again.\n> > Mmm. Maybe my mailer automatically inserted In-Reply-To field for the\n> > cited message. Do we (the two of us) bother re-launching a new\n> > thread?\n> \n> \n> The reason I suggested it was because I thought it might be\n> confusing if the threads were not independent when registered in\n> a commitfest. If that is not a problem, then I'm fine with it as\n> is. :-D\n\n(You can freely do that, too:p)\n\nHmm. 
I found that the pgsql-hackers archive treats the new thread as a\npart of the old thread, so CF-app would do the same.\n\nAnyway I re-launched a new standalone thread.\n\nhttps://www.postgresql.org/message-id/20210607.173108.348241508233844279.horikyota.ntt%40gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 07 Jun 2021 17:32:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Mon, Jun 7, 2021 at 12:57 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Unfortunately no. The backslashes in the binary path need to be\n> escaped. (taken from PostgresNode.pm:1008)\n>\n> > (my $perlbin = $^X) =~ s{\\\\}{\\\\\\\\}g if ($TestLib::windows_os);\n> > $node_primary->append_conf(\n> > 'postgresql.conf', qq(\n> > archive_command = '$perlbin \"$FindBin::RealBin/cp_history_files\" \"%p\" \"$archivedir_primary/%f\"'\n> > ));\n>\n> This works for me.\n\nHmm, OK. Do you think we also need to use perl2host in this case?\n\n> Ugh! Sorry. I meant \"The explicit teardowns are useless\". That's not\n> harmful but it is done by PostgresNode.pm automatically(implicitly)\n> and we don't do that in the existing scripts.\n\nOK. I don't think it's a big deal, but we can remove them.\n\n> As I said upthread the relationship between receiveTLI and\n> recoveryTargetTLI is not confirmed yet at the point.\n> findNewestTimeLine() simply searches for the history file with the\n> largest timeline id so the returned there's a case where the timeline\n> id that the function returns is not a future of the latest checkpoint\n> TLI. 
I think that the fact that rescanLatestTimeLine() checks the\n> relationship is telling us that we need to do the same in the path as\n> well.\n>\n> In my previous proposal, it is done just after the line the patch\n> touches but it can be in the if (fetching_ckpt) branch.\n\nI went back and looked at your patch again, now that I understand the\nissue better. I believe it's not necessary to do this here, because\nStartupXLOG() already contains a check for the same thing:\n\n /*\n * If the location of the checkpoint record is not on the expected\n * timeline in the history of the requested timeline, we cannot proceed:\n * the backup is not part of the history of the requested timeline.\n */\n Assert(expectedTLEs); /* was initialized by reading checkpoint\n * record */\n if (tliOfPointInHistory(checkPointLoc, expectedTLEs) !=\n checkPoint.ThisTimeLineID)\n...\n\nThis code is always run after ReadCheckpointRecord() returns. And I\nthink that your only concern here is about the case where the\ncheckpoint record is being fetched, because otherwise expectedTLEs\nmust already be set.\n\nBy the way, I also noticed that your version of the patch contains a\nfew words which are spelled incorrectly: hearafter, and incosistent.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Jun 2021 10:40:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "Greetings,\n\n* Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> So, this is the new new thread.\n\nThis is definitely not the way I would recommend starting up a new\nthread as you didn't include the actual text of the prior discussion for\npeople to be able to read and respond to, instead making them go hunt\nfor the prior discussion on the old thread and negating the point of\nstarting a new thread..\n\nStill, I went and found the other email-\n\n* Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> At Mon, 31 May 2021 11:52:05 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in \n> > Since the above behavior is different from the behavior of the\n> > test command in the following example in postgresql.conf, I think\n> > we should write a note about this example.\n> > \n> > # e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p\n> > # /mnt/server/archivedir/%f'\n> >\n> > Let me describe the problem we faced.\n> > - When archive_mode=always, archive_command is (sometimes) executed\n> > in a situation where the history file already exists on the standby\n> > side.\n> > \n> > - In this case, if \"test ! -f\" is written in the archive_command of\n> > postgresql.conf on the standby side, the command will keep failing.\n> > \n> > Note that this problem does not occur when archive_mode=on.\n> > \n> > So, what should we do for the user? I think we should put some notes\n> > in postgresql.conf or in the documentation. For example, something\n> > like this:\n\nFirst off, we should tell them to not use test or cp in their actual\narchive command because they don't do things like make sure that the WAL\nthat's been archived has actually been fsync'd. Multiple people have\ntried to make improvements in this area but the long and short of it is\nthat trying to provide a simple archive command in the documentation\nthat actually *works* isn't enough- you need a real tool. 
Maybe someone\nwill write one some day that's part of core, but it's not happened yet\nand instead there's external solutions which actually do the correct\nthings.\n\nThe existing documentation should be taken as purely \"this is how the\nvariables which are passed in get expanded\" not as \"this is what you\nshould do\", because it's very much not the latter in any form.\n\nThanks,\n\nStephen", "msg_date": "Mon, 7 Jun 2021 14:20:38 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "Hi,\n\nI tried back-porting my version of this patch to 9.6 to see what would\nhappen there. One problem is that some of the functions have different\nnames before v10. So 9.6 needs this:\n\n- \"SELECT pg_walfile_name(pg_current_wal_lsn());\");\n+ \"SELECT pg_xlogfile_name(pg_current_xlog_location());\");\n\nBut there's also another problem, which is that this doesn't work before v12:\n\n$node_standby->psql('postgres', 'SELECT pg_promote()');\n\nSo I tried changing it to this:\n\n$node_standby->promote;\n\nBut then the test fails, because pg_promote() has logic built into it\nto wait until the promotion actually happens, but ->promote doesn't,\nso SELECT pg_walfile_name(pg_current_wal_lsn()) errors out because the\nsystem is still in recovery. I'm not sure what to do about that. I\nquickly tried adding -w to 'sub promote' in PostgresNode.pm, but that\ndidn't fix it, so I guess we'll have to find some other way to wait\nuntil the promotion is complete.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Jun 2021 15:02:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Mon, 7 Jun 2021 10:40:27 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Mon, Jun 7, 2021 at 12:57 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Unfortunately no. 
The backslashes in the binary path need to be\n> > escaped. (taken from PostgresNode.pm:1008)\n> >\n> > > (my $perlbin = $^X) =~ s{\\\\}{\\\\\\\\}g if ($TestLib::windows_os);\n> > > $node_primary->append_conf(\n> > > 'postgresql.conf', qq(\n> > > archive_command = '$perlbin \"$FindBin::RealBin/cp_history_files\" \"%p\" \"$archivedir_primary/%f\"'\n> > > ));\n> >\n> > This works for me.\n> \n> Hmm, OK. Do you think we also need to use perl2host in this case?\n\nI understand that perl2host converts '/some/where' style path to the\nnative windows path 'X:/any/where' if needed. Since perl's $^X is\nalready in native style so I think we don't need to use it.\n\n> > Ugh! Sorry. I meant \"The explicit teardowns are useless\". That's not\n> > harmful but it is done by PostgresNode.pm automatically(implicitly)\n> > and we don't do that in the existing scripts.\n> \n> OK. I don't think it's a big deal, but we can remove them.\n\nThanks.\n\n> I went back and looked at your patch again, now that I understand the\n> issue better. I believe it's not necessary to do this here, because\n> StartupXLOG() already contains a check for the same thing:\n> \n> /*\n> * If the location of the checkpoint record is not on the expected\n> * timeline in the history of the requested timeline, we cannot proceed:\n> * the backup is not part of the history of the requested timeline.\n> */\n> Assert(expectedTLEs); /* was initialized by reading checkpoint\n> * record */\n> if (tliOfPointInHistory(checkPointLoc, expectedTLEs) !=\n> checkPoint.ThisTimeLineID)\n> ...\n> \n> This code is always run after ReadCheckpointRecord() returns. And I\n> think that your only concern here is about the case where the\n> checkpoint record is being fetched, because otherwise expectedTLEs\n> must already be set.\n\nSure. Thanks for confirming that, and agreed.\n\n> By the way, I also noticed that your version of the patch contains a\n> few words which are spelled incorrectly: hearafter, and incosistent.\n\nMmm. 
Sorry for them..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 08 Jun 2021 10:29:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On 2021/06/07 17:32, Kyotaro Horiguchi wrote:\n>>>> I just noticed that this thread is still tied to another thread\n>>>> (it's not an independent thread). To fix that, it may be better to\n>>>> create a new thread again.\n>>> Mmm. Maybe my mailer automatically inserted In-Reply-To field for the\n>>> cited messsage. Do we (the two of us) bother re-launching a new\n>>> thread?\n>>\n>>\n>> The reason I suggested it was because I thought it might be\n>> confusing if the threads were not independent when registered in\n>> a commitfest. If that is not a problem, then I'm fine with it as\n>> is. :-D\n> \n> (You can freely do that, too:p)\n\nI should have told you that I would be happy to create a new thread.\n\nWhy I didn't create new thread is that because I didn't want people to\nthink I had hijacked the thread. :)\n\n\n> Hmm. I found that the pgsql-hackers archive treats the new thread as a\n> part of the old thread, so CF-app would do the same.\n> \n> Anyway I re-launched a new standalone thread.\n> \n> https://www.postgresql.org/message-id/20210607.173108.348241508233844279.horikyota.ntt%40gmail.com\n\nThank you!\n\n\nRegards,\nTatsuro Yamada\n\n\n\n\n", "msg_date": "Tue, 08 Jun 2021 11:33:16 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "(Mmm. 
thunderbird or gmail connects this thread to the previous one..)\n\nAt Mon, 7 Jun 2021 14:20:38 -0400, Stephen Frost <sfrost@snowman.net> wrote in \n> Greetings,\n> \n> * Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> > So, this is the new new thread.\n> \n> This is definitely not the way I would recommend starting up a new\n> thread as you didn't include the actual text of the prior discussion for\n> people to be able to read and respond to, instead making them go hunt\n> for the prior discussion on the old thread and negating the point of\n> starting a new thread..\n\nSorry for that. I'll do that next time.\n\n> Still, I went and found the other email-\n\nThanks!\n\n> * Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> > At Mon, 31 May 2021 11:52:05 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in \n> > > Since the above behavior is different from the behavior of the\n> > > test command in the following example in postgresql.conf, I think\n> > > we should write a note about this example.\n> > > \n> > > # e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p\n> > > # /mnt/server/archivedir/%f'\n> > >\n> > > Let me describe the problem we faced.\n> > > - When archive_mode=always, archive_command is (sometimes) executed\n> > > in a situation where the history file already exists on the standby\n> > > side.\n> > > \n> > > - In this case, if \"test ! -f\" is written in the archive_command of\n> > > postgresql.conf on the standby side, the command will keep failing.\n> > > \n> > > Note that this problem does not occur when archive_mode=on.\n> > > \n> > > So, what should we do for the user? I think we should put some notes\n> > > in postgresql.conf or in the documentation. For example, something\n> > > like this:\n> \n> First off, we should tell them to not use test or cp in their actual\n> archive command because they don't do things like make sure that the WAL\n> that's been archived has actually been fsync'd. 
Multiple people have\n> tried to make improvements in this area but the long and short of it is\n> that trying to provide a simple archive command in the documentation\n> that actually *works* isn't enough- you need a real tool. Maybe someone\n> will write one some day that's part of core, but it's not happened yet\n> and instead there's external solutions which actually do the correct\n> things.\n\nIdeally I agree that it is definitely right. But the documentation\ndoesn't say a bit of \"don't use the simple copy command in any case\n(or at least the cases where more than a certain level of durability\nand integrity guarantee is required).\".\n\nActually many people are satisfied with just \"cp/copy\" and I think\nthey know that the command doesn't guarantee on the integrity of\narchived files on , say, some disastrous situation like a sudden power\ncut.\n\nHowever, the use of \"test ! -f...\" is in a bit different kind of\nsuggestion.\n\nhttps://www.postgresql.org/docs/13/continuous-archiving.html\n| The archive command should generally be designed to refuse to\n| overwrite any pre-existing archive file. This is an important safety\n| feature to preserve the integrity of your archive in case of\n| administrator error (such as sending the output of two different\n| servers to the same archive directory)\n\nThis implies that no WAL segment are archived more than once at least\nunder any valid operation. 
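In isolation the failure mode under discussion is easy to demonstrate; the following plain-shell sketch (all file names hypothetical) shows the documented `test ! -f ... && cp ...` pattern succeeding once and then failing forever on a byte-identical re-archival attempt, which is exactly what a standby with archive_mode = always can provoke:

```shell
# Sketch only (not a production archive_command): the documented
# "test ! -f dst && cp src dst" pattern returns nonzero as soon as the
# destination exists, even when the file being offered again is
# byte-identical, so the archiver retries the same file forever.
archive_cmd() { test ! -f "$2" && cp "$1" "$2"; }

dir=$(mktemp -d)
printf 'segment payload' > "$dir/incoming"

archive_cmd "$dir/incoming" "$dir/archived"; first=$?
archive_cmd "$dir/incoming" "$dir/archived"; second=$?
echo "first=$first second=$second"   # first=0 second=1

rm -rf "$dir"
```

Because the archiver retries a failed file indefinitely, that persistent nonzero status is what stalls archiving.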
Some people are following this suggestion\nto prevent archive from breaking by some *wrong* operations.\n\n> The existing documentation should be taken as purely \"this is how the\n> variables which are passed in get expanded\" not as \"this is what you\n> should do\", because it's very much not the latter in any form.\n\nIt describes \"how archive_command should be like\" and showing examples\namong the description implies that the example conforms the\nshould-be's.\n\nNevertheless, the issue here is that there's a case where archiving\nstalls when following the suggestion above under a certain condition.\nEven if it is written premising \"set .. archive_mode to on\", I don't\nbelieve that people can surmise that the same archive_command might\nfail when setting archive_mode to always, because the description\nimplies\n\n\nSo I think we need to revise the documentation, or need to *fix* the\nrevealed problem that is breaking the assumption of the documentation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 08 Jun 2021 12:04:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "Yeah, it's hot these days...\n\nAt Tue, 08 Jun 2021 12:04:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> (Mmm. 
thunderbird or gmail connects this thread to the previous one..)\n> \n> At Mon, 7 Jun 2021 14:20:38 -0400, Stephen Frost <sfrost@snowman.net> wrote in \n> > Greetings,\n> > \n> > * Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> > > So, this is the new new thread.\n> > \n> > This is definitely not the way I would recommend starting up a new\n> > thread as you didn't include the actual text of the prior discussion for\n> > people to be able to read and respond to, instead making them go hunt\n> > for the prior discussion on the old thread and negating the point of\n> > starting a new thread..\n> \n> Sorry for that. I'll do that next time.\n> \n> > Still, I went and found the other email-\n> \n> Thanks!\n> \n> > * Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> > > At Mon, 31 May 2021 11:52:05 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in \n> > > > Since the above behavior is different from the behavior of the\n> > > > test command in the following example in postgresql.conf, I think\n> > > > we should write a note about this example.\n> > > > \n> > > > # e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p\n> > > > # /mnt/server/archivedir/%f'\n> > > >\n> > > > Let me describe the problem we faced.\n> > > > - When archive_mode=always, archive_command is (sometimes) executed\n> > > > in a situation where the history file already exists on the standby\n> > > > side.\n> > > > \n> > > > - In this case, if \"test ! -f\" is written in the archive_command of\n> > > > postgresql.conf on the standby side, the command will keep failing.\n> > > > \n> > > > Note that this problem does not occur when archive_mode=on.\n> > > > \n> > > > So, what should we do for the user? I think we should put some notes\n> > > > in postgresql.conf or in the documentation. 
For example, something\n> > > > like this:\n> > \n> > First off, we should tell them to not use test or cp in their actual\n> > archive command because they don't do things like make sure that the WAL\n> > that's been archived has actually been fsync'd. Multiple people have\n> > tried to make improvements in this area but the long and short of it is\n> > that trying to provide a simple archive command in the documentation\n> > that actually *works* isn't enough- you need a real tool. Maybe someone\n> > will write one some day that's part of core, but it's not happened yet\n> > and instead there's external solutions which actually do the correct\n> > things.\n> \n> Ideally I agree that it is definitely right. But the documentation\n> doesn't say a bit of \"don't use the simple copy command in any case\n> (or at least the cases where more than a certain level of durability\n> and integrity guarantee is required).\".\n> \n> Actually many people are satisfied with just \"cp/copy\" and I think\n> they know that the command doesn't guarantee on the integrity of\n> archived files on , say, some disastrous situation like a sudden power\n> cut.\n> \n> However, the use of \"test ! -f...\" is in a bit different kind of\n> suggestion.\n> \n> https://www.postgresql.org/docs/13/continuous-archiving.html\n> | The archive command should generally be designed to refuse to\n> | overwrite any pre-existing archive file. This is an important safety\n> | feature to preserve the integrity of your archive in case of\n> | administrator error (such as sending the output of two different\n> | servers to the same archive directory)\n> \n> This implies that no WAL segment are archived more than once at least\n> under any valid operation. 
Some people are following this suggestion\n> to prevent archive from breaking by some *wrong* operations.\n> \n> > The existing documentation should be taken as purely \"this is how the\n> > variables which are passed in get expanded\" not as \"this is what you\n> > should do\", because it's very much not the latter in any form.\n> \n\n- It describes \"how archive_command should be like\" and showing examples\n+ It describes \"what archive_command should be like\" and showing examples\n\n> among the description implies that the example conforms the\n> should-be's.\n> \n> Nevertheless, the issue here is that there's a case where archiving\n> stalls when following the suggestion above under a certain condition.\n> Even if it is written premising \"set .. archive_mode to on\", I don't\n> believe that people can surmise that the same archive_command might\n- fail when setting archive_mode to always, because the description\n- implies\n+ fail when setting archive_mode to always.\n\n> \n> So I think we need to revise the documentation, or need to *fix* the\n> revealed problem that is breaking the assumption of the documentation.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 08 Jun 2021 13:17:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Tue, Jun 8, 2021 at 12:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I tried back-porting my version of this patch to 9.6 to see what would\n> happen there. One problem is that some of the functions have different\n> names before v10. 
So 9.6 needs this:\n>\n> - \"SELECT pg_walfile_name(pg_current_wal_lsn());\");\n> + \"SELECT pg_xlogfile_name(pg_current_xlog_location());\");\n>\n> But there's also another problem, which is that this doesn't work before v12:\n>\n> $node_standby->psql('postgres', 'SELECT pg_promote()');\n>\n> So I tried changing it to this:\n>\n> $node_standby->promote;\n>\n> But then the test fails, because pg_promote() has logic built into it\n> to wait until the promotion actually happens, but ->promote doesn't,\n> so SELECT pg_walfile_name(pg_current_wal_lsn()) errors out because the\n> system is still in recovery. I'm not sure what to do about that. I\n> quickly tried adding -w to 'sub promote' in PostgresNode.pm, but that\n> didn't fix it, so I guess we'll have to find some other way to wait\n> until the promotion is complete.\n>\n\nMaybe we can use it ?\n\n# Wait until the node exits recovery.\n$standby->poll_query_until('postgres', \"SELECT pg_is_in_recovery() = 'f';\")\nor die \"Timed out while waiting for promotion\";\n\nI will try to generate a version for 9.6 based on this idea and see how it goes\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Jun 2021 11:13:58 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" 
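The poll_query_until idea above is just a bounded retry loop; the same waiting logic can be sketched as a small shell helper (hypothetical — a real caller would probe with something like `psql -Atc "SELECT NOT pg_is_in_recovery()"` and sleep between attempts):

```shell
# Generic retry loop in the spirit of PostgresNode::poll_query_until:
# run a command until it succeeds or the attempt budget runs out.
poll_until() {
    attempts=$1; shift
    while [ "$attempts" -gt 0 ]; do
        if "$@"; then
            return 0
        fi
        attempts=$((attempts - 1))
        # a real caller would 'sleep 1' here between attempts
    done
    return 1
}

# Demo: a command that only starts succeeding on its third invocation,
# standing in for "SELECT pg_is_in_recovery() = 'f'" during promotion.
n=0
flaky() { n=$((n + 1)); [ "$n" -ge 3 ]; }
poll_until 5 flaky && echo "promotion-style wait: succeeded after $n attempts"
```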
}, { "msg_contents": "On Tue, Jun 8, 2021 at 11:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> # Wait until the node exits recovery.\n> $standby->poll_query_until('postgres', \"SELECT pg_is_in_recovery() = 'f';\")\n> or die \"Timed out while waiting for promotion\";\n>\n> I will try to generate a version for 9.6 based on this idea and see how it goes\n\nI have changed for as per 9.6 but I am seeing some crash (both\nwith/without fix), I could not figure out the reason, it did not\ngenerate any core dump, although I changed pg_ctl in PostgresNode.pm\nto use \"-c\" so that it can generate core but it did not generate any\ncore file.\n\nThis is log from cascading node (025_stuck_on_old_timeline_cascade.log)\n-------------\ncp: cannot stat\n‘/home/dilipkumar/work/PG/postgresql/src/test/recovery/tmp_check/data_primary_52dW/archives/000000010000000000000003’:\nNo such file or directory\nWARNING: terminating connection because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back\nthe current transaction and exit, because another server process\nexited abnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\nFATAL: could not receive database system identifier and timeline ID\nfrom the primary server: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n--------------\n\nThe attached logs are when I ran without a fix.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 8 Jun 2021 14:17:06 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" 
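As an aside on the missing core file above: pg_ctl's `-c` only asks for cores, while the invoking shell's resource limit can still forbid them entirely — a quick sanity check worth running first (sketch):

```shell
# If this prints 0, no process started from this shell can dump core,
# regardless of pg_ctl -c; running `ulimit -c unlimited` before starting
# the node is the usual fix.
core_limit=$(ulimit -c)
echo "core file size limit: $core_limit"
```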
}, { "msg_contents": "Hi Horiguchi-san,\n\n> This thread should have been started here:\n> \n> https://www.postgresql.org/message-id/20210531.165825.921389284096975508.horikyota.ntt%40gmail.com\n>>\n>> (To recap: In a replication set using archive, startup tries to\n>> restore WAL files from archive before checking pg_wal directory for\n>> the desired file. The behavior itself is intentionally designed and\n>> reasonable. However, the restore code notifies of a restored file\n>> regardless of whether it has been already archived or not. If\n>> archive_command is written so as to return error for overwriting as we\n>> suggest in the documentation, that behavior causes archive failure.)\n>>\n>> After playing with this, I see the problem just by restarting a\n>> standby even in a simple archive-replication set after making\n>> not-special prerequisites. So I think this is worth fixing.\n>>\n>> With this patch, KeepFileRestoredFromArchive compares the contents of\n>> just-restored file and the existing file for the same segment only\n>> when:\n>>\n>> - archive_mode = always\n>> and - the file to restore already exists in pgwal\n>> and - it has a .done and/or .ready status file.\n>>\n>> which doesn't happen usually. Then the function skips archive\n>> notification if the contents are identical. The included TAP test is\n>> working both on Linux and Windows.\n> \n> \n> Thank you for the analysis and the patch.\n> I'll try the patch tomorrow.\n> \n> I just noticed that this thread is still tied to another thread\n> (it's not an independent thread). To fix that, it may be better to\n> create a new thread again. \n\n\nI've tried your patch. Unfortunately, it didn't seem to have any good\neffect on the script I sent to reproduce the problem.\n\nI understand that, as Stefan says, the test and cp commands have\nproblems and should not be used for archive commands. 
Maybe this is not\na big problem for the community.\nNevertheless, even if we do not improve the feature, I think it is a\ngood idea to explicitly state in the documentation that archiving may\nfail under certain conditions for new users.\n\nI'd like to hear the opinions of experts on the archive command.\n\nP.S.\nMy customer's problem has already been solved, so it's ok. I've\nemailed -hackers with the aim of preventing users from encountering\nthe same problem.\n\n\nRegards,\nTatsuro Yamada\n\n\n\n\n", "msg_date": "Tue, 08 Jun 2021 18:19:04 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Tue, Jun 8, 2021 at 4:47 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have changed for as per 9.6 but I am seeing some crash (both\n> with/without fix), I could not figure out the reason, it did not\n> generate any core dump, although I changed pg_ctl in PostgresNode.pm\n> to use \"-c\" so that it can generate core but it did not generate any\n> core file.\n\nI think the problem is here:\n\nCan't locate object method \"lsn\" via package \"PostgresNode\" at\nt/025_stuck_on_old_timeline.pl line 84.\n\nWhen that happens, it bails out, and cleans everything up, doing an\nimmediate shutdown of all the nodes. The 'lsn' method was added by\ncommit fb093e4cb36fe40a1c3f87618fb8362845dae0f0, so it only appears in\nv10 and later. I think maybe we can think of back-porting that to 9.6.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Jun 2021 12:26:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
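As an aside, pinning down which commit introduced a symbol — as done above for the `lsn` method and commit fb093e4c — is what git's pickaxe search (`git log -S`) automates; a self-contained illustration in a throwaway repository (assumes git is on PATH):

```shell
# `git log -S <string>` lists commits that change the number of
# occurrences of <string> -- the quick way to answer "which commit
# added this method?".
repo=$(mktemp -d)
git -C "$repo" init -q
printf 'sub lsn { ... }\n' > "$repo/PostgresNode.pm"
git -C "$repo" add PostgresNode.pm
git -C "$repo" -c user.name=t -c user.email=t@example.com \
    commit -q -m 'add lsn method'
found=$(git -C "$repo" log -S 'sub lsn' --format=%s)
echo "$found"   # add lsn method
rm -rf "$repo"
```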
}, { "msg_contents": "On Tue, Jun 8, 2021 at 12:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think the problem is here:\n>\n> Can't locate object method \"lsn\" via package \"PostgresNode\" at\n> t/025_stuck_on_old_timeline.pl line 84.\n>\n> When that happens, it bails out, and cleans everything up, doing an\n> immediate shutdown of all the nodes. The 'lsn' method was added by\n> commit fb093e4cb36fe40a1c3f87618fb8362845dae0f0, so it only appears in\n> v10 and later. I think maybe we can think of back-porting that to 9.6.\n\nHere's an updated set of patches. I removed the extra teardown_node\ncalls per Kyotaro Horiguchi's request. I adopted his suggestion for\nsetting a $perlbin variable from $^X, but found that $perlbin was\nundefined, so I split the incantation into two lines to fix that. I\nupdated the code to use ->promote() instead of calling pg_promote(),\nand to use poll_query_until() afterwards to wait for promotion as\nsuggested by Dilip. Also, I added a comment to the change in xlog.c.\n\nThen I tried to get things working on 9.6. There's a patch attached to\nback-port a couple of PostgresNode.pm methods from 10 to 9.6, and also\na version of the main patch attached with the necessary wal->xlog,\nlsn->location renaming. Unfortunately ... the new test case still\nfails on 9.6 in a way that looks an awful lot like the bug isn't\nactually fixed:\n\nLOG: primary server contains no more WAL on requested timeline 1\ncp: /Users/rhaas/pgsql/src/test/recovery/tmp_check/data_primary_enMi/archives/000000010000000000000003:\nNo such file or directory\n(repeated many times)\n\nI find that the same failure happens if I back-port the master version\nof the patch to v10 or v11, but if I back-port it to v12 or v13 then\nthe test passes as expected. I haven't figured out what the issue is\nyet. I also noticed that if I back-port it to v12 and then revert the\ncode change, the test still passes. 
So I think there may be something\nsubtly wrong with this test case yet. Or maybe a code bug.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 8 Jun 2021 16:37:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On 2021/06/08 18:19, Tatsuro Yamada wrote:\n> I've tried your patch. Unfortunately, it didn't seem to have any good\n> effect on the script I sent to reproduce the problem.\n\nOops! The patch forgot about history files.\n\nI checked the attached with your repro script and it works fine.\n\n> I understand that, as Stefan says, the test and cp commands have\n> problems and should not be used for archive commands. Maybe this is not\n> a big problem for the community.\n> Nevertheless, even if we do not improve the feature, I think it is a\n> good idea to explicitly state in the documentation that archiving may\n> fail under certain conditions for new users.\n>\n> I'd like to hear the opinions of experts on the archive command.\n>\n> P.S.\n> My customer's problem has already been solved, so it's ok. I've\n> emailed -hackers with the aim of preventing users from encountering\n> the same problem.\n>\nI understand that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 9 Jun 2021 11:47:21 +0900", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "Hi Horiguchi-san,\n\nOn 2021/06/09 11:47, Kyotaro Horiguchi wrote:\n> On 2021/06/08 18:19, Tatsuro Yamada wrote:\n>> I've tried your patch. Unfortunately, it didn't seem to have any good\n>> effect on the script I sent to reproduce the problem.\n> \n> Oops! The patch forgot about history files.\n> \n> I checked the attached with your repro script and it works fine.\n\n\nThank you for fixing the patch.\nThe new patch works well in my environment. 
:-D\n\n\nRegards,\nTatsuro Yamada\n\n\n\n", "msg_date": "Wed, 09 Jun 2021 13:55:19 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Wed, Jun 9, 2021 at 2:07 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Then I tried to get things working on 9.6. There's a patch attached to\n> back-port a couple of PostgresNode.pm methods from 10 to 9.6, and also\n> a version of the main patch attached with the necessary wal->xlog,\n> lsn->location renaming. Unfortunately ... the new test case still\n> fails on 9.6 in a way that looks an awful lot like the bug isn't\n> actually fixed:\n>\n> LOG: primary server contains no more WAL on requested timeline 1\n> cp:\n> /Users/rhaas/pgsql/src/test/recovery/tmp_check/data_primary_enMi/archives/000000010000000000000003:\n> No such file or directory\n> (repeated many times)\n>\n> I find that the same failure happens if I back-port the master version\n> of the patch to v10 or v11,\n\n\nI think this fails because prior to v12 the recovery target tli was not set\nto the latest by default because it was not GUC at that time. 
So after\nbelow fix it started passing on v11(only tested on v11 so far).\n\n\ndiff --git a/src/test/recovery/t/025_stuck_on_old_timeline.pl\nb/src/test/recovery/t/025_stuck_on_old_timeline.pl\nindex 842878a..b3ce5da 100644\n--- a/src/test/recovery/t/025_stuck_on_old_timeline.pl\n+++ b/src/test/recovery/t/025_stuck_on_old_timeline.pl\n@@ -50,6 +50,9 @@ my $node_cascade = get_new_node('cascade');\n $node_cascade->init_from_backup($node_standby, $backup_name,\n has_streaming => 1);\n $node_cascade->enable_restoring($node_primary);\n+$node_cascade->append_conf('recovery.conf', qq(\n+recovery_target_timeline='latest'\n+));\n\nBut now it started passing even without the fix and the log says that it\nnever tried to stream from primary using TL 1 so it never hit the defect\nlocation.\n\n2021-06-09 12:11:08.618 IST [122456] LOG: entering standby mode\n2021-06-09 12:11:08.622 IST [122456] LOG: restored log file\n\"00000002.history\" from archive\ncp: cannot stat\n‘/home/dilipkumar/work/PG/postgresql/src/test/recovery/tmp_check/t_025_stuck_on_old_timeline_primary_data/archives/000000010000000000000002’:\nNo such file or directory\n2021-06-09 12:11:08.627 IST [122456] LOG: redo starts at 0/2000028\n2021-06-09 12:11:08.627 IST [122456] LOG: consistent recovery state\nreached at 0/3000000\n\nNext, I will investigate, without a fix on v11 (maybe v12, v10..) why it is\nnot hitting the defect location at all. And after that, I will check the\nstatus on other older versions.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Wed, Jun 9, 2021 at 2:07 AM Robert Haas <robertmhaas@gmail.com> wrote:\nThen I tried to get things working on 9.6. There's a patch attached to\nback-port a couple of PostgresNode.pm methods from 10 to 9.6, and also\na version of the main patch attached with the necessary wal->xlog,\nlsn->location renaming. Unfortunately ... 
the new test case still\nfails on 9.6 in a way that looks an awful lot like the bug isn't\nactually fixed:\n\nLOG:  primary server contains no more WAL on requested timeline 1\ncp: /Users/rhaas/pgsql/src/test/recovery/tmp_check/data_primary_enMi/archives/000000010000000000000003:\nNo such file or directory\n(repeated many times)\n\nI find that the same failure happens if I back-port the master version\nof the patch to v10 or v11, I think this fails because prior to v12 the recovery target tli was not set to the latest by default because it was not GUC at that time.  So after below fix it started passing on v11(only tested on v11 so far).diff --git a/src/test/recovery/t/025_stuck_on_old_timeline.pl b/src/test/recovery/t/025_stuck_on_old_timeline.plindex 842878a..b3ce5da 100644--- a/src/test/recovery/t/025_stuck_on_old_timeline.pl+++ b/src/test/recovery/t/025_stuck_on_old_timeline.pl@@ -50,6 +50,9 @@ my $node_cascade = get_new_node('cascade'); $node_cascade->init_from_backup($node_standby, $backup_name,        has_streaming => 1); $node_cascade->enable_restoring($node_primary);+$node_cascade->append_conf('recovery.conf', qq(+recovery_target_timeline='latest'+)); But now it started passing even without the fix and the log says that it never tried to stream from primary using TL 1 so it never hit the defect location.2021-06-09 12:11:08.618 IST [122456] LOG:  entering standby mode2021-06-09 12:11:08.622 IST [122456] LOG:  restored log file \"00000002.history\" from archivecp: cannot stat ‘/home/dilipkumar/work/PG/postgresql/src/test/recovery/tmp_check/t_025_stuck_on_old_timeline_primary_data/archives/000000010000000000000002’: No such file or directory2021-06-09 12:11:08.627 IST [122456] LOG:  redo starts at 0/20000282021-06-09 12:11:08.627 IST [122456] LOG:  consistent recovery state reached at 0/3000000Next, I will investigate, without a fix on v11 (maybe v12, v10..) why it is not hitting the defect location at all.  
And after that, I will check the status on other older versions. -- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 9 Jun 2021 12:14:50 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Hi\n\n> Thank you for fixing the patch.\n> The new patch works well in my environment. :-D\n\nThis may not be important at this time since it is a\nPoC patch, but I would like to inform you that there\nwas a line that contained multiple spaces instead of tabs.\n\n$ git diff --check\nsrc/backend/access/transam/xlogarchive.c:465: trailing whitespace.\n+\n\nRegards,\nTatsuro Yamada\n\n\n\n", "msg_date": "Wed, 09 Jun 2021 15:58:28 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "\n\nOn 2021/06/09 15:58, Tatsuro Yamada wrote:\n> Hi\n> \n>> Thank you for fixing the patch.\n>> The new patch works well in my environment. :-D\n> \n> This may not be important at this time since it is a\n> PoC patch, but I would like to inform you that there\n> was a line that contained multiple spaces instead of tabs.\n> \n> $ git diff --check\n> src/backend/access/transam/xlogarchive.c:465: trailing whitespace.\n> +\n\nEven with the patch, if \"test ! 
-f ...\" is used in archive_command,\nyou may still *easily* get the trouble that WAL archiving keeps failing?\n\nInstead, we should consider and document \"better\" command for\narchive_command, or implement something like pg_archivecopy command\ninto the core (as far as I remember, there was the discussion about\nthis feature before...)?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 9 Jun 2021 16:23:47 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "Hi,\n\nOn 2021/06/09 16:23, Fujii Masao wrote:\n> On 2021/06/09 15:58, Tatsuro Yamada wrote:\n>> This may not be important at this time since it is a\n>> PoC patch, but I would like to inform you that there\n>> was a line that contained multiple spaces instead of tabs.\n>>\n>> $ git diff --check\n>> src/backend/access/transam/xlogarchive.c:465: trailing whitespace.\n>> +\n> \n> Even with the patch, if \"test ! -f ...\" is used in archive_command,\n> you may still *easily* get the trouble that WAL archiving keeps failing?\n\nThanks for your comment.\n\nYes, it may solve the error when using the test command, but it is\ndangerous to continue using the cp command, which is listed as an\nexample of an archive command.\n\n \n> Instead, we should consider and document \"better\" command for\n> archive_command, or implement something like pg_archivecopy command\n> into the core (as far as I remember, there was the discussion about\n> this feature before...)?\n\n\nI agree with that idea.\nSince archiving is important for all users, I think there should be\neither a better and safer command in the documentation, or an archive\ncommand (pg_archivecopy?) that we provide as a community, as you said.\nI am curious about the conclusions of past discussions. 
:)\n\n\nRegards,\nTatsuro Yamada\n\n\n\n\n", "msg_date": "Wed, 09 Jun 2021 16:56:14 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Wed, Jun 9, 2021 at 12:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jun 9, 2021 at 2:07 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> 2021-06-09 12:11:08.618 IST [122456] LOG: entering standby mode\n> 2021-06-09 12:11:08.622 IST [122456] LOG: restored log file \"00000002.history\" from archive\n> cp: cannot stat ‘/home/dilipkumar/work/PG/postgresql/src/test/recovery/tmp_check/t_025_stuck_on_old_timeline_primary_data/archives/000000010000000000000002’: No such file or directory\n> 2021-06-09 12:11:08.627 IST [122456] LOG: redo starts at 0/2000028\n> 2021-06-09 12:11:08.627 IST [122456] LOG: consistent recovery state reached at 0/3000000\n>\n> Next, I will investigate, without a fix on v11 (maybe v12, v10..) why it is not hitting the defect location at all. And after that, I will check the status on other older versions.\n\nReason for the problem was that the \"-Xnone\" parameter was not\naccepted by \"sub backup\" in PostgresNode.pm so I created that for\nbackpatch. With attached patches I am to make it pass in v12,v11,v10\n(with fix) and fail (without fix). However, we will have to make some\nchange for 9.6 because pg_basebackup doesn't support -Xnone on 9.6,\nmaybe we can delete the content from pg_wal after the backup, if we\nthink that approach looks fine then I will make the changes for 9.6 as\nwell.\n\nNote: for param backport for v12 and v11 same patch getting applied\nbut for v10 due to some conflict we need a separate patch (both\nattached).\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 9 Jun 2021 13:37:00 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" 
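The 9.6 fallback suggested here — since pg_basebackup only learned -Xnone later — amounts to taking the backup normally and then emptying the WAL directory in the copy; a sketch (directory names differ across versions, everything else hypothetical):

```shell
# Approximate -Xnone on a pre-10 base backup by removing the copied WAL
# files afterwards (9.6 names the directory pg_xlog, v10+ pg_wal).
strip_wal_from_backup() {
    waldir="$1/pg_xlog"
    [ -d "$waldir" ] || waldir="$1/pg_wal"
    # WAL segment and history file names are all-hex; keep archive_status/
    find "$waldir" -maxdepth 1 -type f -name '[0-9A-F]*' -exec rm -f {} +
}

# Demo on a fake backup layout.
backup=$(mktemp -d)
mkdir -p "$backup/pg_wal/archive_status"
touch "$backup/pg_wal/000000010000000000000001"
strip_wal_from_backup "$backup"
ls "$backup/pg_wal"   # only archive_status remains
```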
}, { "msg_contents": "At Wed, 09 Jun 2021 16:56:14 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in \n> Hi,\n> \n> On 2021/06/09 16:23, Fujii Masao wrote:\n> > On 2021/06/09 15:58, Tatsuro Yamada wrote:\n> >> This may not be important at this time since it is a\n> >> PoC patch, but I would like to inform you that there\n> >> was a line that contained multiple spaces instead of tabs.\n> >>\n> >> $ git diff --check\n> >> src/backend/access/transam/xlogarchive.c:465: trailing whitespace.\n> >> +\n> > Even with the patch, if \"test ! -f ...\" is used in archive_command,\n> > you may still *easily* get the trouble that WAL archiving keeps\n> > failing?\n\nI'm not sure, but in regard to the the cause that the patch treats, if\nan already-archived file is recycled or deleted then the same file is\nrestored from archive, that could happen. But the WAL segment that\ncontains the latest checkpoint won't be deleted. The same can be said\non history files.\n\n> Thanks for your comment.\n> \n> Yes, it may solve the error when using the test command, but it is\n> dangerous to continue using the cp command, which is listed as an\n> example of an archive command.\n\n\"test\" command?\n\nAt first I thought that the archive command needs to compare the whole\nfile content *always*, but that happens with the same frequency with\nthe patch runs a whole-file comparison.\n\n> > Instead, we should consider and document \"better\" command for\n> > archive_command, or implement something like pg_archivecopy command\n> > into the core (as far as I remember, there was the discussion about\n> > this feature before...)?\n> \n> \n> I agree with that idea.\n> Since archiving is important for all users, I think there should be\n> either a better and safer command in the documentation, or an archive\n> command (pg_archivecopy?) that we provide as a community, as you said.\n> I am curious about the conclusions of past discussions. 
:)\n\nHow perfect does the officially-provided script or command need to be? The\nreason that the script in the documentation is so simple is, I guess,\nthat we don't/can't offer steps sufficiently solid for all directions.\n\nSince we didn't notice that the \"test ! -f\" harms, it has been\nthere, but finally we need to remove it. Instead, we need to write\ndown the known significant requirements in words. I'm afraid that a\nconcrete script would be a bit complex for the documentation..\n\nSo what we can do is:\n\n - Remove the \"test ! -f\" from the sample command (for *nixen).\n\n - Rewrite at least the following portion in the documentation. [1]\n\n > The archive command should generally be designed to refuse to\n > overwrite any pre-existing archive file. This is an important\n > safety feature to preserve the integrity of your archive in case\n > of administrator error (such as sending the output of two\n > different servers to the same archive directory).\n > \n > It is advisable to test your proposed archive command to ensure\n > that it indeed does not overwrite an existing file, and that it\n > returns nonzero status in this case. The example command above\n > for Unix ensures this by including a separate test step. On some\n > Unix platforms, cp has switches such as -i that can be used to do\n > the same thing less verbosely, but you should not rely on these\n > without verifying that the right exit status is returned. (In\n > particular, GNU cp will return status zero when -i is used and\n > the target file already exists, which is not the desired\n > behavior.)\n\nThe replacement would be something like:\n\n\"There is a case where WAL files and timeline history files are archived\nmore than once. 
The archive command should generally be designed to\nrefuse to replace any pre-existing archive file with a file with\ndifferent content but to return zero if the file to be archived is\nidentical with the preexisting file.\"\n\nBut I'm not sure how it looks like.. (even ignoring the broken\nphrasing..)\n \n\n1: https://www.postgresql.org/docs/11/continuous-archiving.html\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 09 Jun 2021 18:12:11 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Wed, Jun 9, 2021 at 1:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jun 9, 2021 at 12:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Jun 9, 2021 at 2:07 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > 2021-06-09 12:11:08.618 IST [122456] LOG: entering standby mode\n> > 2021-06-09 12:11:08.622 IST [122456] LOG: restored log file \"00000002.history\" from archive\n> > cp: cannot stat ‘/home/dilipkumar/work/PG/postgresql/src/test/recovery/tmp_check/t_025_stuck_on_old_timeline_primary_data/archives/000000010000000000000002’: No such file or directory\n> > 2021-06-09 12:11:08.627 IST [122456] LOG: redo starts at 0/2000028\n> > 2021-06-09 12:11:08.627 IST [122456] LOG: consistent recovery state reached at 0/3000000\n> >\n> > Next, I will investigate, without a fix on v11 (maybe v12, v10..) why it is not hitting the defect location at all. And after that, I will check the status on other older versions.\n>\n> Reason for the problem was that the \"-Xnone\" parameter was not\n> accepted by \"sub backup\" in PostgresNode.pm so I created that for\n> backpatch. With attached patches I am to make it pass in v12,v11,v10\n> (with fix) and fail (without fix). 
However, we will have to make some\n> change for 9.6 because pg_basebackup doesn't support -Xnone on 9.6,\n> maybe we can delete the content from pg_wal after the backup, if we\n> think that approach looks fine then I will make the changes for 9.6 as\n> well.\n>\n> Note: for param backport for v12 and v11 same patch getting applied\n> but for v10 due to some conflict we need a separate patch (both\n> attached).\n\nI have fixed it for 9.6 as well by removing the wal from the xlog\ndirectory. Attaching all the patches in single mail to avoid\nconfusion.\n\nNote:\nv7-0001 applies to master, v13,v12 (but for v12 before this we need to\napply backport)\nv12-v8-0001-Backport is same as v11-v8-0001-Backport (duplicated for\nversion wise separation)\nv11-v8-0002 is same as v10-v8-0002\n\nBasically, for v12 and v11 same backport patch works and for V11 and\nV10 same main patch works, still I duplicated them to avoid confusion.\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 9 Jun 2021 15:11:59 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Wed, Jun 9, 2021 at 4:07 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Reason for the problem was that the \"-Xnone\" parameter was not\n> accepted by \"sub backup\" in PostgresNode.pm so I created that for\n> backpatch. With attached patches I am to make it pass in v12,v11,v10\n> (with fix) and fail (without fix). However, we will have to make some\n> change for 9.6 because pg_basebackup doesn't support -Xnone on 9.6,\n> maybe we can delete the content from pg_wal after the backup, if we\n> think that approach looks fine then I will make the changes for 9.6 as\n> well.\n\nAh. I looked into this and found that this is because commit\n081876d75ea15c3bd2ee5ba64a794fd8ea46d794 is new in master, so actually\nthat change is absent in all the back-branches. 
I have now back-ported\nthat portion of that commit to v13, v12, v11, and v10.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Jun 2021 12:38:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Wed, Jun 9, 2021 at 4:07 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Reason for the problem was that the \"-Xnone\" parameter was not\n> accepted by \"sub backup\" in PostgresNode.pm so I created that for\n> backpatch. With attached patches I am to make it pass in v12,v11,v10\n> (with fix) and fail (without fix). However, we will have to make some\n> change for 9.6 because pg_basebackup doesn't support -Xnone on 9.6,\n> maybe we can delete the content from pg_wal after the backup, if we\n> think that approach looks fine then I will make the changes for 9.6 as\n> well.\n\nGot it. I have now committed the patch to all branches, after adapting\nyour changes just a little bit.\n\nThanks to you and Kyotaro-san for all the time spent on this. What a slog!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Jun 2021 17:03:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Got it. I have now committed the patch to all branches, after adapting\n> your changes just a little bit.\n> Thanks to you and Kyotaro-san for all the time spent on this. What a slog!\n\nconchuela failed its first encounter with this test case:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2021-06-09%2021%3A12%3A25\n\nThat machine has a certain, er, history of flakiness; so this may\nnot mean anything. 
Still, we'd better keep an eye out to see if\nthe test needs more stabilization.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Jun 2021 19:09:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Wed, 09 Jun 2021 19:09:54 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Got it. I have now committed the patch to all branches, after adapting\n> > your changes just a little bit.\n> > Thanks to you and Kyotaro-san for all the time spent on this. What a slog!\n> \n> conchuela failed its first encounter with this test case:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2021-06-09%2021%3A12%3A25\n> \n> That machine has a certain, er, history of flakiness; so this may\n> not mean anything. Still, we'd better keep an eye out to see if\n> the test needs more stabilization.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=conchuela&dt=2021-06-09%2021%3A12%3A25&stg=recovery-check\n\n> ==~_~===-=-===~_~== pgsql.build/src/test/recovery/tmp_check/log/025_stuck_on_old_timeline_cascade.log ==~_~===-=-===~_~==\n....\n> 2021-06-09 23:31:10.439 CEST [893820:1] LOG: started streaming WAL from primary at 0/2000000 on timeline 1\n> 2021-06-09 23:31:10.439 CEST [893820:2] FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000000000002 has already been removed\n\nThe script 025_stuck_on_olde_timeline.pl (and I) forgets to set\nwal_keep_size(segments).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 10 Jun 2021 10:12:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
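The omission diagnosed just above could be repaired by something along these lines — a hypothetical sketch; the 64MB value and the use of $PGDATA are assumptions, and branches before v13 would spell it wal_keep_segments instead:

```shell
# Hypothetical sketch: pin enough WAL on the sending node so the
# cascading standby in 025_stuck_on_old_timeline.pl can still stream
# segment 000000010000000000000002 after older segments are recycled.
: "${PGDATA:=$(mktemp -d)}"                      # scratch-dir fallback
echo "wal_keep_size = '64MB'" >> "$PGDATA/postgresql.conf"
```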
}, { "msg_contents": "On Wed, Jun 9, 2021 at 9:12 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=conchuela&dt=2021-06-09%2021%3A12%3A25&stg=recovery-check\n>\n> > ==~_~===-=-===~_~== pgsql.build/src/test/recovery/tmp_check/log/025_stuck_on_old_timeline_cascade.log ==~_~===-=-===~_~==\n> ....\n> > 2021-06-09 23:31:10.439 CEST [893820:1] LOG: started streaming WAL from primary at 0/2000000 on timeline 1\n> > 2021-06-09 23:31:10.439 CEST [893820:2] FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000000000002 has already been removed\n>\n> The script 025_stuck_on_olde_timeline.pl (and I) forgets to set\n> wal_keep_size(segments).\n\nThanks for the analysis and the patches. I have committed them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Jun 2021 09:56:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Greetings,\n\n* Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> At Wed, 09 Jun 2021 16:56:14 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in \n> > On 2021/06/09 16:23, Fujii Masao wrote:\n> > > Instead, we should consider and document \"better\" command for\n> > > archive_command, or implement something like pg_archivecopy command\n> > > into the core (as far as I remember, there was the discussion about\n> > > this feature before...)?\n> > \n> > I agree with that idea.\n> > Since archiving is important for all users, I think there should be\n> > either a better and safer command in the documentation, or an archive\n> > command (pg_archivecopy?) that we provide as a community, as you said.\n> > I am curious about the conclusions of past discussions. :)\n> \n> How perfect the officially-provided script or command need to be? 
The\n> reason that the script in the documentation is so simple is, I guess,\n> we don't/can't offer a steps sufficiently solid for all-directions.\n> \n> Since we didn't noticed that the \"test ! -f\" harms so it has been\n> there but finally we need to remove it. Instead, we need to write\n> doen the known significant requirements by words. I'm afraid that the\n> concrete script would be a bit complex for the documentation..\n\nWe don't have any 'officially-provided' tool for archive command.\n\n> So what we can do that is:\n> \n> - Remove the \"test ! -f\" from the sample command (for *nixen).\n\n... or just remove the example entirely. It really doesn't do anything\ngood for us, in my view.\n\n> - Rewrite at least the following portion in the documentation. [1]\n> \n> > The archive command should generally be designed to refuse to\n> > overwrite any pre-existing archive file. This is an important\n> > safety feature to preserve the integrity of your archive in case\n> > of administrator error (such as sending the output of two\n> > different servers to the same archive directory).\n> > \n> > It is advisable to test your proposed archive command to ensure\n> > that it indeed does not overwrite an existing file, and that it\n> > returns nonzero status in this case. The example command above\n> > for Unix ensures this by including a separate test step. On some\n> > Unix platforms, cp has switches such as -i that can be used to do\n> > the same thing less verbosely, but you should not rely on these\n> > without verifying that the right exit status is returned. (In\n> > particular, GNU cp will return status zero when -i is used and\n> > the target file already exists, which is not the desired\n> > behavior.)\n> \n> The replacement would be something like:\n> \n> \"There is a case where WAL file and timeline history files is archived\n> more than once. 
The archive command should generally be designed to\n> refuse to replace any pre-existing archive file with a file with\n> different content but to return zero if the file to be archived is\n> identical with the preexisting file.\"\n> \n> But I'm not sure how it looks like.. (even ignoring the broken\n> phrasing..)\n\nThere is so much more that we should be including here, like \"you should\nmake sure your archive command will reliably sync the WAL file to disk\nbefore returning success to PG, since PG will feel free to immediately\nremove the WAL file once archive command has returned successfully\", and\n\"the archive command should check that there exists a .history file for\nany timeline after timeline 1 in the repo for the WAL file that's being\narchived\" and \"the archive command should allow the exist, binary\nidentical, WAL file to be archived multiple times without error, but\nshould error if a new WAL file is archived which would overwrite a\nbinary distinct WAL file in the repo\", and \"the archive command should\ncheck the WAL header to make sure that the WAL file matches the cluster\nin the corresponding backup repo\", and \"whatever is expiring the WAL\nfiles after they've been archived should make sure to not expire out any\nWAL that is needed for any of the backups that remain\", and \"oh, by the\nway, depending on the exit code of the command, PG may consider the\nfailure to be something which can be retried, or not\", and other things\nthat I can't think of off the top of my head right now.\n\nI have to say that it gets to a point where it feels like we're trying\nto document everything about writing a C extension to PG using the\nhooks which we make available. 
We've generally agreed that folks should\nbe looking at the source code if they're writing a serious C extension\nand it's certainly the case that, in writing a serious archive command\nand backup tool, getting into the PG source code has been routinely\nnecessary.\n\nThanks,\n\nStephen", "msg_date": "Thu, 10 Jun 2021 10:00:21 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "At Thu, 10 Jun 2021 09:56:51 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Wed, Jun 9, 2021 at 9:12 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=conchuela&dt=2021-06-09%2021%3A12%3A25&stg=recovery-check\n> >\n> > > ==~_~===-=-===~_~== pgsql.build/src/test/recovery/tmp_check/log/025_stuck_on_old_timeline_cascade.log ==~_~===-=-===~_~==\n> > ....\n> > > 2021-06-09 23:31:10.439 CEST [893820:1] LOG: started streaming WAL from primary at 0/2000000 on timeline 1\n> > > 2021-06-09 23:31:10.439 CEST [893820:2] FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000000000002 has already been removed\n> >\n> > The script 025_stuck_on_olde_timeline.pl (and I) forgets to set\n> > wal_keep_size(segments).\n> \n> Thanks for the analysis and the patches. I have committed them.\n\nThanks for committing it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Jun 2021 10:40:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Thu, 10 Jun 2021 09:56:51 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n>> Thanks for the analysis and the patches. 
I have committed them.\n\n> Thanks for committing it.\n\nPlease note that conchuela and jacana are still failing ...\n\nconchuela's failure is evidently not every time, but this test\ndefinitely postdates the \"fix\":\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2021-06-10%2014%3A09%3A08\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Jun 2021 21:53:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Thu, 10 Jun 2021 10:00:21 -0400, Stephen Frost <sfrost@snowman.net> wrote in \n> Greetings,\n> \n> * Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> > At Wed, 09 Jun 2021 16:56:14 +0900, Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote in \n> > > On 2021/06/09 16:23, Fujii Masao wrote:\n> > > > Instead, we should consider and document \"better\" command for\n> > > > archive_command, or implement something like pg_archivecopy command\n> > > > into the core (as far as I remember, there was the discussion about\n> > > > this feature before...)?\n> > > \n> > > I agree with that idea.\n> > > Since archiving is important for all users, I think there should be\n> > > either a better and safer command in the documentation, or an archive\n> > > command (pg_archivecopy?) that we provide as a community, as you said.\n> > > I am curious about the conclusions of past discussions. :)\n> > \n> > How perfect the officially-provided script or command need to be? The\n> > reason that the script in the documentation is so simple is, I guess,\n> > we don't/can't offer a steps sufficiently solid for all-directions.\n> > \n> > Since we didn't noticed that the \"test ! -f\" harms so it has been\n> > there but finally we need to remove it. Instead, we need to write\n> > doen the known significant requirements by words. 
I'm afraid that the\n> > concrete script would be a bit complex for the documentation..\n> \n> We don't have any 'officially-provided' tool for archive command.\n\nBy the \"officially-provided script\" I meant the \"test ! -f ..\" one. The\nfact that we show it in the documentation (without a caveat) means that\nthe script at least doesn't break the behavior of a server that is running\nnormally, including promotion.\n\n> > So what we can do that is:\n> > \n> > - Remove the \"test ! -f\" from the sample command (for *nixen).\n> \n> ... or just remove the example entirely. It really doesn't do anything\n> good for us, in my view.\n\nYeah. I feel the same. But that also means the usage instruction for the\nreplacements disappears from our documentation. The least problematic\nexample in that regard is just \"cp ..\" without \"test\" as the\ninstruction.\n\n> > The replacement would be something like:\n> > \n> > \"There is a case where WAL file and timeline history files is archived\n> > more than once. 
Yeah, I can understand your sentiment maybe completely.\n\n> make sure your archive command will reliably sync the WAL file to disk\n> before returning success to PG, since PG will feel free to immediately\n> remove the WAL file once archive command has returned successfully\", and\n> \"the archive command should check that there exists a .history file for\n> any timeline after timeline 1 in the repo for the WAL file that's being\n> archived\" and \"the archive command should allow the exist, binary\n> identical, WAL file to be archived multiple times without error, but\n> should error if a new WAL file is archived which would overwrite a\n> binary distinct WAL file in the repo\", and \"the archive command should\n> check the WAL header to make sure that the WAL file matches the cluster\n> in the corresponding backup repo\", and \"whatever is expiring the WAL\n> files after they've been archived should make sure to not expire out any\n> WAL that is needed for any of the backups that remain\", and \"oh, by the\n> way, depending on the exit code of the command, PG may consider the\n> failure to be something which can be retried, or not\", and other things\n> that I can't think of off the top of my head right now.\n> I have to say that it gets to a point where it feels like we're trying\n> to document everything about writing a C extension to PG using the\n> hooks which we make available. We've generally agreed that folks should\n> be looking at the source code if they're writing a serious C extension\n> and it's certainly the case that, in writing a serious archive command\n> and backup tool, getting into the PG source code has been routinely\n> necessary.\n\nNevertheless I agree to it, still don't we need a minimum workable\nsetup as the first step? Something like below.\n\n===\nThe following is an example of the minimal archive_command.\n\nExample: cp %p /blah/%f\n\nHowever, it is far from perfect. 
The following is the discussion about\nwhat is needed for archive_command to be more reliable.\n\n<the long list of the requirements>\n====\n\nAnyway it doesn't seem to be the time to do that, but as now that we\nknow that there's a case where the current example doesn't prevent PG\nfrom working correctly, we cannot use the \"test ! -f\" example and\ncannot suggest \"do not overwrite existing archived files\" without a\ncaveat. At least don't we need to *fix* that parts for now?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Jun 2021 11:25:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "Recently my brain is always twisted..\n\nAt Fri, 11 Jun 2021 11:25:51 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Anyway it doesn't seem to be the time to do that, but as now that we\n- know that there's a case where the current example doesn't prevent PG\n+ know that there's a case where the current example prevents PG\n> from working correctly, we cannot use the \"test ! -f\" example and\n> cannot suggest \"do not overwrite existing archived files\" without a\n> caveat. At least don't we need to *fix* that parts for now?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Jun 2021 11:28:45 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Fri, Jun 11, 2021 at 11:25:51AM +0900, Kyotaro Horiguchi wrote:\n> \n> Nevertheless I agree to it, still don't we need a minimum workable\n> setup as the first step? Something like below.\n> \n> ===\n> The following is an example of the minimal archive_command.\n> \n> Example: cp %p /blah/%f\n> \n> However, it is far from perfect. 
The following is the discussion about\n> what is needed for archive_command to be more reliable.\n> \n> <the long list of the requirements>\n> ====\n\n\"far from perfect\" is a strong understatement for \"appears to work but will\nrandomly and silently breaks everything without a simple way to detect it\".\n\nWe should document a minimum workable setup, but cp isn't an example of that,\nand I don't think that there will be a simple example unless we provide a\ndedicated utility.\n\nIt could however be something along those lines:\n\nExample: archive_program %p /path/to/%d\n\narchive_program being a script ensuring that all those requirements are met:\n<the long list of the requirements>\n\n\n", "msg_date": "Fri, 11 Jun 2021 10:48:32 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "At Thu, 10 Jun 2021 21:53:18 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Thu, 10 Jun 2021 09:56:51 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> >> Thanks for the analysis and the patches. I have committed them.\n> \n> > Thanks for committing it.\n> \n> Please note that conchuela and jacana are still failing ...\n> \n> conchuela's failure is evidently not every time, but this test\n> definitely postdates the \"fix\":\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2021-06-10%2014%3A09%3A08\n\nA different test is failing there. 
Maybe from different issue.\n\n\n> ==~_~===-=-===~_~== pgsql.build/src/test/recovery/tmp_check/log/regress_log_002_archiving ==~_~===-=-===~_~==\n...\n> # Postmaster PID for node \"standby2\" is 342349\n> ### Promoting node \"standby2\"\n> # Running: pg_ctl -D /home/pgbf/buildroot/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_standby2_data/pgdata -l /home/pgbf/buildroot/HEAD/pgsql.build/src/test/recovery/tmp_check/log/002_archiving_standby2.log promote\n> waiting for server to promote................................................................................................................ stopped waiting\n> pg_ctl: server did not promote in time\n> Bail out! system pg_ctl failed\n\n> ==~_~===-=-===~_~== pgsql.build/src/test/recovery/tmp_check/log/002_archiving_standby2.log ==~_~===-=-===~_~==\n...\n> 2021-06-10 16:21:21.870 CEST [342350:9] LOG: received promote request\n> 2021-06-10 16:21:21.870 CEST [342350:10] LOG: redo done at 0/3030200 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.07 s\n> 2021-06-10 16:21:21.870 CEST [342350:11] LOG: last completed transaction was at log time 2021-06-10 16:21:21.010599+02\n> 2021-06-10 16:21:21.893 CEST [342350:12] LOG: restored log file \"000000010000000000000003\" from archive\n> cp: /home/pgbf/buildroot/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_primary_data/archives/00000003.history: No such file or directory\n> 2021-06-10 16:21:21.896 CEST [342350:13] LOG: selected new timeline ID: 3\n(log ends here)\n\n\n> ==~_~===-=-===~_~== pgsql.build/src/test/recovery/tmp_check/log/002_archiving_primary.log ==~_~===-=-===~_~==\n...\n> 2021-06-10 16:21:21.107 CEST [342322:4] 002_archiving.pl LOG: disconnection: session time: 0:00:00.022 user=pgbf database=postgres host=[local]\n> 2021-06-10 16:23:21.965 CEST [342279:4] LOG: received immediate shutdown request\n\nSo the standby2 was stuck after selecting the new timeline and before\nupdating control file and its postmaster couldn't 
even respond to\nSIGQUIT.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Jun 2021 14:07:45 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Fri, 11 Jun 2021 14:07:45 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 10 Jun 2021 21:53:18 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > conchuela's failure is evidently not every time, but this test\n> > definitely postdates the \"fix\":\n\nconchuela failed recovery_check this time, and\n\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2021-06-10%2014%3A09%3A08\n> So the standby2 was stuck after selecting the new timeline and before\n> updating control file and its postmaster couldn't even respond to\n> SIGQUIT.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2021-06-09%2021%3A12%3A25\n\n This is before the \"fix\"\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2021-06-08%2014%3A07%3A46\n\n failed in pg_verifybackupCheck\n\n> ==~_~===-=-===~_~== pgsql.build/src/bin/pg_verifybackup/tmp_check/log/regress_log_003_corruption ==~_~===-=-===~_~==\n...\n> # Failed test 'base backup ok'\n> # at t/003_corruption.pl line 115.\n> # Running: pg_verifybackup /home/pgbf/buildroot/HEAD/pgsql.build/src/bin/pg_verifybackup/tmp_check/t_003_corruption_primary_data/backup/open_directory_fails\n> pg_verifybackup: fatal: could not open file \"/home/pgbf/buildroot/HEAD/pgsql.build/src/bin/pg_verifybackup/tmp_check/t_003_corruption_primary_data/backup/open_directory_fails/backup_manifest\": No such file or directory\n> not ok 38 - intact backup verified\n\nThe manifest file is missing in backup. 
In this case also the servers\nfailed to handle SIGQUIT.\n\n> ==~_~===-=-===~_~== pgsql.build/src/bin/pg_verifybackup/tmp_check/log/003_corruption_primary.log ==~_~===-=-===~_~==\n...\n> 2021-06-08 16:17:41.706 CEST [51792:9] 003_corruption.pl LOG: received replication command: START_REPLICATION SLOT \"pg_basebackup_51792\" 0/B000000 TIMELINE 1\n> 2021-06-08 16:17:41.706 CEST [51792:10] 003_corruption.pl STATEMENT: START_REPLICATION SLOT \"pg_basebackup_51792\" 0/B000000 TIMELINE 1\n(log ends here)\n\nThere seems like some hardware failure?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Jun 2021 14:26:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Thu, 10 Jun 2021 21:53:18 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \ntgl> Please note that conchuela and jacana are still failing ...\n\nI forgot jacana's case..\n\nIt is failing for the issue the first patch should have fixed.\n\n> ==~_~===-=-===~_~== pgsql.build/src/test/recovery/tmp_check/log/025_stuck_on_old_timeline_primary.log ==~_~===-=-===~_~==\n...\n> The system cannot find the path specified.\n> 2021-06-10 22:56:17.754 EDT [60c2d0cf.54c:1] LOG: archive command failed with exit code 1\n> 2021-06-10 22:56:17.754 EDT [60c2d0cf.54c:2] DETAIL: The failed archive command was: /usr/bin/perl \"/home/pgrunner/bf/root/HEAD/pgsql/src/test/recovery/t/cp_history_files\" \"pg_wal\\\\000000010000000000000001\" \"/home/pgrunner/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_025_stuck_on_old_timeline_primary_data/archives/000000010000000000000001\"\n\nthe cp_history_files exits with just \"exit\" for the files with that\nname, which should set status to 0. 
ActivePerl did so.\n\nIf I specify a nonexistent command like /hoge/perl, %ERRORLEVEL% is\nset to 3, not 1.\n\nI haven't figured out what is happening there so far.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Jun 2021 15:14:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Fri, 11 Jun 2021 10:48:32 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Fri, Jun 11, 2021 at 11:25:51AM +0900, Kyotaro Horiguchi wrote:\n> > \n> > Nevertheless I agree to it, still don't we need a minimum workable\n> > setup as the first step? Something like below.\n> > \n> > ===\n> > The following is an example of the minimal archive_command.\n> > \n> > Example: cp %p /blah/%f\n> > \n> > However, it is far from perfect. The following is the discussion about\n> > what is needed for archive_command to be more reliable.\n> > \n> > <the long list of the requirements>\n> > ====\n> \n> \"far from perfect\" is a strong understatement for \"appears to work but will\n> randomly and silently breaks everything without a simple way to detect it\".\n\nI think it's overstating. It sounds like a story of a mission critical\nworld. How perfect archive_command should be depends on the\nrequirements of every system. Simple cp is actually sufficient in a\ncertain range of usages, maybe.\n\n> We should document a minimum workable setup, but cp isn't an example of that,\n> and I don't think that there will be a simple example unless we provide a\n> dedicated utility.\n\nIt looks somewhat strange like \"Well, you need a special track to\ndrive your car, however, we don't have one. 
It's your responsibility\nto construct a track that protects it from accidents perfectly.\".\n\n\"Yeah, I'm not going to push it so hard and don't care it gets some\nsmall scratches, couldn't I drive it on a public road?\"\n\n(Sorry for the bad analogy).\n\nI think cp can be an example as far as we explain the limitations. (On\nthe other hand \"test !-f\" cannot since it actually prevents the server\nfrom working correctly.)\n\n> It could however be something along those lines:\n> \n> Example: archive_program %p /path/to/%d\n> \n> archive_program being a script ensuring that all those requirements are met:\n> <the long list of the requirements>\n\nIsn't it almost saying that anything less than pgBackRest isn't\nqualified as archive_program?\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Jun 2021 15:32:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Fri, Jun 11, 2021 at 03:32:28PM +0900, Kyotaro Horiguchi wrote:\n> At Fri, 11 Jun 2021 10:48:32 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n>> \"far from perfect\" is a strong understatement for \"appears to work but will\n>> randomly and silently breaks everything without a simple way to detect it\".\n\nYeah. Users like unplugging their hosts, because that's *fast* and\neasy to do.\n\n> I think it's overstating. It sounds like a story of a mission critical\n> world. How perfect archive_command should be depends on the\n> requirements of every system. Simple cp is actually sufficient in a\n> certain range of usages, maybe.\n> \n>> We should document a minimum workable setup, but cp isn't an example of that,\n>> and I don't think that there will be a simple example unless we provide a\n>> dedicated utility.\n> \n> I think cp can be an example as far as we explain the limitations. 
(On\n> the other hand \"test !-f\" cannot since it actually prevents the server\n> from working correctly.)\n\nDisagreed. I think that we should not try to change this area until\nwe can document a reliable solution, and a simple \"cp\" is not that.\nHmm. A simple command that could be used as a reference is for example\n\"dd\" that flushes the file by itself, or we could just revisit the\ndiscussions about having a pg_copy command, or we could document a\nsmall utility in perl that does the job.\n--\nMichael", "msg_date": "Fri, 11 Jun 2021 16:08:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Fri, Jun 11, 2021 at 03:32:28PM +0900, Kyotaro Horiguchi wrote:\n> At Fri, 11 Jun 2021 10:48:32 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > \n> > \"far from perfect\" is a strong understatement for \"appears to work but will\n> > randomly and silently breaks everything without a simple way to detect it\".\n> \n> I think it's overstating. It sounds like a story of a mission critical\n> world. How perfect archive_command should be depends on the\n> requirements of every system. Simple cp is actually sufficient in a\n> certain range of usages, maybe.\n\nI disagree, cp is probably the worst command that can be used for this purpose.\nOn top of the previous problems already mentioned, you also have the fact that\nthe copy isn't atomic. It means that any concurrent restore_command (or\nanything that would consume the archived files) will happily process a\nhalf-copied WAL file, and in case of any error during the copy you end up with a\nfile for which you don't know if it contains valid data or not. 
I don't see\nany case where you would actually want to use that, unless maybe you want to\nbenchmark how long it takes before you lose some data.\n\n> > We should document a minimum workable setup, but cp isn't an example of that,\n> > and I don't think that there will be a simple example unless we provide a\n> > dedicated utility.\n> \n> It looks somewhat strange like \"Well, you need a special track to\n> drive your car, however, we don't have one. It's your responsibility\n> to construct a track that protects it from accidents perfectly.\".\n> \n> \"Yeah, I'm not going to push it so hard and don't care it gets some\n> small scratches, couldn't I drive it on a public road?\"\n> \n> (Sorry for the bad analogy).\n\nI think that a better analogy would be \"I don't need working brakes on my car\nsince I only drive on the highway and there aren't any red lights there\".\n\n> Isn't it almost saying that anything less than pgBackRest isn't\n> qualified as archive_program?\n\nI don't know, I'm assuming that barman also provides one, as do wal-e and\nwal-g (assuming that the distant providers do their part of the job correctly).\nMaybe there are other tools too. But as long as we don't document what exactly\nthe requirements are, it's not really a surprise that most people don't\nimplement them.\n\n\n", "msg_date": "Fri, 11 Jun 2021 15:18:03 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" 
}, { "msg_contents": "On Fri, Jun 11, 2021 at 11:45 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 10 Jun 2021 21:53:18 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> tgl> Please note that conchuela and jacana are still failing ...\n>\n> I forgot jacana's case..\n>\n> It is failing for the issue the first patch should have fixed.\n>\n> > ==~_~===-=-===~_~== pgsql.build/src/test/recovery/tmp_check/log/025_stuck_on_old_timeline_primary.log ==~_~===-=-===~_~==\n> ...\n> > The system cannot find the path specified.\n> > 2021-06-10 22:56:17.754 EDT [60c2d0cf.54c:1] LOG: archive command failed with exit code 1\n> > 2021-06-10 22:56:17.754 EDT [60c2d0cf.54c:2] DETAIL: The failed archive command was: /usr/bin/perl \"/home/pgrunner/bf/root/HEAD/pgsql/src/test/recovery/t/cp_history_files\" \"pg_wal\\\\000000010000000000000001\" \"/home/pgrunner/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_025_stuck_on_old_timeline_primary_data/archives/000000010000000000000001\"\n\nWal file copying will not create a problem for us, but I noticed that\nit is failing in copying the history files as well and that is\ncreating a problem.\n\n2021-06-10 22:56:28.940 EDT [60c2d0db.1208:1] LOG: archive command\nfailed with exit code 1\n2021-06-10 22:56:28.940 EDT [60c2d0db.1208:2] DETAIL: The failed\narchive command was: /usr/bin/perl\n\"/home/pgrunner/bf/root/HEAD/pgsql/src/test/recovery/t/cp_history_files\"\n\"pg_wal\\\\00000002.history\"\n\"/home/pgrunner/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_025_stuck_on_old_timeline_primary_data/archives/00000002.history\"\n\nI have noticed that the archive command is failing in some other test\ncase too (002_archiving_standby2.log), see below logs.\n\n==~_~===-=-===~_~==\npgsql.build/src/test/recovery/tmp_check/log/002_archiving_standby2.log\n==~_~===-=-===~_~==\n...\n\n 0 file(s) copied.\n2021-06-10 22:44:34.467 EDT [60c2ce10.1270:1] LOG: archive command\nfailed with exit code 1\n2021-06-10 22:44:34.467 EDT 
[60c2ce10.1270:2] DETAIL: The failed\narchive command was: copy \"pg_wal\\\\00000003.history\"\n\"c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_primary_data/archives\\\\00000003.history\"\nThe system cannot find the path specified.\n 0 file(s) copied.\n2021-06-10 22:44:35.478 EDT [60c2ce10.1270:3] LOG: archive command\nfailed with exit code 1\n2021-06-10 22:44:35.478 EDT [60c2ce10.1270:4] DETAIL: The failed\narchive command was: copy \"pg_wal\\\\00000003.history\"\n\"c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_primary_data/archives\\\\00000003.history\"\n2021-06-10 22:44:36.113 EDT [60c2ce0c.283c:5] LOG: received immediate\nshutdown request\n2021-06-10 22:44:36.129 EDT [60c2ce0c.283c:6] LOG: database system is shut down\n\nI am not able to figure out why the archive command is failing.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Jun 2021 15:19:15 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "Dilip Kumar <dilipbalaut@gmail.com> writes:\n> On Fri, Jun 11, 2021 at 11:45 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>>> ==~_~===-=-===~_~== pgsql.build/src/test/recovery/tmp_check/log/025_stuck_on_old_timeline_primary.log ==~_~===-=-===~_~==\n>>> ...\n>>> The system cannot find the path specified.\n>>> 2021-06-10 22:56:17.754 EDT [60c2d0cf.54c:1] LOG: archive command failed with exit code 1\n>>> 2021-06-10 22:56:17.754 EDT [60c2d0cf.54c:2] DETAIL: The failed archive command was: /usr/bin/perl \"/home/pgrunner/bf/root/HEAD/pgsql/src/test/recovery/t/cp_history_files\" \"pg_wal\\\\000000010000000000000001\" \"/home/pgrunner/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_025_stuck_on_old_timeline_primary_data/archives/000000010000000000000001\"\n\n> Wal file copying will not create a problem for us, but I noticed that\n> it is failing in copying the history files as well and that is\n> creating a problem.\n\nI think jacana uses msys[2?], so this likely indicates a problem\nin path sanitization for the archive command. Andrew, any advice?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Jun 2021 10:46:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n>> ==~_~===-=-===~_~== pgsql.build/src/bin/pg_verifybackup/tmp_check/log/003_corruption_primary.log ==~_~===-=-===~_~==\n>> ...\n>> 2021-06-08 16:17:41.706 CEST [51792:9] 003_corruption.pl LOG: received replication command: START_REPLICATION SLOT \"pg_basebackup_51792\" 0/B000000 TIMELINE 1\n>> 2021-06-08 16:17:41.706 CEST [51792:10] 003_corruption.pl STATEMENT: START_REPLICATION SLOT \"pg_basebackup_51792\" 0/B000000 TIMELINE 1\n>> (log ends here)\n\n> There seems like some hardware failure?\n\nconchuela has definitely evinced flakiness before. 
Not sure what's\nup with it, but I have no problem with writing off non-repeatable\nfailures from that machine. In any case, it's now passed half a\ndozen times in a row on HEAD, so I think we can say that it's okay\nwith this test. That leaves jacana, which I'm betting has a\nWindows portability issue with the new test.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Jun 2021 11:23:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Fri, Jun 11, 2021 at 10:46:45AM -0400, Tom Lane wrote:\n> I think jacana uses msys[2?], so this likely indicates a problem\n> in path sanitization for the archive command. Andrew, any advice?\n\nErr, something around TestLib::perl2host()?\n--\nMichael", "msg_date": "Sat, 12 Jun 2021 16:48:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "\nOn 6/12/21 3:48 AM, Michael Paquier wrote:\n> On Fri, Jun 11, 2021 at 10:46:45AM -0400, Tom Lane wrote:\n>> I think jacana uses msys[2?], so this likely indicates a problem\n>> in path sanitization for the archive command. Andrew, any advice?\n> Err, something around TestLib::perl2host()?\n\n\nI'm working on a fix for this. Yes it includes perl2host, but that's not\nenough :-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 12 Jun 2021 07:31:59 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "\nOn 6/12/21 7:31 AM, Andrew Dunstan wrote:\n> On 6/12/21 3:48 AM, Michael Paquier wrote:\n>> On Fri, Jun 11, 2021 at 10:46:45AM -0400, Tom Lane wrote:\n>>> I think jacana uses msys[2?], so this likely indicates a problem\n>>> in path sanitization for the archive command. 
Andrew, any advice?\n>> Err, something around TestLib::perl2host()?\n>\n> I'm working on a fix for this. Yes it includes perl2host, but that's not\n> enough :-)\n>\n>\n\nI have pushed a fix, tested on a replica of fairywren/drongo,\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 12 Jun 2021 09:05:57 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I have pushed a fix, tested on a replica of fairywren/drongo,\n\nThis bit seems a bit random:\n\n # WAL segment, this is enough to guarantee that the history file was\n # archived.\n my $archive_wait_query =\n- \"SELECT '$walfile_to_be_archived' <= last_archived_wal FROM pg_stat_archiver;\";\n+ \"SELECT coalesce('$walfile_to_be_archived' <= last_archived_wal, false) \" .\n+ \"FROM pg_stat_archiver\";\n $node_standby->poll_query_until('postgres', $archive_wait_query)\n or die \"Timed out while waiting for WAL segment to be archived\";\n my $last_archived_wal_file = $walfile_to_be_archived;\n\nI wonder whether that is a workaround for the poll_query_until bug\nI proposed to fix at [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2130215.1623450521%40sss.pgh.pa.us\n\n\n", "msg_date": "Sat, 12 Jun 2021 10:20:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "\nOn 6/12/21 10:20 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I have pushed a fix, tested on a replica of fairywren/drongo,\n> This bit seems a bit random:\n>\n> # WAL segment, this is enough to guarantee that the history file was\n> # archived.\n> my $archive_wait_query =\n> - \"SELECT '$walfile_to_be_archived' <= last_archived_wal FROM pg_stat_archiver;\";\n> + \"SELECT coalesce('$walfile_to_be_archived' <= last_archived_wal, false) \" .\n> + \"FROM pg_stat_archiver\";\n> $node_standby->poll_query_until('postgres', $archive_wait_query)\n> or die \"Timed out while waiting for WAL segment to be archived\";\n> my $last_archived_wal_file = $walfile_to_be_archived;\n>\n> I wonder whether that is a workaround for the poll_query_until bug\n> I proposed to fix at [1].\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/2130215.1623450521%40sss.pgh.pa.us\n\n\n\nNo, it's because I found it annoying and confusing that there was an\ninvisible result when last_archived_wal is null.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 12 Jun 2021 11:47:34 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 6/12/21 10:20 AM, Tom Lane wrote:\n>> I wonder whether that is a workaround for the poll_query_until bug\n>> I proposed to fix at [1].\n\n> No, it's because I found it annoying and confusing that there was an\n> invisible result when last_archived_wal is null.\n\nOK. But it makes me itch a bit that this one wait-for-wal-to-be-\nprocessed query looks different from all the other ones.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Jun 2021 13:07:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" 
}, { "msg_contents": "\nOn 6/12/21 1:07 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 6/12/21 10:20 AM, Tom Lane wrote:\n>>> I wonder whether that is a workaround for the poll_query_until bug\n>>> I proposed to fix at [1].\n>> No, it's because I found it annoying and confusing that there was an\n>> invisible result when last_archived_wal is null.\n> OK. But it makes me itch a bit that this one wait-for-wal-to-be-\n> processed query looks different from all the other ones.\n>\n> \t\t\t\n\n\nI'm happy to bring the other two queries that look like this into line\nwith this one if you like.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 12 Jun 2021 13:44:44 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 6/12/21 1:07 PM, Tom Lane wrote:\n>> OK. But it makes me itch a bit that this one wait-for-wal-to-be-\n>> processed query looks different from all the other ones.\n\n> I'm happy to bring the other two queries that look like this into line\n> with this one if you like.\n\nI see a lot more than two --- grepping for poll_query_until with\na test involving a LSN comparison finds a bunch. Are we sure that\nthere are only three in which the LSN could be null? How much\ndoes it really matter if it is?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Jun 2021 13:54:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "\nOn 6/12/21 1:54 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 6/12/21 1:07 PM, Tom Lane wrote:\n>>> OK. 
But it makes me itch a bit that this one wait-for-wal-to-be-\n>>> processed query looks different from all the other ones.\n>> I'm happy to bring the other two queries that look like this into line\n>> with this one if you like.\n> I see a lot more than two --- grepping for poll_query_until with\n> a test involving a LSN comparison finds a bunch. Are we sure that\n> there are only three in which the LSN could be null? \n\n\nWell, I'm counting places that specifically compare it with\npg_stat_archiver.last_archived_wal.\n\n\n\n> How much\n> does it really matter if it is?\n>\n> \t\t\t\n\n\nIt makes it harder to tell if there was any result at all when there's a\nfailure. If it bugs you that much I can revert just that line. Now that\nI have fixed the immediate issue it matters less. I'm not prepared to\nput in a lot of effort here, though.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 12 Jun 2021 17:29:09 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "\n\nOn 2021-06-10 01:09, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> Got it. I have now committed the patch to all branches, after adapting\n>> your changes just a little bit.\n>> Thanks to you and Kyotaro-san for all the time spent on this. What a slog!\n> \n> conchuela failed its first encounter with this test case:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2021-06-09%2021%3A12%3A25\n> \n> That machine has a certain, er, history of flakiness; so this may\n> not mean anything. 
Still, we'd better keep an eye out to see if\n> the test needs more stabilization.\n\nYes, the flakiness is caused by the very weird filesystem (HAMMERFS) \nthat has some weird garbage collection handling that sometimes fills up \nthe disk and then never recovers automatically.\n\nI have tried to put in the cleanup-utility for HAMMERFS in cron to run \non a schedule but it isn't 100% foolproof.\n\nSo I am going to upgrade to a newer version of DragonflyBSD in the near \nfuture.\n\n/Mikael\n\n\n", "msg_date": "Sun, 13 Jun 2021 19:05:10 +0200", "msg_from": "Mikael Kjellström <mikael.kjellstrom@mksoft.nu>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Sat, Jun 12, 2021 at 10:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > I have pushed a fix, tested on a replica of fairywren/drongo,\n>\n> This bit seems a bit random:\n>\n> # WAL segment, this is enough to guarantee that the history file was\n> # archived.\n> my $archive_wait_query =\n> - \"SELECT '$walfile_to_be_archived' <= last_archived_wal FROM pg_stat_archiver;\";\n> + \"SELECT coalesce('$walfile_to_be_archived' <= last_archived_wal, false) \" .\n> + \"FROM pg_stat_archiver\";\n> $node_standby->poll_query_until('postgres', $archive_wait_query)\n> or die \"Timed out while waiting for WAL segment to be archived\";\n> my $last_archived_wal_file = $walfile_to_be_archived;\n>\n> I wonder whether that is a workaround for the poll_query_until bug\n> I proposed to fix at [1].\n\nI found that a bit random too, but it wasn't the only part of the\npatch I found a bit random. Like, what can this possibly be doing?\n\n+if ($^O eq 'msys')\n+{\n+ $perlbin = TestLib::perl2host(dirname($^X)) . '\\\\' . 
basename($^X);\n+}\n\nThe idea here is apparently that on msys, the directory name that is\npart of $^X needs to be passed through perl2host, but the file name\nthat is part of the same $^X needs to NOT be passed through perl2host.\nIs $^X really that broken? If so, I think some comments are in order.\n\n+local $ENV{PERL_BADLANG}=0;\n\nSimilarly here. There's not a single other reference to PERL_BADLANG\nin the repository, so if we need this one here, there should be a\ncomment explaining why this is different from all the places where we\ndon't need it.\n\nOn those occasions when I commit TAP test cases, I do try to think\nabout whether they are going to be portable, but I find these kinds of\nchanges indistinguishable from magic.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Jun 2021 11:52:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "\nOn 6/14/21 11:52 AM, Robert Haas wrote:\n> On Sat, Jun 12, 2021 at 10:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> I have pushed a fix, tested on a replica of fairywren/drongo,\n>> This bit seems a bit random:\n>>\n>> # WAL segment, this is enough to guarantee that the history file was\n>> # archived.\n>> my $archive_wait_query =\n>> - \"SELECT '$walfile_to_be_archived' <= last_archived_wal FROM pg_stat_archiver;\";\n>> + \"SELECT coalesce('$walfile_to_be_archived' <= last_archived_wal, false) \" .\n>> + \"FROM pg_stat_archiver\";\n>> $node_standby->poll_query_until('postgres', $archive_wait_query)\n>> or die \"Timed out while waiting for WAL segment to be archived\";\n>> my $last_archived_wal_file = $walfile_to_be_archived;\n>>\n>> I wonder whether that is a workaround for the poll_query_until bug\n>> I proposed to fix at [1].\n\n\n\nThis has been reverted.\n\n\n> I found that a bit random too, but it wasn't the only part of the\n> 
patch I found a bit random. Like, what can this possibly be doing?\n>\n> +if ($^O eq 'msys')\n> +{\n> + $perlbin = TestLib::perl2host(dirname($^X)) . '\\\\' . basename($^X);\n> +}\n>\n> The idea here is apparently that on msys, the directory name that is\n> part of $^X needs to be passed through perl2host, but the file name\n> that is part of the same $^X needs to NOT be passed through perl2host.\n> Is $^X really that broken? If so, I think some comments are in order.\n\n\n$^X is not at all broken.\n\n\nThe explanation here is pretty simple - the argument to perl2host is\nmeant to be a directory. If we're going to accommodate plain files then\nwe have some more work to do in TestLib.\n\n\n> +local $ENV{PERL_BADLANG}=0;\n>\n> Similarly here. There's not a single other reference to PERL_BADLANG\n> in the repository, so if we need this one here, there should be a\n> comment explaining why this is different from all the places where we\n> don't need it.\n\n\nHere's why this is different: this is the only place that we invoke the\nmsys perl in this way (i.e. from a non-msys aware environment - the\nbinaries we build are not msys-aware). We need to do that if for no\nother reason than that it might well be the only perl available. Doing\nso makes it complain loudly about missing locale info. Setting this\nvariable makes it shut up. I can add a comment on that if you like.\n\n\n> On those occasions when I commit TAP test cases, I do try to think\n> about whether they are going to be portable, but I find these kinds of\n> changes indistinguishable from magic.\n\n\n\nPart of the trouble is that I've been living and breathing some of these\nissues so much recently that I forget that what might be fairly obvious\nto me isn't to others. 
I assure you there is not the faintest touch of\npixy dust involved.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 12:56:46 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Mon, Jun 14, 2021 at 12:56 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> $^X is not at all broken.\n>\n> The explanation here is pretty simple - the argument to perl2host is\n> meant to be a directory. If we're going to accommodate plain files then\n> we have some more work to do in TestLib.\n\nThis explanation seems to contradict the documentation in TestLib.pm,\nwhich makes no mention of any such restriction.\n\n> > +local $ENV{PERL_BADLANG}=0;\n> >\n> > Similarly here. There's not a single other reference to PERL_BADLANG\n> > in the repository, so if we need this one here, there should be a\n> > comment explaining why this is different from all the places where we\n> > don't need it.\n>\n> Here's why this is different: this is the only place that we invoke the\n> msys perl in this way (i.e. from a non-msys aware environment - the\n> binaries we build are not msys-aware). We need to do that if for no\n> other reason than that it might well be the only perl available. Doing\n> so makes it complain loudly about missing locale info. Setting this\n> variable makes it shut up. I can add a comment on that if you like.\n\nYes, please, but perhaps you'd like to post patches for discussion\nfirst instead of committing directly.\n\n> Part of the trouble is that I've been living and breathing some of these\n> issues so much recently that I forget that what might be fairly obvious\n> to me isn't to others. 
I assure you there is not the faintest touch of\n> pixy dust involved.\n\nEvery pixie with whom I've spoken today says otherwise!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Jun 2021 13:11:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "\nOn 6/14/21 1:11 PM, Robert Haas wrote:\n> On Mon, Jun 14, 2021 at 12:56 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> $^X is not at all broken.\n>>\n>> The explanation here is pretty simple - the argument to perl2host is\n>> meant to be a directory. If we're going to accommodate plain files then\n>> we have some more work to do in TestLib.\n> This explanation seems to contradict the documentation in TestLib.pm,\n> which makes no mention of any such restriction.\n\n\nHere's a snippet:\n\n\n sub perl2host\n {\n my ($subject) = @_;\n ...\n if (chdir $subject)\n \n\nLast time I looked you can't chdir to anything except a directory.\n\n\n>\n>>> +local $ENV{PERL_BADLANG}=0;\n>>>\n>>> Similarly here. There's not a single other reference to PERL_BADLANG\n>>> in the repository, so if we need this one here, there should be a\n>>> comment explaining why this is different from all the places where we\n>>> don't need it.\n>> Here's why this is different: this is the only place that we invoke the\n>> msys perl in this way (i.e. from a non-msys aware environment - the\n>> binaries we build are not msys-aware). We need to do that if for no\n>> other reason than that it might well be the only perl available. Doing\n>> so makes it complain loudly about missing locale info. Setting this\n>> variable makes it shut up. I can add a comment on that if you like.\n> Yes, please, but perhaps you'd like to post patches for discussion\n> first instead of committing directly.\n\n\nI was trying to get the buildfarm green again. 
There have been plenty of\ntimes when small patches for such fixes have been committed\ndirectly. And that's the only circumstance when I do.\n\n\n\ncheers\n\n\nandrew\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 13:50:26 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Mon, Jun 14, 2021 at 1:50 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> Here's a snippet:\n>\n> sub perl2host\n> {\n> my ($subject) = @_;\n> ...\n> if (chdir $subject)\n>\n> Last time I looked you can't chdir to anything except a directory.\n\nOK, but like I said, you can't tell that from the documentation. The\ndocumentation says: \"Translate a virtual file name to a host file\nname. Currently, this is a no-op except for the case of Perl=msys and\nhost=mingw32. The subject need not exist, but its parent or\ngrandparent directory must exist unless cygpath is available.\" If you\nlook just at that, there's nothing that would lead you to believe that\nit has to be a directory name.\n\n> I was trying to get the buildfarm green again. There have been plenty of\n> times when small patches for such fixes have been committed\n> directly. And that's the only circumstance when I do.\n\nI wasn't intending to criticize your work on this. I really appreciate\nit, in fact, as I also said to you off-list. I do think that there\nwere some small things in those patches where a little bit of quick\ndiscussion might have been useful: e.g. should the archive_command\nchange have gone in in the first place? Do we need any comments to\nexplain the fixes? But it's not like it's a big deal either. 
I'm\ncertainly not disagreeing with the goodness of making the buildfarm\ngreen as expediently as possible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Jun 2021 15:19:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "\nOn 6/14/21 1:50 PM, Andrew Dunstan wrote:\n> On 6/14/21 1:11 PM, Robert Haas wrote:\n>> On Mon, Jun 14, 2021 at 12:56 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> $^X is not at all broken.\n>>>\n>>> The explanation here is pretty simple - the argument to perl2host is\n>>> meant to be a directory. If we're going to accommodate plain files then\n>>> we have some more work to do in TestLib.\n>> This explanation seems to contradict the documentation in TestLib.pm,\n>> which makes no mention of any such restriction.\n>\n> Here's a snippet:\n>\n>\n> sub perl2host\n> {\n> my ($subject) = @_;\n> ...\n> if (chdir $subject)\n> \n>\n> Last time I looked you can't chdir to anything except a directory.\n\n\n\nActually, I take it back, it does work for a file. I'll change it. I\nprobably did this when something else wasn't working.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 15:32:01 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "\nOn 6/14/21 3:32 PM, Andrew Dunstan wrote:\n> On 6/14/21 1:50 PM, Andrew Dunstan wrote:\n>> On 6/14/21 1:11 PM, Robert Haas wrote:\n>>> On Mon, Jun 14, 2021 at 12:56 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>> $^X is not at all broken.\n>>>>\n>>>> The explanation here is pretty simple - the argument to perl2host is\n>>>> meant to be a directory. 
If we're going to accomodate plain files then\n>>>> we have some more work to do in TestLib.\n>>> This explanation seems to contradict the documentation in TestLib.pm,\n>>> which makes no mention of any such restriction.\n>> Heres a snippet:\n>>\n>>\n>> sub perl2host\n>> {\n>> my ($subject) = @_;\n>> ...\n>> if (chdir $subject)\n>> \n>>\n>> Last time I looked you can't chdir to anything except a directory.\n>\n>\n> Actually, I take it back, it does work for a file. I'll change it. I\n> probably did this when something else wasn't working.\n\n\n\n\nSo, will you feel happier with this applied? I haven't tested it yet but\nI'm confident it will work.\n\n\ndiff --git a/src/test/recovery/t/025_stuck_on_old_timeline.pl b/src/test/recovery/t/025_stuck_on_old_timeline.pl\nindex e4e58cb8ab..3e19bc4c50 100644\n--- a/src/test/recovery/t/025_stuck_on_old_timeline.pl\n+++ b/src/test/recovery/t/025_stuck_on_old_timeline.pl\n@@ -24,11 +24,11 @@ my $node_primary = get_new_node('primary');\n # the timeline history file reaches the archive but before any of the WAL files\n # get there.\n $node_primary->init(allows_streaming => 1, has_archiving => 1);\n-my $perlbin = $^X;\n-if ($^O eq 'msys')\n-{\n- $perlbin = TestLib::perl2host(dirname($^X)) . '\\\\' . basename($^X);\n-}\n+\n+# Note: consistent use of forward slashes here avoids any escaping problems\n+# that arise from use of backslashes. 
That means we need to double-quote all\n+# the paths in the archive_command\n+my $perlbin = TestLib::perl2host($^X);\n $perlbin =~ s!\\\\!/!g if $TestLib::windows_os;\n my $archivedir_primary = $node_primary->archive_dir;\n $archivedir_primary =~ s!\\\\!/!g if $TestLib::windows_os;\n@@ -36,6 +36,8 @@ $node_primary->append_conf('postgresql.conf', qq(\n archive_command = '\"$perlbin\" \"$FindBin::RealBin/cp_history_files\" \"%p\" \"$archivedir_primary/%f\"'\n wal_keep_size=128MB\n ));\n+# make sure that Msys perl doesn't complain about difficulty in setting locale\n+# when called this way.\n local $ENV{PERL_BADLANG}=0;\n $node_primary->start;\n \n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 14 Jun 2021 15:46:57 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "At Fri, 11 Jun 2021 15:18:03 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Fri, Jun 11, 2021 at 03:32:28PM +0900, Kyotaro Horiguchi wrote:\n> I disagree, cp is probably the worst command that can be used for this purpose.\n> On top on the previous problems already mentioned, you also have the fact that\n> the copy isn't atomic. It means that any concurrent restore_command (or\n> anything that would consume the archived files) will happily process a half\n> copied WAL file, and in case of any error during the copy you end up with a\n> file for which you don't know if it contains valid data or not. I don't see\n> any case where you would actually want to use that, unless maybe if you want to\n> benchmark how long it takes before you lose some data.\n\nActually there's large room for losing data with cp. Yes, we would\nneed additional redundancy of storage and periodical integrity\ninspection of the storage and archives on maybe need copies at the\nother sites on the other side of the Earth. But they are too-much for\nsome kind of users. 
They have the right and responsibility to decide\nhow durable/reliable their archive needs to be. (Putting aside some\nhardware/geological requirements :p) If we mandate some\ncharacteristics on the archive_command, we should take them into core.\nI remember I saw some discussions on archive command on this line but\nI think it had ended at the point something like that \"we cannot\ndesign one-fits-all interface conforming to the requirements\" or\nsomething (sorry, I don't remember in its detail..)\n\n> I don't know, I'm assuming that barman also provides one, such as wal-e and\n> wal-g (assuming that the distant providers do their part of the job correctly).\n\nWell, barman used rsync/ssh in its documentation in the past and now\nlooks like providing barman-wal-archive so it seems that you're right\non that point. So, do we recommend them in our documentation? (I'm\nnot sure they actually conform to the requirement, though..)\n\n> Maybe there are other tools too. But as long as we don't document what exactly\n> are the requirements, it's not really a surprise that most people don't\n> implement them.\n\nI strongly agree that we should describe the requirements.\n\nMy point is that if all of them are really mandatory, it is mandatory\nfor us to officially provide or at least recommend the minimal\nimplementation(s) that cover all of them. 
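A sketch of what such a minimal implementation would at least have to cover — staging the copy under a temporary name, flushing it to disk, publishing it with an atomic rename, and refusing to overwrite a same-named file with different content — might look like this. The helper name `archive_one` and the reliance on GNU coreutils' per-file `sync` are assumptions for illustration, not a recommended production archive_command:

```shell
# archive_one SRC DIR: sketch of a safer local-copy archiving helper.
# Stages under a temporary name, flushes to disk, then publishes with an
# atomic rename so a concurrent reader never sees a partial file.
archive_one() {
    src=$1; dir=$2
    base=$(basename "$src")
    tmp="$dir/.$base.tmp"
    if [ -e "$dir/$base" ]; then
        # Re-archiving identical content is harmless; different content
        # for the same name indicates a real problem, so refuse it.
        cmp -s "$src" "$dir/$base" && return 0
        echo "conflicting archive entry for $base" >&2
        return 1
    fi
    cp "$src" "$tmp" || return 1
    sync "$tmp" 2>/dev/null || sync   # per-file sync needs GNU coreutils >= 8.24
    mv "$tmp" "$dir/$base" || return 1
    sync                              # persist the directory entry as well
}
```

Even this leaves out points raised elsewhere in the thread (timeline history checks, platform-specific directory fsync semantics), which is part of the argument that a documented one-liner cannot be complete.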
If we recommended some external\ntools, that would mean that we ensure that the tools qualify the\nrequirements.\n\nIf we write an example with a pseudo tool name, requiring some\ncharacteristics on the tool, then not telling about the minimal tools,\nI think that it is equivalent to that we are inhibiting certain users\nfrom using archive_command even if they really don't want such level\nof durability.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 15 Jun 2021 10:20:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "At Fri, 11 Jun 2021 16:08:33 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Jun 11, 2021 at 03:32:28PM +0900, Kyotaro Horiguchi wrote:\n> > I think cp can be an example as far as we explain the limitations. (On\n> > the other hand \"test !-f\" cannot since it actually prevents server\n> > from working correctly.)\n> \n> Disagreed. I think that we should not try to change this area until\n> we can document a reliable solution, and a simple \"cp\" is not that.\n\nIsn't removing cp from the documentation a change in this area? I\nbasically agree to not to change anything but the current example\n\"test ! -f <fn> && cp ..\" and relevant description has been known to\nbe problematic in a certain situation.\n\n- Do we leave it alone igonring the possible problem?\n\n- Just add a description about \"the problem\"?\n\n- Just remove \"test ! -f\" and the relevant description?\n\n- Remove \"test ! -f\" and rewrite the relevant description?\n\n(- or not remove \"test ! -f\" and rewrite the relevant description?)\n\n- Write the full (known) requirements and use a pseudo tool-name in\n the example?\n\n - provide a minimal implement of the command?\n\n - recommend some external tools (that we can guarantee that they\n comform the requriements)?\n\n - not recommend any tools?\n\n> Hmm. 
A simple command that could be used as reference is for example\n> \"dd\" that flushes the file by itself, or we could just revisit the\n> discussions about having a pg_copy command, or we could document a\n> small utility in perl that does the job.\n\nI think we should do that if pg_copy comforms the mandatory\nrequirements but maybe it's in the future. Showing the minimal\nimplement in perl looks good.\n\nHowever, my main point here is \"what should we do for now?\". Not\nabout an ideal solution.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 15 Jun 2021 10:36:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Tue, Jun 15, 2021 at 10:20:37AM +0900, Kyotaro Horiguchi wrote:\n> \n> Actually there's large room for losing data with cp. Yes, we would\n> need additional redundancy of storage and periodical integrity\n> inspection of the storage and archives on maybe need copies at the\n> other sites on the other side of the Earth. But they are too-much for\n> some kind of users. They have the right and responsibility to decide\n> how durable/reliable their archive needs to be. (Putting aside some\n> hardware/geological requirements :p)\n\nNote that most of those considerations are orthogonal to what a proper\narchive_command should be responsible for.\n\nYes users are responsible to decide they want valid and durable backup or\nnot, but we should assume a sensible default behavior, which is a valid and\ndurable archive_command. We don't document a default fsync = off with later\nrecommendation explaining why you shouldn't do that, and I think it should be\nthe same for the archive_command. 
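For reference, the "dd" idea quoted above might be spelled like this in postgresql.conf — `conv=fsync` is GNU dd, and the destination directory is a placeholder; note it still does nothing about partial files or overwrites, which is the gap being argued about here:

```
# GNU dd physically writes the output file's data to disk before exiting
# when conv=fsync is given; %p and %f are substituted by the server.
archive_command = 'dd if=%p of=/mnt/server/archivedir/%f conv=fsync'
```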
The problem with the current documentation\nis that many users will just blindly copy/paste whatever is in the\ndocumentation without reading any further.\n\nAs an example, a few hours ago some French user on the French bulletin board\nsaid that he fixed his \"postmaster.pid already exists\" error with a\npg_resetxlog -f, referring to some article explaining how to start postgres in\ncase of \"PANIC: could not locate a valid checkpoint record\". Arguably\nthat article didn't bother to document the implications of executing\npg_resetxlog, but given that the user's original problem had literally nothing to\ndo with what was documented, I really doubt that it would have changed\nanything.\n\n> If we mandate some\n> characteristics on the archive_command, we should take them into core.\n\nI agree.\n\n> I remember I saw some discussions on archive command on this line but\n> I think it had ended at the point something like that \"we cannot\n> design one-fits-all interface conforming to the requirements\" or\n> something (sorry, I don't remember in its detail..)\n\nI also agree, but this problem is solved by making archive_command\ncustomisable. Providing something that can reliably work in some general and\nlimited cases would be a huge improvement.\n\n> Well, barman used rsync/ssh in its documentation in the past and now\n> looks like providing barman-wal-archive so it seems that you're right\n> on that point. So, do we recommend them in our documentation? (I'm\n> not sure they actually conform to the requirement, though..)\n\nWe could maybe bless some third party backup solutions, but this will probably\nlead to a lot more discussions, so it's better to discuss that in a different\nthread. 
Note that I don't have a deep knowledge of any of those tools so I\ndon't have an opinion.\n\n> If we write an example with a pseudo tool name, requiring some\n> characteristics on the tool, then not telling about the minimal tools,\n> I think that it is equivalent to that we are inhibiting certain users\n> from using archive_command even if they really don't want such level\n> of durability.\n\nI already saw customers complaining about losing backups because their\narchive_command didn't ensure that the copy was durable. Some users may not\ncare about losing their backups in such case, but I really think that the\nmajority of users expect a backup to be valid, durable and everything without\neven thinking that it may not be the case. It should be the default behavior,\nnot optional.\n\n\n", "msg_date": "Tue, 15 Jun 2021 10:48:44 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "At Fri, 11 Jun 2021 10:46:45 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> I think jacana uses msys[2?], so this likely indicates a problem\n> in path sanitization for the archive command. Andrew, any advice?\n\nThanks for fixing it.\n\n# I haven't still succeed to run TAP tests on MSYS2 environment. I\n# cannot install IPC::Run for msys perl..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 15 Jun 2021 15:16:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On 6/15/21 2:16 AM, Kyotaro Horiguchi wrote:\n> At Fri, 11 Jun 2021 10:46:45 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> I think jacana uses msys[2?], so this likely indicates a problem\n>> in path sanitization for the archive command. Andrew, any advice?\n> Thanks for fixing it.\n>\n> # I haven't still succeed to run TAP tests on MSYS2 environment. 
I\n> # cannot install IPC::Run for msys perl..\n>\n> regards.\n>\n\n\nUnpack the attached somewhere and point your PERL5LIB at it. That's all\nI do.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 15 Jun 2021 07:54:49 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Mon, Jun 14, 2021 at 3:47 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> So, will you feel happier with this applied? I haven't tested it yet but\n> I'm confident it will work.\n\nI'm not all that unhappy now, but yeah, that looks like an improvement\nto me. I'm still afraid that I will keep writing tests that blow up on\nWindows but that's a bigger problem than we can hope to fix on this\nthread, and I do think this discussion has helped.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Jun 2021 08:19:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "Greetings,\n\n* Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> At Fri, 11 Jun 2021 16:08:33 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > On Fri, Jun 11, 2021 at 03:32:28PM +0900, Kyotaro Horiguchi wrote:\n> > > I think cp can be an example as far as we explain the limitations. (On\n> > > the other hand \"test !-f\" cannot since it actually prevents server\n> > > from working correctly.)\n> > \n> > Disagreed. I think that we should not try to change this area until\n> > we can document a reliable solution, and a simple \"cp\" is not that.\n> \n> Isn't removing cp from the documentation a change in this area? I\n> basically agree to not to change anything but the current example\n> \"test ! 
-f <fn> && cp ..\" and relevant description has been known to\n> be problematic in a certain situation.\n\n[...]\n\n> - Write the full (known) requirements and use a pseudo tool-name in\n> the example?\n\nI'm generally in favor of just using a pseudo tool-name and then perhaps\nproviding a link to a new place on .Org where people can ask to have\ntheir PG backup solution listed, or something along those lines.\n\n> - provide a minimal implement of the command?\n\nHaving been down this road for a rather long time, I can't accept this\nas a serious suggestion. No, not even with Perl. Been there, done\nthat, not going back.\n\n> - recommend some external tools (that we can guarantee that they\n> comform the requriements)?\n\nThe requirements are things which are learned over years and changes\nover time. Trying to document them and keep up with them would be a\npretty serious project all on its own. There are external projects who\nspend serious time and energy doing their best to provide the tooling\nneeded here and we should be promoting those, not trying to pretend like\nthis is a simple thing which anyone could write a short perl script to\naccomplish.\n\n> - not recommend any tools?\n\nThis is the approach that has been tried and it's, objectively, failed\nmiserably. Our users are ending up with invalid and unusable backups,\ncorrupted WAL segments, inability to use PITR, and various other issues\nbecause we've been trying to pretend that this isn't a hard problem. We\nreally need to stop that and accept that it's hard and promote the tools\nwhich have been explicitly written to address that hard problem.\n\n> > Hmm. 
A simple command that could be used as reference is for example\n> > \"dd\" that flushes the file by itself, or we could just revisit the\n> > discussions about having a pg_copy command, or we could document a\n> > small utility in perl that does the job.\n> \n> I think we should do that if pg_copy comforms the mandatory\n> requirements but maybe it's in the future. Showing the minimal\n> implement in perl looks good.\n\nAlready tried doing it in perl. No, it's not simple and it's also\nentirely vaporware today and implies that we're going to develop this\ntool, improve it in the future as we realize it needs to be improved,\nand maintain it as part of core forever. If we want to actually adopt\nand pull in a backup tool to be part of core then we should talk about\nthings which actually exist, such as the various existing projects that\nhave been written to specifically work to address all the requirements\nwhich are understood today, not say \"well, we can just write a simple\nperl script to do it\" because it's not actually that simple.\n\nProviding yet another half solution would be doubling-down on the failed\napproach to document a \"simple\" solution and would be a disservice to\nour users.\n\nThanks,\n\nStephen", "msg_date": "Tue, 15 Jun 2021 11:33:10 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Tue, Jun 15, 2021 at 11:33:10AM -0400, Stephen Frost wrote:\n> \n> The requirements are things which are learned over years and changes\n> over time. Trying to document them and keep up with them would be a\n> pretty serious project all on its own. 
There are external projects who\n> spend serious time and energy doing their best to provide the tooling\n> needed here and we should be promoting those, not trying to pretend like\n> this is a simple thing which anyone could write a short perl script to\n> accomplish.\n\nThe fact that this is such a complex problem is the very reason why we should\nspend a lot of energy documenting the various requirements. Otherwise, how\ncould anyone implement a valid program for that and how could anyone validate\nthat a solution claiming to do its job actually does its job?\n\n> Already tried doing it in perl. No, it's not simple and it's also\n> entirely vaporware today and implies that we're going to develop this\n> tool, improve it in the future as we realize it needs to be improved,\n> and maintain it as part of core forever. If we want to actually adopt\n> and pull in a backup tool to be part of core then we should talk about\n> things which actually exist, such as the various existing projects that\n> have been written to specifically work to address all the requirements\n> which are understood today, not say \"well, we can just write a simple\n> perl script to do it\" because it's not actually that simple.\n\nAdopting a full backup solution seems like a bit extreme. On the other hand,\nhaving some real core implementation of an archive_command for the most general\nuse cases (local copy, distant copy over ssh...) could make sense. This would\nremove that burden for some, probably most, of the 3rd party backup tools, and\nwould also ensure that the various requirements are properly documented since\nit would be the implementation reference.\n\n\n", "msg_date": "Wed, 16 Jun 2021 00:29:03 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" 
}, { "msg_contents": "Greetings,\n\n* Julien Rouhaud (rjuju123@gmail.com) wrote:\n> On Tue, Jun 15, 2021 at 11:33:10AM -0400, Stephen Frost wrote:\n> > The requirements are things which are learned over years and changes\n> > over time. Trying to document them and keep up with them would be a\n> > pretty serious project all on its own. There are external projects who\n> > spend serious time and energy doing their best to provide the tooling\n> > needed here and we should be promoting those, not trying to pretend like\n> > this is a simple thing which anyone could write a short perl script to\n> > accomplish.\n> \n> The fact that this is such a complex problem is the very reason why we should\n> spend a lot of energy documenting the various requirements. Otherwise, how\n> could anyone implement a valid program for that and how could anyone validate\n> that a solution claiming to do its job actually does its job?\n\nReading the code.\n\n> > Already tried doing it in perl. No, it's not simple and it's also\n> > entirely vaporware today and implies that we're going to develop this\n> > tool, improve it in the future as we realize it needs to be improved,\n> > and maintain it as part of core forever. If we want to actually adopt\n> > and pull in a backup tool to be part of core then we should talk about\n> > things which actually exist, such as the various existing projects that\n> > have been written to specifically work to address all the requirements\n> > which are understood today, not say \"well, we can just write a simple\n> > perl script to do it\" because it's not actually that simple.\n> \n> Adopting a full backup solution seems like a bit extreme. On the other hand,\n> having some real core implementation of an archive_command for the most general\n> use cases (local copy, distant copy over ssh...) could make sense. 
This would\n> remove that burden for some, probably most, of the 3rd party backup tools, and\n> would also ensure that the various requirements are properly documented since\n> it would be the implementation reference.\n\nHaving a database platform that hasn't got a full backup solution is a\npretty awkward position to be in.\n\nI'd like to see something a bit more specific than handwaving about how\ncore could provide something in this area which would remove the burden\nfrom other tools. Would also be good to know who is going to write that\nand maintain it.\n\nThanks,\n\nStephen", "msg_date": "Tue, 15 Jun 2021 14:28:04 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Tue, Jun 15, 2021 at 02:28:04PM -0400, Stephen Frost wrote:\n> \n> * Julien Rouhaud (rjuju123@gmail.com) wrote:\n> > On Tue, Jun 15, 2021 at 11:33:10AM -0400, Stephen Frost wrote:\n> > \n> > The fact that this is such a complex problem is the very reason why we should\n> > spend a lot of energy documenting the various requirements. Otherwise, how\n> > could anyone implement a valid program for that and how could anyone validate\n> > that a solution claiming to do its job actually does its job?\n> \n> Reading the code.\n\nOh, if it's as simple as that then surely documenting the various requirements\nwon't be an issue.\n\n\n", "msg_date": "Wed, 16 Jun 2021 09:11:22 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" 
}, { "msg_contents": "Greetings,\n\nOn Tue, Jun 15, 2021 at 21:11 Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Tue, Jun 15, 2021 at 02:28:04PM -0400, Stephen Frost wrote:\n> >\n> > * Julien Rouhaud (rjuju123@gmail.com) wrote:\n> > > On Tue, Jun 15, 2021 at 11:33:10AM -0400, Stephen Frost wrote:\n> > >\n> > > The fact that this is such a complex problem is the very reason why we\n> should\n> > > spend a lot of energy documenting the various requirements.\n> Otherwise, how\n> > > could anyone implement a valid program for that and how could anyone\n> validate\n> > > that a solution claiming to do its job actually does its job?\n> >\n> > Reading the code.\n>\n> Oh, if it's as simple as that then surely documenting the various\n> requirements\n> won't be an issue.\n\n\nAs I suggested previously- this is similar to the hooks that we provide. We\ndon’t extensively document them because if you’re writing an extension\nwhich uses a hook, you’re going to be (or should be..) reading the code too.\n\nConsider that, really, an archive command should refuse to allow archiving\nof WAL on a timeline which doesn’t have a corresponding history file in the\narchive for that timeline (excluding timeline 1). Also, a backup tool\nshould compare the result of pg_start_backup to what’s in the control file,\nusing a fresh read, after start backup returns to make sure that the\nstorage is sane and not, say, cache’ing pages independently (such as might\nhappen with a separate NFS mount..). 
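The timeline guard described in that sentence could be sketched roughly as follows — the name `check_timeline`, the 24-hex-digit segment-name parsing, and the flat archive layout are all assumptions for illustration:

```shell
# check_timeline SEG DIR: refuse to archive a WAL segment whose timeline
# has no corresponding .history file already present in the archive.
# Plain WAL segment names are 24 hex digits; the first 8 are the timeline.
check_timeline() {
    seg=$1; dir=$2
    case $seg in
        *.history|*.backup) return 0 ;;   # not a plain segment, allow
    esac
    tli=$(printf '%s' "$seg" | cut -c1-8)
    # Timeline 1 never has a history file.
    [ "$tli" = "00000001" ] && return 0
    if [ ! -e "$dir/$tli.history" ]; then
        echo "refusing $seg: $dir/$tli.history not archived yet" >&2
        return 1
    fi
}
```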
Oh, and if a replica is involved, a\ncheck should be done to see if the replica has changed timelines and an\nappropriate message thrown if that happens complaining that the backup was\naborted due to the promotion of the replica…\n\nTo be clear- these aren’t checks that pgbackrest has today and I’m not\ntrying to make it out as if pgbackrest is the only solution and the only\ntool that “does everything and is correct” because we aren’t there yet and\nI’m not sure we ever will be “all correct” or “done”.\n\nThese, however, are ones we have planned to add because of things we’ve\nseen and thought of, most of them in just the past few months.\n\nThanks,\n\nStephen\n\n>\n\nGreetings,On Tue, Jun 15, 2021 at 21:11 Julien Rouhaud <rjuju123@gmail.com> wrote:On Tue, Jun 15, 2021 at 02:28:04PM -0400, Stephen Frost wrote:\n> \n> * Julien Rouhaud (rjuju123@gmail.com) wrote:\n> > On Tue, Jun 15, 2021 at 11:33:10AM -0400, Stephen Frost wrote:\n> > \n> > The fact that this is such a complex problem is the very reason why we should\n> > spend a lot of energy documenting the various requirements.  Otherwise, how\n> > could anyone implement a valid program for that and how could anyone validate\n> > that a solution claiming to do its job actually does its job?\n> \n> Reading the code.\n\nOh, if it's as simple as that then surely documenting the various requirements\nwon't be an issue.As I suggested previously- this is similar to the hooks that we provide. We don’t extensively document them because if you’re writing an extension which uses a hook, you’re going to be (or should be..) reading the code too.Consider that, really, an archive command should refuse to allow archiving of WAL on a timeline which doesn’t have a corresponding history file in the archive for that timeline (excluding timeline 1). 
Also, a backup tool should compare the result of pg_start_backup to what’s in the control file, using a fresh read, after start backup returns to make sure that the storage is sane and not, say, cache’ing pages independently (such as might happen with a separate NFS mount..).  Oh, and if a replica is involved, a check should be done to see if the replica has changed timelines and an appropriate message thrown if that happens complaining that the backup was aborted due to the promotion of the replica…To be clear- these aren’t checks that pgbackrest has today and I’m not trying to make it out as if pgbackrest is the only solution and the only tool that “does everything and is correct” because we aren’t there yet and I’m not sure we ever will be “all correct” or “done”.These, however, are ones we have planned to add because of things we’ve seen and thought of, most of them in just the past few months.Thanks,Stephen", "msg_date": "Tue, 15 Jun 2021 23:00:57 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "Thanks for the opinions.\n\nAt Tue, 15 Jun 2021 11:33:10 -0400, Stephen Frost <sfrost@snowman.net> wrote in \n> Greetings,\n> \n> * Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> > At Fri, 11 Jun 2021 16:08:33 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > > On Fri, Jun 11, 2021 at 03:32:28PM +0900, Kyotaro Horiguchi wrote:\n> > > > I think cp can be an example as far as we explain the limitations. (On\n> > > > the other hand \"test !-f\" cannot since it actually prevents server\n> > > > from working correctly.)\n> > > \n> > > Disagreed. I think that we should not try to change this area until\n> > > we can document a reliable solution, and a simple \"cp\" is not that.\n> > \n> > Isn't removing cp from the documentation a change in this area? I\n> > basically agree to not to change anything but the current example\n> > \"test ! 
-f <fn> && cp ..\" and relevant description has been known to\n> > be problematic in a certain situation.\n> \n> [...]\n> \n> > - Write the full (known) requirements and use a pseudo tool-name in\n> > the example?\n> \n> I'm generally in favor of just using a pseudo tool-name and then perhaps\n> providing a link to a new place on .Org where people can ask to have\n> their PG backup solution listed, or something along those lines.\n\nLooks fine.\n\n> > - provide a minimal implement of the command?\n> \n> Having been down this road for a rather long time, I can't accept this\n> as a serious suggestion. No, not even with Perl. Been there, done\n> that, not going back.\n>\n> > - recommend some external tools (that we can guarantee that they\n> > comform the requriements)?\n> \n> The requirements are things which are learned over years and changes\n> over time. Trying to document them and keep up with them would be a\n> pretty serious project all on its own. There are external projects who\n> spend serious time and energy doing their best to provide the tooling\n> needed here and we should be promoting those, not trying to pretend like\n> this is a simple thing which anyone could write a short perl script to\n> accomplish.\n\nI agree that no simple solution could be really perfect. The reason I\nthink that a simple cp can be a candidate of the example might be\nbased on the assumption that anyone who is going to build a database\nsystem ought to know their requirements including the\ndurability/reliability of archives/backups and the limitaions of\nadopted methods/technologies. However, as Julien mentioned, if\nthere's actually a problem that relatively.. 
ahem, ill-advised users\n(sorry in advance if it's rude) use 'cp' only because it is shown in\nthe example, without much thought, and inadvertently lose archives, it\nmight be better that we don't suggest a concrete command\nfor archive_command.\n\n> > - not recommend any tools?\n> \n> This is the approach that has been tried and it's, objectively, failed\n> miserably. Our users are ending up with invalid and unusable backups,\n> corrupted WAL segments, inability to use PITR, and various other issues\n> because we've been trying to pretend that this isn't a hard problem. We\n> really need to stop that and accept that it's hard and promote the tools\n> which have been explicitly written to address that hard problem.\n\nI can sympathize with that, but is there any difference with system backups?\nOne can just copy $HOME to another directory in the same drive then\ncall it a day. Another uses dd to make an image backup. Others need\ndurability or a guarantee of integrity or even encryption, and so acquire or\npurchase a tool that conforms to their requirements. Or someone creates\ntheir own backup solution that meets their requirements.\n\nOn the other hand, which OS distributors offer a long list of\nrequirements or a recipe for perfect backups? (Yeah, I'm saying this\nbased on nothing, just from a prejudice.)\n\nIf the system is serious, those who don't know enough about backups ought to\nconsult professionals before building an inadequate backup system and\nlosing their data.\n\n> > > Hmm. A simple command that could be used as reference is for example\n> > > \"dd\" that flushes the file by itself, or we could just revisit the\n> > > discussions about having a pg_copy command, or we could document a\n> > > small utility in perl that does the job.\n> > \n> > I think we should do that if pg_copy comforms the mandatory\n> > requirements but maybe it's in the future. Showing the minimal\n> > implement in perl looks good.\n> \n> Already tried doing it in perl. 
No, it's not simple and it's also\n> entirely vaporware today and implies that we're going to develop this\n> tool, improve it in the future as we realize it needs to be improved,\n> and maintain it as part of core forever. If we want to actually adopt\n> and pull in a backup tool to be part of core then we should talk about\n> things which actually exist, such as the various existing projects that\n> have been written to specifically work to address all the requirements\n> which are understood today, not say \"well, we can just write a simple\n> perl script to do it\" because it's not actually that simple.\n> \n> Providing yet another half solution would be doubling-down on the failed\n> approach to document a \"simple\" solution and would be a disservice to\n> our users.\n\nOk, if we follow the direction that we are responsible for ensuring\nthat every user has reliable backups, I can't come up with a proper\ndescription of that.\n\nWe could list several \"requirements\" like \"do sync after copy\", \"take a\nchecksum for all files then check it periodically\" or other things, but\nwhat is more important to list here, I think, is \"how we run\nthe archive_command\".\n\nDoesn't the following work for now?\n\n(No example)\n\n- \"%f is replaced by ... %p is .., %r is ... in archive_command\"\n\n- We call the archive_command for every WAL segment that is finished.\n\n- We may call the archive_command for the same file more than once.\n\n- We may call the archive_command for different files with the same\n name. In this case the server is working incorrectly and needs a\n check. Don't overwrite with the new content.\n\n- We don't offer any durability or integrity on the archived\n files. All of that is up to you. You can use some existing\n solutions for archiving. 
See the following links.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:04:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "At Wed, 16 Jun 2021 12:04:03 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Ok, if we follow the direction that we are responsible for ensuring\n> that every user has reliable backups, I can't come up with a proper\n> description of that.\n> \n> We could list several \"requirements\" like \"do sync after copy\", \"take a\n> checksum for all files then check it periodically\" or other things, but\n> what is more important to list here, I think, is \"how we run\n> the archive_command\".\n> \n> Doesn't the following work for now?\n> \n> (No example)\n> \n> - \"%f is replaced by ... %p is .., %r is ... in archive_command\"\n> \n> - We call the archive_command for every WAL segment that is finished.\n> \n> - We may call the archive_command for the same file more than once.\n> \n> - We may call the archive_command for different files with the same\n> name. In this case the server is working incorrectly and needs a\n> check. Don't overwrite with the new content.\n> \n> - We don't offer any durability or integrity on the archived\n> files. All of that is up to you. You can use some existing\n> solutions for archiving. See the following links.\n\nOf course, there should be some description of error handling\nto go along with these.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:07:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Tue, Jun 15, 2021 at 11:00:57PM -0400, Stephen Frost wrote:\n> \n> As I suggested previously- this is similar to the hooks that we provide. 
We\n> don’t extensively document them because if you’re writing an extension\n> which uses a hook, you’re going to be (or should be..) reading the code too.\n\nI disagree, hooks allows developers to implement some new or additional\nbehavior which by definition can't be documented. And it's also relying on the\nC api which by definition allows you to do anything with the server. There are\nalso of course some requirements but they're quite obvious (like a planner_hook\nshould return a valid plan and such).\n\nOn the other hand the archive_command is there to do only one clear thing:\nsafely backup a WAL file. And I don't think that what makes that backup \"safe\"\nis open to discussion. Sure, you can chose to ignore some of it if you think\nyou can afford to do it, but it doesn't change the fact that it's still a\nrequirement which should be documented.\n\n> Consider that, really, an archive command should refuse to allow archiving\n> of WAL on a timeline which doesn’t have a corresponding history file in the\n> archive for that timeline (excluding timeline 1).\n\nYes, that's a clear requirement that should be documented.\n\n> Also, a backup tool\n> should compare the result of pg_start_backup to what’s in the control file,\n> using a fresh read, after start backup returns to make sure that the\n> storage is sane and not, say, cache’ing pages independently (such as might\n> happen with a separate NFS mount..). Oh, and if a replica is involved, a\n> check should be done to see if the replica has changed timelines and an\n> appropriate message thrown if that happens complaining that the backup was\n> aborted due to the promotion of the replica…\n\nI agree, but unless I'm missing something it's unrelated to what an\narchive_command should be in charge of?\n\n\n", "msg_date": "Wed, 16 Jun 2021 11:20:55 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" 
}, { "msg_contents": "At Wed, 16 Jun 2021 11:20:55 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \r\n> On Tue, Jun 15, 2021 at 11:00:57PM -0400, Stephen Frost wrote:\r\n> > \r\n> > As I suggested previously- this is similar to the hooks that we provide. We\r\n> > don’t extensively document them because if you’re writing an extension\r\n> > which uses a hook, you’re going to be (or should be..) reading the code too.\r\n> \r\n> I disagree, hooks allows developers to implement some new or additional\r\n> behavior which by definition can't be documented. And it's also relying on the\r\n> C api which by definition allows you to do anything with the server. There are\r\n> also of course some requirements but they're quite obvious (like a planner_hook\r\n> should return a valid plan and such).\r\n> \r\n> On the other hand the archive_command is there to do only one clear thing:\r\n> safely backup a WAL file. And I don't think that what makes that backup \"safe\"\r\n> is open to discussion. Sure, you can chose to ignore some of it if you think\r\n> you can afford to do it, but it doesn't change the fact that it's still a\r\n> requirement which should be documented.\r\n\r\nI agree to Julien, however, I want to discuss (also) on what to do for\r\n14 now. If we decide not to touch the document for the version. that\r\ndiscussion would end. What do you think about that? I think it's\r\nimpossible to write the full-document for the requirements *for 14*.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Wed, 16 Jun 2021 13:10:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Wed, Jun 16, 2021 at 01:10:16PM +0900, Kyotaro Horiguchi wrote:\n> \n> I agree to Julien, however, I want to discuss (also) on what to do for\n> 14 now. If we decide not to touch the document for the version. 
that\n> discussion would end. What do you think about that? I think it's\n> impossible to write the full-document for the requirements *for 14*.\n\nMy personal take on that is that this is a bug in the documentation and the\nlist of requirements should be backported. Hopefully this can be done before\nv14 is released, but if not I don't think that it should be a blocker to make\nprogress.\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:36:19 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "Greetings,\n\nOn Tue, Jun 15, 2021 at 23:21 Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Tue, Jun 15, 2021 at 11:00:57PM -0400, Stephen Frost wrote:\n> >\n> > As I suggested previously- this is similar to the hooks that we provide.\n> We\n> > don’t extensively document them because if you’re writing an extension\n> > which uses a hook, you’re going to be (or should be..) reading the code\n> too.\n>\n> I disagree, hooks allows developers to implement some new or additional\n> behavior which by definition can't be documented. And it's also relying\n> on the\n> C api which by definition allows you to do anything with the server.\n> There are\n> also of course some requirements but they're quite obvious (like a\n> planner_hook\n> should return a valid plan and such).\n\n\nThe archive command is technically invoked using the shell, but the\ninterpretation of the exit code, for example, is only discussed in the C\ncode, but it’s far from the only consideration that someone developing an\narchive command needs to understand.\n\nOn the other hand the archive_command is there to do only one clear thing:\n> safely backup a WAL file. And I don't think that what makes that backup\n> \"safe\"\n> is open to discussion. 
Sure, you can chose to ignore some of it if you\n> think\n> you can afford to do it, but it doesn't change the fact that it's still a\n> requirement which should be documented.\n\n\nThe notion that an archive command can be distanced from backups is really\nnot reasonable in my opinion.\n\n> Consider that, really, an archive command should refuse to allow archiving\n> > of WAL on a timeline which doesn’t have a corresponding history file in\n> the\n> > archive for that timeline (excluding timeline 1).\n>\n> Yes, that's a clear requirement that should be documented.\n\n\nIs it a clear requirement that pgbackrest and every other organization that\nhas developed an archive command has missed? Are you able to point to a\ntool which has such a check today?\n\nThis is not a trivial problem any more than PG’s use of fsync is trivial\nand we clearly should have understood how Linux and fsync work decades ago\nand made sure to always crash on any fsync failure and not believe that a\nlater fsync would return a failure if an earlier one did and the problem\ndidn’t resolve itself properly.\n\n> Also, a backup tool\n> > should compare the result of pg_start_backup to what’s in the control\n> file,\n> > using a fresh read, after start backup returns to make sure that the\n> > storage is sane and not, say, cache’ing pages independently (such as\n> might\n> > happen with a separate NFS mount..). Oh, and if a replica is involved, a\n> > check should be done to see if the replica has changed timelines and an\n> > appropriate message thrown if that happens complaining that the backup\n> was\n> > aborted due to the promotion of the replica…\n>\n> I agree, but unless I'm missing something it's unrelated to what an\n> archive_command should be in charge of?\n\n\nI’m certainly not moved by this argument as it seems to be willfully\nmissing the point. 
Further, if we are going to claim that we must document\narchive command to such level then surely we need to also document all the\nrequirements for pg_start_backup and pg_stop_backup too, so this strikes me\nas entirely relevant.\n\nThanks,\n\nStephen\n", "msg_date": "Wed, 16 Jun 2021 01:17:11 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" 
}, { "msg_contents": "At Tue, 15 Jun 2021 07:54:49 -0400, Andrew Dunstan <andrew@dunslane.net> wrote in \n> \n> On 6/15/21 2:16 AM, Kyotaro Horiguchi wrote:\n> > At Fri, 11 Jun 2021 10:46:45 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> >> I think jacana uses msys[2?], so this likely indicates a problem\n> >> in path sanitization for the archive command. Andrew, any advice?\n> > Thanks for fixing it.\n> >\n> > # I haven't still succeed to run TAP tests on MSYS2 environment. I\n> > # cannot install IPC::Run for msys perl..\n> >\n> > regards.\n> >\n> \n> \n> Unpack the attached somewhere and point your PERL5LIB at it. That's all\n> I do.\n\nThanks a lot, Andrew. I get to run the TAP test with it and saw the\nsame error with jacana.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Jun 2021 14:20:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in recovery?" }, { "msg_contents": "On Wed, Jun 16, 2021 at 01:17:11AM -0400, Stephen Frost wrote:\n> \n> The archive command is technically invoked using the shell, but the\n> interpretation of the exit code, for example, is only discussed in the C\n> code, but it’s far from the only consideration that someone developing an\n> archive command needs to understand.\n\nThe expectations for the return code are documented. There are some subtleties\nfor when the command is interrupted by a signal, which are also documented.\nWhy shouldn't we document the rest of the expectation of what such a command\nshould do?\n\n> The notion that an archive command can be distanced from backups is really\n> not reasonable in my opinion.\n\nAs far as I know you can use archiving for replication purpose only. In such\ncase you certainly will have different usage of the archived files compared to\nbackups, and different verifications. 
But the requirements on what makes an\narchive_command safe won't change.\n\n> > Consider that, really, an archive command should refuse to allow archiving\n> > > of WAL on a timeline which doesn’t have a corresponding history file in\n> > the\n> > > archive for that timeline (excluding timeline 1).\n> >\n> > Yes, that's a clear requirement that should be documented.\n> \n> \n> Is it a clear requirement that pgbackrest and every other organization that\n> has developed an archive command has missed? Are you able to point to a\n> tool which has such a check today?\n\nI don't know, as I don't have any knowledge of what barman, BART, pgbackrest,\npg_probackup or any other backup solution does in any detail. I was only saying\nthat what you said makes sense and should be part of the documentation,\nassuming that this is indeed a requirement.\n\n> > Also, a backup tool\n> > > should compare the result of pg_start_backup to what’s in the control\n> > file,\n> > > using a fresh read, after start backup returns to make sure that the\n> > > storage is sane and not, say, cache’ing pages independently (such as\n> > might\n> > > happen with a separate NFS mount..). Oh, and if a replica is involved, a\n> > > check should be done to see if the replica has changed timelines and an\n> > > appropriate message thrown if that happens complaining that the backup\n> > was\n> > > aborted due to the promotion of the replica…\n> >\n> > I agree, but unless I'm missing something it's unrelated to what an\n> > archive_command should be in charge of?\n> \n> I’m certainly not moved by this argument as it seems to be willfully\n> missing the point. Further, if we are going to claim that we must document\n> archive command to such level then surely we need to also document all the\n> requirements for pg_start_backup and pg_stop_backup too, so this strikes me\n> as entirely relevant.\n\nSo what was the point? 
I'm not saying that doing backup is trivial and/or\nshould not be properly documented, nor that we shouldn't improve\npg_start_backup or pg_stop_backup documentation, I'm just saying that those\ndoesn't change what makes an archive_command safe.\n\n\n", "msg_date": "Wed, 16 Jun 2021 13:38:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "At Wed, 16 Jun 2021 12:36:19 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Wed, Jun 16, 2021 at 01:10:16PM +0900, Kyotaro Horiguchi wrote:\n> > \n> > I agree to Julien, however, I want to discuss (also) on what to do for\n> > 14 now. If we decide not to touch the document for the version. that\n> > discussion would end. What do you think about that? I think it's\n> > impossible to write the full-document for the requirements *for 14*.\n> \n> My personal take on that is that this is a bug in the documentation and the\n> list of requirements should be backported. Hopefully this can be done before\n> v14 is released, but if not I don't think that it should be a blocker to make\n> progress.\n\nI understand that, we discuss how we fix the documentation and we\ndon't change it for the version 14 if any conclusion cannot be made\nuntil the deadline.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Jun 2021 17:00:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" 
}, { "msg_contents": "Greetings,\n\n* Julien Rouhaud (rjuju123@gmail.com) wrote:\n> On Wed, Jun 16, 2021 at 01:17:11AM -0400, Stephen Frost wrote:\n> > > Consider that, really, an archive command should refuse to allow archiving\n> > > > of WAL on a timeline which doesn’t have a corresponding history file in\n> > > the\n> > > > archive for that timeline (excluding timeline 1).\n> > >\n> > > Yes, that's a clear requirement that should be documented.\n> > \n> > \n> > Is it a clear requirement that pgbackrest and every other organization that\n> > has developed an archive command has missed? Are you able to point to a\n> > tool which has such a check today?\n> \n> I don't know, as I don't have any knowledge of what barman, BART, pgbackrest,\n> pg_probackup or any other backup solution does in any detail. I was only saying\n> that what you said makes sense and should be part of the documentation,\n> assuming that this is indeed a requirement.\n\nThis is exactly it. I don't agree that we can, or should, treat every\nsensible thing that we realize about what the archive command or the\nbackup tool should be doing as some bug in our documentation that has to\nbe backpatched.\n\nIf you're serious about continuing on this path, it strikes me that the\nnext step would be to go review all of the above mentioned tools,\nidentify all of the things that they do and the checks that they have,\nand then craft a documentation patch to add all of those- for both\narchive command and pg_start/stop_backup.\n\nI don't think it'd be as big as the rest of the PG documentation, but\nI'm not sure.\n\nThanks,\n\nStephen", "msg_date": "Wed, 16 Jun 2021 09:19:51 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "On Wed, Jun 16, 2021 at 9:19 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> This is exactly it. 
I don't agree that we can, or should, treat every\n> sensible thing that we realize about what the archive command or the\n> backup tool should be doing as some bug in our documentation that has to\n> be backpatched.\n> If you're serious about continuing on this path, it strikes me that the\n> next step would be to go review all of the above mentioned tools,\n> identify all of the things that they do and the checks that they have,\n> and then craft a documentation patch to add all of those- for both\n> archive command and pg_start/stop_backup.\n\n1) I'm not saying that every single check that every single tools\ncurrently does is a requirement for a safe command and/or should be\ndocumented\n2) I don't think that there are thousands and thousands of\nrequirements, as you seem to imply\n3) I still don't understand why you think that having a partial\nknowledge of what makes an archive_command safe scattered in the\nsource code of many third party tools is a good thing\n\nBut what better alternative are you suggesting? Say that no ones\nknows what an archive_command should do and let people put a link to\ntheir backup solution in the hope that they will eventually converge\nto a safe solution that no one will be able to validate?\n\n\n", "msg_date": "Wed, 16 Jun 2021 21:43:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" }, { "msg_contents": "Greetings,\n\n* Julien Rouhaud (rjuju123@gmail.com) wrote:\n> On Wed, Jun 16, 2021 at 9:19 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > This is exactly it. 
I don't agree that we can, or should, treat every\n> > sensible thing that we realize about what the archive command or the\n> > backup tool should be doing as some bug in our documentation that has to\n> > be backpatched.\n> > If you're serious about continuing on this path, it strikes me that the\n> > next step would be to go review all of the above mentioned tools,\n> > identify all of the things that they do and the checks that they have,\n> > and then craft a documentation patch to add all of those- for both\n> > archive command and pg_start/stop_backup.\n> \n> 1) I'm not saying that every single check that every single tools\n> currently does is a requirement for a safe command and/or should be\n> documented\n\nThat's true- you're agreeing that there's even checks beyond those that\nare currently implemented which should also be done. That's exactly\nwhat I was responding to.\n\n> 2) I don't think that there are thousands and thousands of\n> requirements, as you seem to imply\n\nYou've not reviewed any of the tools which have been written and so I'm\nnot sure what you're basing your belief on. I've done reviews of the\nvarious tools and have been rather involved in the development of one of\nthem. I do think there's lots of requirements and it's not some static\nlist which could be just written down once and then never touched or\nthought about again.\n\nConsider pg_dump- do we document everything that a logical export tool\nshould do? That someone who wants to implement pg_dump should make sure\nthat the tool runs around and takes out a share lock on all of the\ntables to be exported? No, of course we don't, because we provide a\ntool to do that and if people actually want to understand how it works,\nwe point them to the source code. Had we started out with a backup tool\nin core, the same would be true for that. 
Instead, we didn't, and such\ntools were developed outside of core (and frankly have largely had to\nplay catch-up to try and figure out all the things that are needed to do\nit well and likely always will be since they aren't part of core).\n\n> 3) I still don't understand why you think that having a partial\n> knowledge of what makes an archive_command safe scattered in the\n> source code of many third party tools is a good thing\n\nHaving partial knowledge of what makes an archive_command safe in the\nofficial documentation is somehow better..? What would that lead to-\nother people seriously developing a backup solution for PG? No, I\nseriously doubt that, as those who are seriously developing such\nsolutions couldn't trust to only what we've got documented anyway but\nwould have to go looking through the source code and would need to\ndevelop a deep understanding of how WAL works, what happens when PG is\nstarted up to perform PITR but with archiving disabled and how that\nimpacts what ends up being archived (hint: the server will switch\ntimelines but won't actually archive a history file because archiving is\ndisabled- a restart which then enables archiving will then start pushing\nWAL on a timeline where there's no history file; do that twice from an\nolder backup and not you've got the same WAL files trying to be pushed\ninto the repo which are actually on materially different timelines even\nthough the same timeline has been chosen multiple times...), how\ntimelines work, and all the rest.\n\nWe already have partial documentation about what should go into\ndeveloping an archive_command and what it's lead to are people ignoring\nthat and instead copying the example that's explicitly called out as not\nsufficient. 
That's the actual problem that needs to be addressed here.\n\nLet's rip out the example and instead promote tools which have been\nwritten to specifically address this and which are actively maintained.\nIf someone actually comes asking about how to develop their own backup\nsolution for PG, we should suggest that they review the PG code related\nto WAL, timelines, how promotion works, etc, and probably point them at\nthe OSS projects which already work to tackle this issue, because to\ndevelop a proper tool you need to actually understand all of that.\n\n> But what better alternative are you suggesting? Say that no ones\n> knows what an archive_command should do and let people put a link to\n> their backup solution in the hope that they will eventually converge\n> to a safe solution that no one will be able to validate?\n\nThere are people who do know, today, what an archive command should do\nand we should be promoting the tools that they've developed which do, in\nfact, implement those checks already, at least the ones we've thought of\nso far.\n\nInstead, the suggestion being made here is to write a detailed design\ndocument for how to develop a backup tool (and, no, I don't agree that\nwe can \"just\" focus on archive command) for PG and then to maintain it\nand update it and backpatch every change to it that we think of.\n\nThanks,\n\nStephen", "msg_date": "Wed, 16 Jun 2021 10:24:17 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Duplicate history file?" } ]
[ { "msg_contents": "In an attempt to slice off as much non-NSS specific changes as possible from\nthe larger libnss patch proposed in [0], the attached patch contains the ssl\ntest harness refactoring to support multiple TLS libraries.\n\nThe changes are mostly a refactoring to hide library specific setup in their\nown modules, but also extend set_server_cert() to support password command\nwhich cleans up the TAP tests from hands-on setup and teardown. \n\ncheers ./daniel\n\n[0] https://postgr.es/m/FAB21FC8-0F62-434F-AA78-6BD9336D630A@yesql.se", "msg_date": "Thu, 21 Jan 2021 10:42:02 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "Attached is a v2 which addresses the comments raised on the main NSS thread, as\nwell as introduces named parameters for the server cert function to make the\ntest code easier to read.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Thu, 25 Mar 2021 00:02:00 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "On 2021-Mar-25, Daniel Gustafsson wrote:\n\n> Attached is a v2 which addresses the comments raised on the main NSS thread, as\n> well as introduces named parameters for the server cert function to make the\n> test code easier to read.\n\nI don't like this patch. 
I think your SSL::Server::OpenSSL and\nSSL::Server::NSS packages should be doing \"use parent SSL::Server\";\nhaving SSL::Server grow a line to \"use\" its subclass\nSSL::Server::OpenSSL and import its get_new_openssl_backend() method\nseems to go against the grain.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Wed, 24 Mar 2021 20:26:30 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "> On 25 Mar 2021, at 00:26, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2021-Mar-25, Daniel Gustafsson wrote:\n> \n>> Attached is a v2 which addresses the comments raised on the main NSS thread, as\n>> well as introduces named parameters for the server cert function to make the\n>> test code easier to read.\n> \n> I don't like this patch. I think your SSL::Server::OpenSSL and\n> SSL::Server::NSS packages should be doing \"use parent SSL::Server\";\n> having SSL::Server grow a line to \"use\" its subclass\n> SSL::Server::OpenSSL and import its get_new_openssl_backend() method\n> seems to go against the grain.\n\nI'm far from skilled at Perl module inheritance but that makes sense, I'll take\na stab at that after some sleep and coffee.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 25 Mar 2021 00:49:47 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "\nOn 3/24/21 7:49 PM, Daniel Gustafsson wrote:\n>> On 25 Mar 2021, at 00:26, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>\n>> On 2021-Mar-25, Daniel Gustafsson wrote:\n>>\n>>> Attached is a v2 which addresses the comments raised on the main NSS thread, as\n>>> well as introduces named parameters for the server cert function to make the\n>>> test code easier to read.\n>> I don't like this patch. 
I think your SSL::Server::OpenSSL and\n>> SSL::Server::NSS packages should be doing \"use parent SSL::Server\";\n>> having SSL::Server grow a line to \"use\" its subclass\n>> SSL::Server::OpenSSL and import its get_new_openssl_backend() method\n>> seems to go against the grain.\n> I'm far from skilled at Perl module inheritance but that makes sense, I'll take\n> a stab at that after some sleep and coffee.\n>\n\nThe thing is that SSLServer isn't currently constructed in an OO\nfashion. Typically, OO modules in perl don't export anything, and all\naccess is via the class (for the constructor or static methods) or\ninstances, as in\n\n    my $instance = MyClass->new();\n    $instance->mymethod();\n\nIn such a module you should not see lines using Exporter or defining\n@Export.\n\nSo probably the first step in this process would be to recast SSLServer\nas an OO type module, without subclassing it, and then create a subclass\nalong the lines Alvarro suggests.\n\nIf this is all strange to you, I can help a bit.\n\nIncidentally, I'm not sure why we need to break SSLServer into\nSSL::Server - are we expecting to create other children of the SSL\nnamespace?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 25 Mar 2021 09:25:11 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "On Thu, Mar 25, 2021 at 09:25:11AM -0400, Andrew Dunstan wrote:\n> The thing is that SSLServer isn't currently constructed in an OO\n> fashion. 
Typically, OO modules in perl don't export anything, and all\n> access is via the class (for the constructor or static methods) or\n> instances, as in\n> \n>     my $instance = MyClass->new();\n>     $instance->mymethod();\n> \n> In such a module you should not see lines using Exporter or defining\n> @Export.\n> \n> So probably the first step in this process would be to recast SSLServer\n> as an OO type module, without subclassing it, and then create a subclass\n> along the lines Alvarro suggests.\n\nIt seems that it does not make sense to transform all the contents of\nSSLServer to become an OO module. So it looks necessary to me to\nsplit things, with one part being the OO module managing the server\nconfiguration. So, first, we have some helper routines that should\nnot be within the module:\n- copy_files()\n- test_connect_fails()\n- test_connect_ok()\nThe test_*() ones are just wrappers for psql able to use a customized\nconnection string. It seems to me that it would make sense to move\nthose two into PostgresNode::psql itself and extend it to be able to\nhandle custom connection strings? copy_files() is more generic than\nthat. 
Wouldn't it make sense to move that to TestLib.pm instead?\n\nSecond, the routines managing the server setup itself:\n- a new() routine to create and register a node removing the\nduplicated initialization setup in 001 and 002.\n- switch_server_cert(), with a split on set_server_cert() as that\nlooks cleaner.\n- configure_hba_for_ssl()\n- install_certificates() (present inside Daniel's patch)\n- Something to copy the keys from the tree.\n\nPatch v2 from upthread does mostly that, but it seems to me that we\nshould integrate better with PostgresNode to manage the backend node,\nno?\n\n> Incidentally, I'm not sure why we need to break SSLServer into\n> SSL::Server - are we expecting to create other children of the SSL\n> namespace?\n\nAgreed.\n--\nMichael", "msg_date": "Tue, 30 Mar 2021 15:50:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "On Tue, Mar 30, 2021 at 03:50:28PM +0900, Michael Paquier wrote:\n> The test_*() ones are just wrappers for psql able to use a customized\n> connection string. It seems to me that it would make sense to move\n> those two into PostgresNode::psql itself and extend it to be able to\n> handle custom connection strings?\n\nLooking at this part, I think that this is a win in terms of future\nchanges for SSLServer.pm as it would become a facility only in charge\nof managing the backend's SSL configuration. This has also the\nadvantage to make the error handling with psql more consistent with\nthe other tests.\n\nSo, attached is a patch to do this simplification. The bulk of the\nchanges is within the tests themselves to adapt to the merge of\n$common_connstr and $connstr for the new routines of PostgresNode.pm,\nand I have done things this way to ease the patch lookup. 
Thoughts?\n--\nMichael", "msg_date": "Tue, 30 Mar 2021 18:53:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "\nOn 3/30/21 5:53 AM, Michael Paquier wrote:\n> On Tue, Mar 30, 2021 at 03:50:28PM +0900, Michael Paquier wrote:\n>> The test_*() ones are just wrappers for psql able to use a customized\n>> connection string. It seems to me that it would make sense to move\n>> those two into PostgresNode::psql itself and extend it to be able to\n>> handle custom connection strings?\n> Looking at this part, I think that this is a win in terms of future\n> changes for SSLServer.pm as it would become a facility only in charge\n> of managing the backend's SSL configuration. This has also the\n> advantage to make the error handling with psql more consistent with\n> the other tests.\n>\n> So, attached is a patch to do this simplification. The bulk of the\n> changes is within the tests themselves to adapt to the merge of\n> $common_connstr and $connstr for the new routines of PostgresNode.pm,\n> and I have done things this way to ease the patch lookup. Thoughts?\n\n\n\nLooks reasonable.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 30 Mar 2021 09:45:36 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "On 2021-Mar-30, Michael Paquier wrote:\n\n> On Tue, Mar 30, 2021 at 03:50:28PM +0900, Michael Paquier wrote:\n> > The test_*() ones are just wrappers for psql able to use a customized\n> > connection string. 
It seems to me that it would make sense to move\n> > those two into PostgresNode::psql itself and extend it to be able to\n> > handle custom connection strings?\n> \n> Looking at this part, I think that this is a win in terms of future\n> changes for SSLServer.pm as it would become a facility only in charge\n> of managing the backend's SSL configuration. This has also the\n> advantage to make the error handling with psql more consistent with\n> the other tests.\n> \n> So, attached is a patch to do this simplification. The bulk of the\n> changes is within the tests themselves to adapt to the merge of\n> $common_connstr and $connstr for the new routines of PostgresNode.pm,\n> and I have done things this way to ease the patch lookup. Thoughts?\n\nI agree this seems a win.\n\nThe only complaint I have is that \"the given node\" is nonsensical in\nPostgresNode. I suggest to delete the word \"given\". Also \"This is\nexpected to fail with a message that matches the regular expression\n$expected_stderr\".\n\nThe POD doc for connect_fails uses order: ($connstr, $testname, $expected_stderr)\nbut the routine has:\n + my ($self, $connstr, $expected_stderr, $testname) = @_;\n\nthese should match.\n\n(There's quite an inconsistency in the existing test code about\nexpected_stderr being a string or a regex; and some regexes are quite\ngeneric: just qr/SSL error/. 
Not this patch responsibility to fix that.)\n\nAs I understand, our perlcriticrc no longer requires 'return' at the end\nof routines (commit 0516f94d18c5), so you can omit that.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Tue, 30 Mar 2021 12:15:07 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "> On 30 Mar 2021, at 11:53, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Mar 30, 2021 at 03:50:28PM +0900, Michael Paquier wrote:\n>> The test_*() ones are just wrappers for psql able to use a customized\n>> connection string. It seems to me that it would make sense to move\n>> those two into PostgresNode::psql itself and extend it to be able to\n>> handle custom connection strings?\n> \n> Looking at this part, I think that this is a win in terms of future\n> changes for SSLServer.pm as it would become a facility only in charge\n> of managing the backend's SSL configuration. This has also the\n> advantage to make the error handling with psql more consistent with\n> the other tests.\n> \n> So, attached is a patch to do this simplification. The bulk of the\n> changes is within the tests themselves to adapt to the merge of\n> $common_connstr and $connstr for the new routines of PostgresNode.pm,\n> and I have done things this way to ease the patch lookup. Thoughts?\n\nLGTM with the findings that Alvaro reported.\n\n+$node->connect_ok($common_connstr . \" \" . \"user=ssltestuser\",\n\nThis double concatenation could be a single concat, or just use scalar value\ninterpolation in the string to make it even more readable. 
As it isn't using\nthe same line broken pattern of the others the concat looks a bit weird as a\nresult.\n\nThanks for picking it up, as I have very limited time for hacking right now.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n", "msg_date": "Tue, 30 Mar 2021 23:59:17 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "On 2021-Mar-30, Daniel Gustafsson wrote:\n\n> +$node->connect_ok($common_connstr . \" \" . \"user=ssltestuser\",\n> \n> This double concatenation could be a single concat, or just use scalar value\n> interpolation in the string to make it even more readable. As it isn't using\n> the same line broken pattern of the others the concat looks a bit weird as a\n> result.\n\n+1 for using a single scalar.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Tue, 30 Mar 2021 19:14:55 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "On Tue, Mar 30, 2021 at 07:14:55PM -0300, Alvaro Herrera wrote:\n> On 2021-Mar-30, Daniel Gustafsson wrote:\n>> This double concatenation could be a single concat, or just use scalar value\n>> interpolation in the string to make it even more readable. As it isn't using\n>> the same line broken pattern of the others the concat looks a bit weird as a\n>> result.\n> \n> +1 for using a single scalar.\n\nAgreed. 
I posted things this way to make a lookup at the diffs easier\nfor the eye, but that was not intended for the final patch.\n--\nMichael", "msg_date": "Wed, 31 Mar 2021 10:01:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "On Tue, Mar 30, 2021 at 12:15:07PM -0300, Alvaro Herrera wrote:\n> The only complain I have is that \"the given node\" is nonsensical in\n> PostgresNode. I suggest to delete the word \"given\". Also \"This is\n> expected to fail with a message that matches the regular expression\n> $expected_stderr\".\n\nYour suggestions are better, indeed.\n\n> The POD doc for connect_fails uses order: ($connstr, $testname, $expected_stderr)\n> but the routine has:\n> + my ($self, $connstr, $expected_stderr, $testname) = @_;\n> \n> these should match.\n\nFixed.\n\n> (There's quite an inconsistency in the existing test code about\n> expected_stderr being a string or a regex; and some regexes are quite\n> generic: just qr/SSL error/. Not this patch responsibility to fix that.)\n\nJacob has just raised this as an issue for an integration with NLS,\nbecause it may be possible that things fail with \"SSL error\" but a\ndifferent error pattern, causing false positives:\nhttps://www.postgresql.org/message-id/e0f0484a1815b26bb99ef9ddc7a110dfd6425931.camel@vmware.com\n\nI agree that those matches should be much more picky. We may need to\nbe careful across all versions of OpenSSL supported though :/\n\n> As I understand, our perlcriticrc no longer requires 'return' at the end\n> of routines (commit 0516f94d18c5), so you can omit that.\n\nFixed. Thanks.\n\nWith all the comments addressed, with updates to use a single scalar\nfor all the connection strings and with a proper indentation, I finish\nwith the attached. 
Does that look fine?\n--\nMichael", "msg_date": "Wed, 31 Mar 2021 10:43:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" }, { "msg_contents": "On Wed, Mar 31, 2021 at 10:43:00AM +0900, Michael Paquier wrote:\n> Jacob has just raised this as an issue for an integration with NLS,\n> because it may be possible that things fail with \"SSL error\" but a\n> different error pattern, causing false positives:\n> https://www.postgresql.org/message-id/e0f0484a1815b26bb99ef9ddc7a110dfd6425931.camel@vmware.com\n> \n> I agree that those matches should be much more picky. We may need to\n> be careful across all versions of OpenSSL supported though :/\n\nAs I got my eyes on that, I am going to begin a new thread with a patch.\n\n> With all the comments addressed, with updates to use a single scalar\n> for all the connection strings and with a proper indentation, I finish\n> with the attached. Does that look fine?\n\nHearing nothing, I have applied this cleanup patch. I am not sure if\nI will be able to tackle the remaining issues, aka switching\nSSLServer.pm to become an OO module and plug OpenSSL-specific things\non top of that.\n--\nMichael", "msg_date": "Thu, 1 Apr 2021 10:02:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Refactor SSL test framework to support multiple TLS libraries" } ]
[ { "msg_contents": "Hi,\n\nCreating/altering subscription is successful when we specify a\npublication which does not exist in the publisher. I felt we should\nthrow an error in this case, that will help the user to check if there\nis any typo in the create subscription command or to create the\npublication before the subscription is created.\nIf the above analysis looks correct, then please find a patch that\nchecks if the specified publications are present in the publisher and\nthrows an error if any of the publications is missing in the\npublisher.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 21 Jan 2021 18:55:37 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Thu, Jan 21, 2021 at 6:56 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> Creating/altering subscription is successful when we specify a\n> publication which does not exist in the publisher. I felt we should\n> throw an error in this case, that will help the user to check if there\n> is any typo in the create subscription command or to create the\n> publication before the subscription is created.\n> If the above analysis looks correct, then please find a patch that\n> checks if the specified publications are present in the publisher and\n> throws an error if any of the publications is missing in the\n> publisher.\n> Thoughts?\n\nI was having similar thoughts (while working on the logical\nreplication bug on alter publication...drop table behaviour) on why\ncreate subscription succeeds without checking the publication\nexistence. I checked in documentation, to find if there's a strong\nreason for that, but I couldn't. 
Maybe it's because of the principle\n\"first let users create subscriptions, later the publications can be\ncreated on the publisher system\", similar to this behaviour\n\"publications can be created without any tables attached to it\ninitially, later they can be added\".\n\nOthers may have better thoughts.\n\nIf we do check publication existence for CREATE SUBSCRIPTION, I think\nwe should also do it for ALTER SUBSCRIPTION ... SET PUBLICATION.\n\nI wonder why dropping a publication from the list of publications\nassociated with a subscription is not allowed.\n\nSome comments so far on the patch:\n\n1) I see most of the code in the new function check_publications() and\nexisting fetch_table_list() is the same. Can we have a generic\nfunction, with maybe a flag to separate out the logic specific for\nchecking publication and fetching table list from the publisher.\n2) Can't we know whether the publications exist on the publisher with\nthe existing (or modifying it a bit if required) query in\nfetch_table_list(), so that we can avoid making another connection to\nthe publisher system from the subscriber?\n3) If multiple publications are specified in the CREATE SUBSCRIPTION\nquery, IIUC, with your patch, the query fails even if at least one of\nthe publications doesn't exist. Should we throw a warning in this case\nand allow the subscription to be created for other existing\npublications?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Jan 2021 22:21:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "\nOn Fri, 22 Jan 2021 at 00:51, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Thu, Jan 21, 2021 at 6:56 PM vignesh C <vignesh21@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> Creating/altering subscription is successful when we specify a\n>> publication which does not exist in the publisher. 
}, { "msg_contents": "\nOn Fri, 22 Jan 2021 at 00:51, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Thu, Jan 21, 2021 at 6:56 PM vignesh C <vignesh21@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> Creating/altering subscription is successful when we specify a\n>> publication which does not exist in the publisher. I felt we should\n>> throw an error in this case, that will help the user to check if there\n>> is any typo in the create subscription command or to create the\n>> publication before the subscription is created.\n>> If the above analysis looks correct, then please find a patch that\n>> checks if the specified publications are present in the publisher and\n>> throws an error if any of the publications is missing in the\n>> publisher.\n>> Thoughts?\n>\n> I was having similar thoughts (while working on the logical\n> replication bug on alter publication...drop table behaviour) on why\n> create subscription succeeds without checking the publication\n> existence. I checked in documentation, to find if there's a strong\n> reason for that, but I couldn't. Maybe it's because of the principle\n> \"first let users create subscriptions, later the publications can be\n> created on the publisher system\", similar to this behaviour\n> \"publications can be created without any tables attached to it\n> initially, later they can be added\".\n>\n> Others may have better thoughts.\n>\n> If we do check publication existence for CREATE SUBSCRIPTION, I think\n> we should also do it for ALTER SUBSCRIPTION ... SET PUBLICATION.\n>\n\nAgreed. Current patch do not check publication existence for\nALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).\n\n> I wonder, why isn't dropping a publication from a list of publications\n> that are with subscription is not allowed?\n>\n> Some comments so far on the patch:\n>\n> 1) I see most of the code in the new function check_publications() and\n> existing fetch_table_list() is the same. 
Can we have a generic\n> function, with maybe a flag to separate out the logic specific for\n> checking publication and fetching table list from the publisher.\n\n+1\n\n> 2) Can't we know whether the publications exist on the publisher with\n> the existing (or modifying it a bit if required) query in\n> fetch_table_list(), so that we can avoid making another connection to\n> the publisher system from the subscriber?\n\nIIUC, the patch does not make another connection, it just executes a new\nquery on the existing connection. If we want to check publication existence\nfor ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false)\nwe should make another connection.\n\n> 3) If multiple publications are specified in the CREATE SUBSCRIPTION\n> query, IIUC, with your patch, the query fails even if at least one of\n> the publications doesn't exist. Should we throw a warning in this case\n> and allow the subscription be created for other existing\n> publications?\n>\n\n+1. If all the publications do not exist, we should throw an error.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 22 Jan 2021 12:44:30 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Fri, Jan 22, 2021 at 10:14 AM japin <japinli@hotmail.com> wrote:\n> > 2) Can't we know whether the publications exist on the publisher with\n> > the existing (or modifying it a bit if required) query in\n> > fetch_table_list(), so that we can avoid making another connection to\n> > the publisher system from the subscriber?\n>\n> IIUC, the patch does not make another connection, it just execute a new\n> query in already connection. If we want to check publication existence\n> for ALTER SUBSCRIPTION ... SET PUBLICATION ... 
WITH (refresh = false)\n> we should make another connection.\n\nActually, I meant that we can avoid submitting another SQL query to\nthe publisher if we could manage to submit a single query that first\nchecks if a given publication exists in pg_publication and if yes\nreturns the tables associated with it from pg_publication_tables. Can\nwe modify the existing query in fetch_table_list that gets only the\ntable list from pg_publication_tables to see if the given publication\nexists in the pg_publication?\n\nYes you are right, if we were to check the existence of publications\nprovided with ALTER SUBSCRIPTION statements, we need to do\nwalrcv_connect, walrcv_exec. We could just call a common function from\nthere.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Jan 2021 12:12:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Fri, Jan 22, 2021 at 12:14 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Jan 22, 2021 at 10:14 AM japin <japinli@hotmail.com> wrote:\n> > > 2) Can't we know whether the publications exist on the publisher with\n> > > the existing (or modifying it a bit if required) query in\n> > > fetch_table_list(), so that we can avoid making another connection to\n> > > the publisher system from the subscriber?\n> >\n> > IIUC, the patch does not make another connection, it just execute a new\n> > query in already connection. If we want to check publication existence\n> > for ALTER SUBSCRIPTION ... SET PUBLICATION ... 
WITH (refresh = false)\n> > we should make another connection.\n>\n> Actually, I meant that we can avoid submitting another SQL query to\n> the publisher if we could manage to submit a single query that first\n> checks if a given publication exists in pg_publication and if yes\n> returns the tables associated with it from pg_publication_tables. Can\n> we modify the existing query in fetch_table_list that gets only the\n> table list from pg_publcation_tables to see if the given publication\n> exists in the pg_publication?\n>\nWhen I was implementing this, I had given it a thought on this. To do\nthat we might need some function/procedure to do this. I felt this\napproach is simpler and chose this approach.\nThoughts?\n\n> Yes you are right, if we were to check the existence of publications\n> provided with ALTER SUBSCRIPTION statements, we need to do\n> walrcv_connect, walrcv_exec. We could just call a common function from\n> there.\n>\nYes I agree this should be done in ALTER SUBSCRIPTION SET PUBLICATION\ncase also, currently we do if refresh is enabled, it should also be\ndone in ALTER SUBSCRIPTION mysub SET PUBLICATION mypub WITH (REFRESH =\nFALSE) also. I will include this in my next version of the patch.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Jan 2021 18:07:39 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Fri, Jan 22, 2021 at 10:14 AM japin <japinli@hotmail.com> wrote:\n>\n>\n> On Fri, 22 Jan 2021 at 00:51, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Thu, Jan 21, 2021 at 6:56 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > Creating/altering subscription is successful when we specify a\n> > > publication which does not exist in the publisher. 
I felt we should\n> >> throw an error in this case, that will help the user to check if there\n> >> is any typo in the create subscription command or to create the\n> >> publication before the subscription is created.\n> >> If the above analysis looks correct, then please find a patch that\n> >> checks if the specified publications are present in the publisher and\n> >> throws an error if any of the publications is missing in the\n> >> publisher.\n> >> Thoughts?\n> >\n> > I was having similar thoughts (while working on the logical\n> > replication bug on alter publication...drop table behaviour) on why\n> > create subscription succeeds without checking the publication\n> > existence. I checked in documentation, to find if there's a strong\n> > reason for that, but I couldn't. Maybe it's because of the principle\n> > \"first let users create subscriptions, later the publications can be\n> > created on the publisher system\", similar to this behaviour\n> > \"publications can be created without any tables attached to it\n> > initially, later they can be added\".\n> >\n> > Others may have better thoughts.\n> >\n> > If we do check publication existence for CREATE SUBSCRIPTION, I think\n> > we should also do it for ALTER SUBSCRIPTION ... SET PUBLICATION.\n> >\n>\n> Agreed. Current patch do not check publication existence for\n> ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).\n>\n> > I wonder, why isn't dropping a publication from a list of publications\n> > that are with subscription is not allowed?\n> >\n> > Some comments so far on the patch:\n> >\n> > 1) I see most of the code in the new function check_publications() and\n> > existing fetch_table_list() is the same. 
Can we have a generic\n> > function, with maybe a flag to separate out the logic specific for\n> > checking publication and fetching table list from the publisher.\n>\n> +1\n>\n> > 2) Can't we know whether the publications exist on the publisher with\n> > the existing (or modifying it a bit if required) query in\n> > fetch_table_list(), so that we can avoid making another connection to\n> > the publisher system from the subscriber?\n>\n> IIUC, the patch does not make another connection, it just execute a new\n> query in already connection. If we want to check publication existence\n> for ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false)\n> we should make another connection.\n>\n> > 3) If multiple publications are specified in the CREATE SUBSCRIPTION\n> > query, IIUC, with your patch, the query fails even if at least one of\n> > the publications doesn't exist. Should we throw a warning in this case\n> > and allow the subscription be created for other existing\n> > publications?\n> >\n>\n> +1. If all the publications do not exist, we should throw an error.\n\nI also felt if any of the publications are not there, we should throw an error.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Jan 2021 18:09:10 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Thu, Jan 21, 2021 at 10:21 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jan 21, 2021 at 6:56 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Creating/altering subscription is successful when we specify a\n> > publication which does not exist in the publisher. 
I felt we should\n> > throw an error in this case, that will help the user to check if there\n> > is any typo in the create subscription command or to create the\n> > publication before the subscription is created.\n> > If the above analysis looks correct, then please find a patch that\n> > checks if the specified publications are present in the publisher and\n> > throws an error if any of the publications is missing in the\n> > publisher.\n> > Thoughts?\n>\n> I was having similar thoughts (while working on the logical\n> replication bug on alter publication...drop table behaviour) on why\n> create subscription succeeds without checking the publication\n> existence. I checked in documentation, to find if there's a strong\n> reason for that, but I couldn't. Maybe it's because of the principle\n> \"first let users create subscriptions, later the publications can be\n> created on the publisher system\", similar to this behaviour\n> \"publications can be created without any tables attached to it\n> initially, later they can be added\".\n>\n> Others may have better thoughts.\n>\n> If we do check publication existence for CREATE SUBSCRIPTION, I think\n> we should also do it for ALTER SUBSCRIPTION ... SET PUBLICATION.\n>\n> I wonder, why isn't dropping a publication from a list of publications\n> that are with subscription is not allowed?\n>\n> Some comments so far on the patch:\n>\n> 1) I see most of the code in the new function check_publications() and\n> existing fetch_table_list() is the same. Can we have a generic\n> function, with maybe a flag to separate out the logic specific for\n> checking publication and fetching table list from the publisher.\n\nI have made the common code between the check_publications and\nfetch_table_list into a common function\nget_appended_publications_query. 
I felt the rest of the code is better\noff kept as it is.\nThe Attached patch has the changes for the same and also the change to\ncheck publication exists during alter subscription set publication.\nThoughts?\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 25 Jan 2021 13:10:02 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Mon, Jan 25, 2021 at 1:10 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Jan 21, 2021 at 10:21 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, Jan 21, 2021 at 6:56 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > Creating/altering subscription is successful when we specify a\n> > > publication which does not exist in the publisher. I felt we should\n> > > throw an error in this case, that will help the user to check if there\n> > > is any typo in the create subscription command or to create the\n> > > publication before the subscription is created.\n> > > If the above analysis looks correct, then please find a patch that\n> > > checks if the specified publications are present in the publisher and\n> > > throws an error if any of the publications is missing in the\n> > > publisher.\n> > > Thoughts?\n> >\n> > I was having similar thoughts (while working on the logical\n> > replication bug on alter publication...drop table behaviour) on why\n> > create subscription succeeds without checking the publication\n> > existence. I checked in documentation, to find if there's a strong\n> > reason for that, but I couldn't. 
Maybe it's because of the principle\n> > \"first let users create subscriptions, later the publications can be\n> > created on the publisher system\", similar to this behaviour\n> > \"publications can be created without any tables attached to it\n> > initially, later they can be added\".\n> >\n> > Others may have better thoughts.\n> >\n> > If we do check publication existence for CREATE SUBSCRIPTION, I think\n> > we should also do it for ALTER SUBSCRIPTION ... SET PUBLICATION.\n> >\n> > I wonder, why isn't dropping a publication from a list of publications\n> > that are with subscription is not allowed?\n> >\n> > Some comments so far on the patch:\n> >\n> > 1) I see most of the code in the new function check_publications() and\n> > existing fetch_table_list() is the same. Can we have a generic\n> > function, with maybe a flag to separate out the logic specific for\n> > checking publication and fetching table list from the publisher.\n>\n> I have made the common code between the check_publications and\n> fetch_table_list into a common function\n> get_appended_publications_query. I felt the rest of the code is better\n> off kept as it is.\n> The Attached patch has the changes for the same and also the change to\n> check publication exists during alter subscription set publication.\n> Thoughts?\n>\n\nSo basically, the create subscription will throw an error if the\npublication does not exist. So will you throw an error if we try to\ndrop the publication which is subscribed by some subscription? I mean\nbasically, you are creating a dependency that if you are creating a\nsubscription then there must be a publication that is not completely\ninsane but then we will have to disallow dropping the publication as\nwell. 
Am I missing something?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 14:42:00 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Mon, Jan 25, 2021 at 2:42 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Jan 25, 2021 at 1:10 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, Jan 21, 2021 at 10:21 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Thu, Jan 21, 2021 at 6:56 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > Creating/altering subscription is successful when we specify a\n> > > > publication which does not exist in the publisher. I felt we should\n> > > > throw an error in this case, that will help the user to check if there\n> > > > is any typo in the create subscription command or to create the\n> > > > publication before the subscription is created.\n> > > > If the above analysis looks correct, then please find a patch that\n> > > > checks if the specified publications are present in the publisher and\n> > > > throws an error if any of the publications is missing in the\n> > > > publisher.\n> > > > Thoughts?\n> > >\n> > > I was having similar thoughts (while working on the logical\n> > > replication bug on alter publication...drop table behaviour) on why\n> > > create subscription succeeds without checking the publication\n> > > existence. I checked in documentation, to find if there's a strong\n> > > reason for that, but I couldn't. 
Maybe it's because of the principle\n> > > \"first let users create subscriptions, later the publications can be\n> > > created on the publisher system\", similar to this behaviour\n> > > \"publications can be created without any tables attached to it\n> > > initially, later they can be added\".\n> > >\n> > > Others may have better thoughts.\n> > >\n> > > If we do check publication existence for CREATE SUBSCRIPTION, I think\n> > > we should also do it for ALTER SUBSCRIPTION ... SET PUBLICATION.\n> > >\n> > > I wonder, why isn't dropping a publication from a list of publications\n> > > that are with subscription is not allowed?\n> > >\n> > > Some comments so far on the patch:\n> > >\n> > > 1) I see most of the code in the new function check_publications() and\n> > > existing fetch_table_list() is the same. Can we have a generic\n> > > function, with maybe a flag to separate out the logic specific for\n> > > checking publication and fetching table list from the publisher.\n> >\n> > I have made the common code between the check_publications and\n> > fetch_table_list into a common function\n> > get_appended_publications_query. I felt the rest of the code is better\n> > off kept as it is.\n> > The Attached patch has the changes for the same and also the change to\n> > check publication exists during alter subscription set publication.\n> > Thoughts?\n> >\n>\n> So basically, the create subscription will throw an error if the\n> publication does not exist. So will you throw an error if we try to\n> drop the publication which is subscribed by some subscription? I mean\n> basically, you are creating a dependency that if you are creating a\n> subscription then there must be a publication that is not completely\n> insane but then we will have to disallow dropping the publication as\n> well. Am I missing something?\n\nDo you mean DROP PUBLICATION non_existent_publication;?\n\nOr\n\nDo you mean when we drop publications from a subscription? 
If yes, do\nwe have a way to drop a publication from the subscription? See below\none of my earlier questions on this.\n\"I wonder, why isn't dropping a publication from a list of\npublications that are with subscription is not allowed?\"\nAt least, I see no ALTER SUBSCRIPTION ... DROP PUBLICATION mypub1 or\nsomething similar?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 14:48:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Mon, Jan 25, 2021 at 2:48 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jan 25, 2021 at 2:42 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Jan 25, 2021 at 1:10 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Thu, Jan 21, 2021 at 10:21 PM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > On Thu, Jan 21, 2021 at 6:56 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > > > Hi,\n> > > > >\n> > > > > Creating/altering subscription is successful when we specify a\n> > > > > publication which does not exist in the publisher. 
I felt we should\n> > > > > throw an error in this case, that will help the user to check if there\n> > > > > is any typo in the create subscription command or to create the\n> > > > > publication before the subscription is created.\n> > > > > If the above analysis looks correct, then please find a patch that\n> > > > > checks if the specified publications are present in the publisher and\n> > > > > throws an error if any of the publications is missing in the\n> > > > > publisher.\n> > > > > Thoughts?\n> > > >\n> > > > I was having similar thoughts (while working on the logical\n> > > > replication bug on alter publication...drop table behaviour) on why\n> > > > create subscription succeeds without checking the publication\n> > > > existence. I checked in documentation, to find if there's a strong\n> > > > reason for that, but I couldn't. Maybe it's because of the principle\n> > > > \"first let users create subscriptions, later the publications can be\n> > > > created on the publisher system\", similar to this behaviour\n> > > > \"publications can be created without any tables attached to it\n> > > > initially, later they can be added\".\n> > > >\n> > > > Others may have better thoughts.\n> > > >\n> > > > If we do check publication existence for CREATE SUBSCRIPTION, I think\n> > > > we should also do it for ALTER SUBSCRIPTION ... SET PUBLICATION.\n> > > >\n> > > > I wonder, why isn't dropping a publication from a list of publications\n> > > > that are with subscription is not allowed?\n> > > >\n> > > > Some comments so far on the patch:\n> > > >\n> > > > 1) I see most of the code in the new function check_publications() and\n> > > > existing fetch_table_list() is the same. 
Can we have a generic\n> > > > function, with maybe a flag to separate out the logic specific for\n> > > > checking publication and fetching table list from the publisher.\n> > >\n> > > I have made the common code between the check_publications and\n> > > fetch_table_list into a common function\n> > > get_appended_publications_query. I felt the rest of the code is better\n> > > off kept as it is.\n> > > The Attached patch has the changes for the same and also the change to\n> > > check publication exists during alter subscription set publication.\n> > > Thoughts?\n> > >\n> >\n> > So basically, the create subscription will throw an error if the\n> > publication does not exist. So will you throw an error if we try to\n> > drop the publication which is subscribed by some subscription? I mean\n> > basically, you are creating a dependency that if you are creating a\n> > subscription then there must be a publication that is not completely\n> > insane but then we will have to disallow dropping the publication as\n> > well. Am I missing something?\n>\n> Do you mean DROP PUBLICATION non_existent_publication;?\n>\n> Or\n>\n> Do you mean when we drop publications from a subscription?\n\nI mean it doesn’t seem right to disallow to create the subscription if\nthe publisher doesn't exist, and my reasoning was even though the\npublisher exists while creating the subscription you might drop it\nlater right?. So basically, now also we can create the same scenario\nthat a subscription may exist for the publication which does not\nexist.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 15:07:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Mon, Jan 25, 2021 at 3:07 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > So basically, the create subscription will throw an error if the\n> > > publication does not exist. So will you throw an error if we try to\n> > > drop the publication which is subscribed by some subscription? I mean\n> > > basically, you are creating a dependency that if you are creating a\n> > > subscription then there must be a publication that is not completely\n> > > insane but then we will have to disallow dropping the publication as\n> > > well. Am I missing something?\n> >\n> > Do you mean DROP PUBLICATION non_existent_publication;?\n> >\n> > Or\n> >\n> > Do you mean when we drop publications from a subscription?\n>\n> I mean it doesn’t seem right to disallow to create the subscription if\n> the publisher doesn't exist, and my reasoning was even though the\n> publisher exists while creating the subscription you might drop it\n> later right?. So basically, now also we can create the same scenario\n> that a subscription may exist for the publication which does not\n> exist.\n\nYes, the above scenario can be created even now. If a publication is\ndropped in the publisher system, then it will not replicate/publish\nthe changes for that publication (publication_invalidation_cb,\nrel_sync_cache_publication_cb, LoadPublications in\nget_rel_sync_entry), so subscriber doesn't receive them. But the\nsubscription can still contain that dropped publication in it's list\nof publications.\n\nThe patch proposed in this thread, just checks while creation/altering\nof the subscription on the subscriber system whether or not the\npublication exists on the publisher system. This is one way\ndependency. But given the above scenario, there can exist another\ndependency i.e. publisher dropping the publisher at any time. So to\nmake it a complete solution i.e. 
not allowing non-existent\npublications from the list of publications in the subscription, we\nneed to detect when the publications are dropped in the publisher and\nwe should, may be on a next connection to the subscriber, also look at\nthe subscription for that dropped publication, if exists remove it.\nBut that's an overkill and impractical I feel. Thoughts?\n\nI also feel the best way to remove the confusion is to document why we\nallow creating subscriptions even when the specified publications\ndon't exist on the publisher system? Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 15:38:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Mon, Jan 25, 2021 at 3:38 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jan 25, 2021 at 3:07 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > So basically, the create subscription will throw an error if the\n> > > > publication does not exist. So will you throw an error if we try to\n> > > > drop the publication which is subscribed by some subscription? I mean\n> > > > basically, you are creating a dependency that if you are creating a\n> > > > subscription then there must be a publication that is not completely\n> > > > insane but then we will have to disallow dropping the publication as\n> > > > well. Am I missing something?\n> > >\n> > > Do you mean DROP PUBLICATION non_existent_publication;?\n> > >\n> > > Or\n> > >\n> > > Do you mean when we drop publications from a subscription?\n> >\n> > I mean it doesn’t seem right to disallow to create the subscription if\n> > the publisher doesn't exist, and my reasoning was even though the\n> > publisher exists while creating the subscription you might drop it\n> > later right?. 
So basically, now also we can create the same scenario\n> > that a subscription may exist for the publication which does not\n> > exist.\n>\n> Yes, the above scenario can be created even now. If a publication is\n> dropped in the publisher system, then it will not replicate/publish\n> the changes for that publication (publication_invalidation_cb,\n> rel_sync_cache_publication_cb, LoadPublications in\n> get_rel_sync_entry), so subscriber doesn't receive them. But the\n> subscription can still contain that dropped publication in it's list\n> of publications.\n>\n> The patch proposed in this thread, just checks while creation/altering\n> of the subscription on the subscriber system whether or not the\n> publication exists on the publisher system. This is one way\n> dependency. But given the above scenario, there can exist another\n> dependency i.e. publisher dropping the publisher at any time. So to\n> make it a complete solution i.e. not allowing non-existent\n> publications from the list of publications in the subscription, we\n> need to detect when the publications are dropped in the publisher and\n> we should, may be on a next connection to the subscriber, also look at\n> the subscription for that dropped publication, if exists remove it.\n> But that's an overkill and impractical I feel. Thoughts?\n>\n> I also feel the best way to remove the confusion is to document why we\n> allow creating subscriptions even when the specified publications\n> don't exist on the publisher system? Thoughts?\n\nYes, that was my point that there is no point in solving it in some\ncases and it can exist in other cases. So I am fine with documenting\nthe behavior.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 15:54:19 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "\nOn Mon, 25 Jan 2021 at 17:18, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Mon, Jan 25, 2021 at 2:42 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> On Mon, Jan 25, 2021 at 1:10 PM vignesh C <vignesh21@gmail.com> wrote:\n>> >\n>> > On Thu, Jan 21, 2021 at 10:21 PM Bharath Rupireddy\n>> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> > >\n>> > > On Thu, Jan 21, 2021 at 6:56 PM vignesh C <vignesh21@gmail.com> wrote:\n>> > > >\n>> > > > Hi,\n>> > > >\n>> > > > Creating/altering subscription is successful when we specify a\n>> > > > publication which does not exist in the publisher. I felt we should\n>> > > > throw an error in this case, that will help the user to check if there\n>> > > > is any typo in the create subscription command or to create the\n>> > > > publication before the subscription is created.\n>> > > > If the above analysis looks correct, then please find a patch that\n>> > > > checks if the specified publications are present in the publisher and\n>> > > > throws an error if any of the publications is missing in the\n>> > > > publisher.\n>> > > > Thoughts?\n>> > >\n>> > > I was having similar thoughts (while working on the logical\n>> > > replication bug on alter publication...drop table behaviour) on why\n>> > > create subscription succeeds without checking the publication\n>> > > existence. I checked in documentation, to find if there's a strong\n>> > > reason for that, but I couldn't. Maybe it's because of the principle\n>> > > \"first let users create subscriptions, later the publications can be\n>> > > created on the publisher system\", similar to this behaviour\n>> > > \"publications can be created without any tables attached to it\n>> > > initially, later they can be added\".\n>> > >\n>> > > Others may have better thoughts.\n>> > >\n>> > > If we do check publication existence for CREATE SUBSCRIPTION, I think\n>> > > we should also do it for ALTER SUBSCRIPTION ... 
SET PUBLICATION.\n>> > >\n>> > > I wonder, why isn't dropping a publication from a list of publications\n>> > > that are with subscription is not allowed?\n>> > >\n>> > > Some comments so far on the patch:\n>> > >\n>> > > 1) I see most of the code in the new function check_publications() and\n>> > > existing fetch_table_list() is the same. Can we have a generic\n>> > > function, with maybe a flag to separate out the logic specific for\n>> > > checking publication and fetching table list from the publisher.\n>> >\n>> > I have made the common code between the check_publications and\n>> > fetch_table_list into a common function\n>> > get_appended_publications_query. I felt the rest of the code is better\n>> > off kept as it is.\n>> > The Attached patch has the changes for the same and also the change to\n>> > check publication exists during alter subscription set publication.\n>> > Thoughts?\n>> >\n>>\n>> So basically, the create subscription will throw an error if the\n>> publication does not exist. So will you throw an error if we try to\n>> drop the publication which is subscribed by some subscription? I mean\n>> basically, you are creating a dependency that if you are creating a\n>> subscription then there must be a publication that is not completely\n>> insane but then we will have to disallow dropping the publication as\n>> well. Am I missing something?\n>\n> Do you mean DROP PUBLICATION non_existent_publication;?\n>\n> Or\n>\n> Do you mean when we drop publications from a subscription? If yes, do\n> we have a way to drop a publication from the subscription? See below\n> one of my earlier questions on this.\n> \"I wonder, why isn't dropping a publication from a list of\n> publications that are with subscription is not allowed?\"\n> At least, I see no ALTER SUBSCRIPTION ... DROP PUBLICATION mypub1 or\n> something similar?\n>\n\nWhy we do not support ALTER SUBSCRIPTION...ADD/DROP PUBLICATION? 
When we\nhave multiple publications in subscription, but I want to add/drop a single\npublication, it is convenient. The ALTER SUBSCRIPTION...SET PUBLICATION...\nshould supply the complete list of publications.\n\nSorry, this question is unrelated to this subject.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Mon, 25 Jan 2021 19:48:24 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Mon, Jan 25, 2021 at 5:18 PM japin <japinli@hotmail.com> wrote:\n> > Do you mean when we drop publications from a subscription? 
The ALTER SUBSCRIPTION...SET PUBLICATION...\n> should supply the completely publications.\n\nLooks like the way to drop/add publication from the list of\npublications in subscription requires users to specify all the list of\npublications currently exists +/- the new publication that needs to be\nadded/dropped:\n\nCREATE SUBSCRIPTION mysub1 CONNECTION 'host=localhost port=5432\ndbname=postgres' PUBLICATION mypub1, mypub2, mypu3, mypub4, mypub5;\npostgres=# select subpublications from pg_subscription;\n subpublications\n-------------------------------------\n {mypub1,mypub2,mypu3,mypub4,mypub5}\n(1 row)\n\nSay, I want to drop mypub4:\n\nALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypu3, mypub5;\npostgres=# select subpublications from pg_subscription;\n subpublications\n------------------------------\n {mypub1,mypub2,mypu3,mypub5}\n\nSay, I want toa dd mypub4 and mypub6:\nALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypu3,\nmypub5, mypub4, mypub6;\npostgres=# select subpublications from pg_subscription;\n subpublications\n--------------------------------------------\n {mypub1,mypub2,mypu3,mypub5,mypub4,mypub6}\n(1 row)\n\nIt will be good to have something like:\n\nALTER SUBSCRIPTION mysub1 ADD PUBLICATION mypub1, mypub3; which will\nthe publications to subscription if not added previously.\n\nALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub1, mypub3; which will\ndrop the publications from subscription if they exist in the\nsubscription's list of publications.\n\nBut I'm really not sure why the above syntax was not added earlier. We\nmay be missing something here.\n\n> Sorry, this question is unrelated with this subject.\n\nYes, IMO it can definitely be discussed in another thread. 
It will be\ngood to get a separate opinion for this.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 19:25:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Mon, Jan 25, 2021 at 5:18 PM japin <japinli@hotmail.com> wrote:\n>\n>\n> On Mon, 25 Jan 2021 at 17:18, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Mon, Jan 25, 2021 at 2:42 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >>\n> >> On Mon, Jan 25, 2021 at 1:10 PM vignesh C <vignesh21@gmail.com> wrote:\n> >> >\n> >> > On Thu, Jan 21, 2021 at 10:21 PM Bharath Rupireddy\n> >> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> > >\n> >> > > On Thu, Jan 21, 2021 at 6:56 PM vignesh C <vignesh21@gmail.com> wrote:\n> >> > > >\n> >> > > > Hi,\n> >> > > >\n> >> > > > Creating/altering subscription is successful when we specify a\n> >> > > > publication which does not exist in the publisher. I felt we should\n> >> > > > throw an error in this case, that will help the user to check if there\n> >> > > > is any typo in the create subscription command or to create the\n> >> > > > publication before the subscription is created.\n> >> > > > If the above analysis looks correct, then please find a patch that\n> >> > > > checks if the specified publications are present in the publisher and\n> >> > > > throws an error if any of the publications is missing in the\n> >> > > > publisher.\n> >> > > > Thoughts?\n> >> > >\n> >> > > I was having similar thoughts (while working on the logical\n> >> > > replication bug on alter publication...drop table behaviour) on why\n> >> > > create subscription succeeds without checking the publication\n> >> > > existence. I checked in documentation, to find if there's a strong\n> >> > > reason for that, but I couldn't. 
Maybe it's because of the principle\n> >> > > \"first let users create subscriptions, later the publications can be\n> >> > > created on the publisher system\", similar to this behaviour\n> >> > > \"publications can be created without any tables attached to it\n> >> > > initially, later they can be added\".\n> >> > >\n> >> > > Others may have better thoughts.\n> >> > >\n> >> > > If we do check publication existence for CREATE SUBSCRIPTION, I think\n> >> > > we should also do it for ALTER SUBSCRIPTION ... SET PUBLICATION.\n> >> > >\n> >> > > I wonder, why isn't dropping a publication from a list of publications\n> >> > > that are with subscription is not allowed?\n> >> > >\n> >> > > Some comments so far on the patch:\n> >> > >\n> >> > > 1) I see most of the code in the new function check_publications() and\n> >> > > existing fetch_table_list() is the same. Can we have a generic\n> >> > > function, with maybe a flag to separate out the logic specific for\n> >> > > checking publication and fetching table list from the publisher.\n> >> >\n> >> > I have made the common code between the check_publications and\n> >> > fetch_table_list into a common function\n> >> > get_appended_publications_query. I felt the rest of the code is better\n> >> > off kept as it is.\n> >> > The Attached patch has the changes for the same and also the change to\n> >> > check publication exists during alter subscription set publication.\n> >> > Thoughts?\n> >> >\n> >>\n> >> So basically, the create subscription will throw an error if the\n> >> publication does not exist. So will you throw an error if we try to\n> >> drop the publication which is subscribed by some subscription? I mean\n> >> basically, you are creating a dependency that if you are creating a\n> >> subscription then there must be a publication that is not completely\n> >> insane but then we will have to disallow dropping the publication as\n> >> well. 
Am I missing something?\n> >\n> > Do you mean DROP PUBLICATION non_existent_publication;?\n> >\n> > Or\n> >\n> > Do you mean when we drop publications from a subscription? If yes, do\n> > we have a way to drop a publication from the subscription? See below\n> > one of my earlier questions on this.\n> > \"I wonder, why isn't dropping a publication from a list of\n> > publications that are with subscription is not allowed?\"\n> > At least, I see no ALTER SUBSCRIPTION ... DROP PUBLICATION mypub1 or\n> > something similar?\n> >\n>\n> Why we do not support ALTER SUBSCRIPTION...ADD/DROP PUBLICATION? When we\n> have multiple publications in subscription, but I want to add/drop a single\n> publication, it is conveient. The ALTER SUBSCRIPTION...SET PUBLICATION...\n> should supply the completely publications.\n>\n> Sorry, this question is unrelated with this subject.\n\nPlease start a new thread for this, let's discuss this separately.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 22:31:54 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Mon, Jan 25, 2021 at 3:07 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Jan 25, 2021 at 2:48 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, Jan 25, 2021 at 2:42 PM Dilip Kumar <dilipbalaut@gmail.com>\nwrote:\n> > >\n> > > On Mon, Jan 25, 2021 at 1:10 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Thu, Jan 21, 2021 at 10:21 PM Bharath Rupireddy\n> > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Jan 21, 2021 at 6:56 PM vignesh C <vignesh21@gmail.com>\nwrote:\n> > > > > >\n> > > > > > Hi,\n> > > > > >\n> > > > > > Creating/altering subscription is successful when we specify a\n> > > > > > publication which does not exist in the publisher. I felt we\nshould\n> > > > > > throw an error in this case, that will help the user to check\nif there\n> > > > > > is any typo in the create subscription command or to create the\n> > > > > > publication before the subscription is created.\n> > > > > > If the above analysis looks correct, then please find a patch\nthat\n> > > > > > checks if the specified publications are present in the\npublisher and\n> > > > > > throws an error if any of the publications is missing in the\n> > > > > > publisher.\n> > > > > > Thoughts?\n> > > > >\n> > > > > I was having similar thoughts (while working on the logical\n> > > > > replication bug on alter publication...drop table behaviour) on\nwhy\n> > > > > create subscription succeeds without checking the publication\n> > > > > existence. I checked in documentation, to find if there's a strong\n> > > > > reason for that, but I couldn't. 
Maybe it's because of the\nprinciple\n> > > > > \"first let users create subscriptions, later the publications can\nbe\n> > > > > created on the publisher system\", similar to this behaviour\n> > > > > \"publications can be created without any tables attached to it\n> > > > > initially, later they can be added\".\n> > > > >\n> > > > > Others may have better thoughts.\n> > > > >\n> > > > > If we do check publication existence for CREATE SUBSCRIPTION, I\nthink\n> > > > > we should also do it for ALTER SUBSCRIPTION ... SET PUBLICATION.\n> > > > >\n> > > > > I wonder, why isn't dropping a publication from a list of\npublications\n> > > > > that are with subscription is not allowed?\n> > > > >\n> > > > > Some comments so far on the patch:\n> > > > >\n> > > > > 1) I see most of the code in the new function\ncheck_publications() and\n> > > > > existing fetch_table_list() is the same. Can we have a generic\n> > > > > function, with maybe a flag to separate out the logic specific for\n> > > > > checking publication and fetching table list from the publisher.\n> > > >\n> > > > I have made the common code between the check_publications and\n> > > > fetch_table_list into a common function\n> > > > get_appended_publications_query. I felt the rest of the code is\nbetter\n> > > > off kept as it is.\n> > > > The Attached patch has the changes for the same and also the change\nto\n> > > > check publication exists during alter subscription set publication.\n> > > > Thoughts?\n> > > >\n> > >\n> > > So basically, the create subscription will throw an error if the\n> > > publication does not exist. So will you throw an error if we try to\n> > > drop the publication which is subscribed by some subscription? I mean\n> > > basically, you are creating a dependency that if you are creating a\n> > > subscription then there must be a publication that is not completely\n> > > insane but then we will have to disallow dropping the publication as\n> > > well. 
Am I missing something?\n> >\n> > Do you mean DROP PUBLICATION non_existent_publication;?\n> >\n> > Or\n> >\n> > Do you mean when we drop publications from a subscription?\n>\n> I mean it doesn’t seem right to disallow to create the subscription if\n> the publisher doesn't exist, and my reasoning was even though the\n> publisher exists while creating the subscription you might drop it\n> later right?. So basically, now also we can create the same scenario\n> that a subscription may exist for the publication which does not\n> exist.\n>\n\nI would like to defer on documentation for this.\nI feel we should have the behavior similar to publication tables as given\nbelow, then it will be consistent and easier for the users:\n\nThis is the behavior in case of table:\n\nStep 1:\nPUBLISHER SIDE:\ncreate table t1(c1 int);\ncreate table t2(c1 int);\nCREATE PUBLICATION mypub1 for table t1,t2;\n-- All above commands succeeds\n\nStep 2:\nSUBSCRIBER SIDE:\n-- Create subscription without creating tables will result in error:\nCREATE SUBSCRIPTION mysub1 CONNECTION 'dbname=source_rep host=localhost user=vignesh port=5432' PUBLICATION mypub1;\nERROR: relation "public.t2" does not exist\ncreate table t1(c1 int);\ncreate table t2(c1 int);\n\nCREATE SUBSCRIPTION mysub1 CONNECTION 'dbname=source_rep host=localhost user=vignesh port=5432' PUBLICATION mypub1;\n\npostgres=# select * from pg_subscription;\n oid | subdbid | subname | subowner | subenabled | subbinary | substream | subconninfo | subslotname | subsynccommit | subpublications\n-------+---------+---------+----------+------------+-----------+-----------+---------------------------------------------------------+-------------+---------------+-----------------\n 16392 | 13756 | mysub1 | 10 | t | f | f | dbname=source_rep host=localhost user=vignesh port=5432 | mysub1 | off | {mypub1}\n(1 row)\n\npostgres=# select *,srrelid::oid::regclass from pg_subscription_rel;\n srsubid | srrelid | srsubstate | srsublsn | srrelid\n---------+---------+------------+-----------+---------\n 16392 | 16389 | r | 0/1608BD0 | t2\n 16392 | 16384 | r | 0/1608BD0 | t1\n(2 rows)\n\nStep 3:\nPUBLISHER:\ndrop table t2;\ncreate table t3;\nCREATE PUBLICATION mypub2 for table t1,t3;\n\nStep 4:\nSUBSCRIBER:\npostgres=# select *,srrelid::oid::regclass from pg_subscription_rel;\n srsubid | srrelid | srsubstate | srsublsn | srrelid\n---------+---------+------------+-----------+---------\n 16392 | 16389 | r | 0/1608BD0 | t2\n 16392 | 16384 | r | 0/1608BD0 | t1\n(2 rows)\n\npostgres=# alter subscription mysub1 refresh publication ;\nALTER SUBSCRIPTION\n-- Subscription relation will be updated.\npostgres=# select *,srrelid::oid::regclass from pg_subscription_rel;\n srsubid | srrelid | srsubstate | srsublsn | srrelid\n---------+---------+------------+-----------+---------\n 16392 | 16384 | r | 0/1608BD0 | t1\n(1 row)\n\n-- Alter subscription fails while setting publication having a table that does not exist\npostgres=# alter subscription mysub1 set publication mysub2;\nERROR: relation "public.t3" does not exist\n\nTo maintain consistency, we should have similar behavior in case of\npublication too.\nIf a publication which does not exist is specified during create\nsubscription, then we should throw an error similar to step 2 behavior.\nSimilarly if a publication which does not exist is specified during alter\nsubscription, then we should throw an error similar to step 4 behavior. 
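To make the parallel concrete, here is a minimal sketch of the check being proposed for publications. The subscription name and the misspelled publication are hypothetical, and the error text is illustrative (following the error message discussed later in this thread), not committed behaviour:

```sql
-- Assume only mypub1 and mypub2 exist on the publisher.
-- Proposed: fail creation when a listed publication is missing,
-- mirroring the missing-table error shown in step 2 above.
CREATE SUBSCRIPTION mysub2
    CONNECTION 'dbname=source_rep host=localhost user=vignesh port=5432'
    PUBLICATION mypub1, mypub_typo;
-- Proposed: ERROR:  publication "mypub_typo" does not exist in the publisher
```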
If\npublication is dropped after subscription is created, this should be\nremoved when an alter subscription subname refresh publication is performed\nsimilar to step 4.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 25 Jan 2021 22:32:02 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription."
}, { "msg_contents": "\nOn Mon, 25 Jan 2021 at 21:55, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Mon, Jan 25, 2021 at 5:18 PM japin <japinli@hotmail.com> wrote:\n>> > Do you mean when we drop publications from a subscription? If yes, do\n>> > we have a way to drop a publication from the subscription? See below\n>> > one of my earlier questions on this.\n>> > \"I wonder, why isn't dropping a publication from a list of\n>> > publications that are with subscription is not allowed?\"\n>> > At least, I see no ALTER SUBSCRIPTION ... DROP PUBLICATION mypub1 or\n>> > something similar?\n>> >\n>>\n>> Why do we not support ALTER SUBSCRIPTION...ADD/DROP PUBLICATION? When we\n>> have multiple publications in subscription, but I want to add/drop a single\n>> publication, it is convenient. The ALTER SUBSCRIPTION...SET PUBLICATION...\n>> should supply the complete list of publications.\n>\n> Looks like the way to drop/add publication from the list of\n> publications in subscription requires users to specify all the list of\n> publications currently exists +/- the new publication that needs to be\n> added/dropped:\n>\n> CREATE SUBSCRIPTION mysub1 CONNECTION 'host=localhost port=5432\n> dbname=postgres' PUBLICATION mypub1, mypub2, mypu3, mypub4, mypub5;\n> postgres=# select subpublications from pg_subscription;\n> subpublications\n> -------------------------------------\n> {mypub1,mypub2,mypu3,mypub4,mypub5}\n> (1 row)\n>\n> Say, I want to drop mypub4:\n>\n> ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypu3, mypub5;\n> postgres=# select subpublications from pg_subscription;\n> subpublications\n> ------------------------------\n> {mypub1,mypub2,mypu3,mypub5}\n>\n> Say, I want to add mypub4 and mypub6:\n> ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypu3,\n> mypub5, mypub4, mypub6;\n> postgres=# select subpublications from pg_subscription;\n> subpublications\n> --------------------------------------------\n> 
{mypub1,mypub2,mypu3,mypub5,mypub4,mypub6}\n> (1 row)\n>\n> It will be good to have something like:\n>\n> ALTER SUBSCRIPTION mysub1 ADD PUBLICATION mypub1, mypub3; which will\n> add the publications to subscription if not added previously.\n>\n> ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub1, mypub3; which will\n> drop the publications from subscription if they exist in the\n> subscription's list of publications.\n>\n> But I'm really not sure why the above syntax was not added earlier. We\n> may be missing something here.\n>\n>> Sorry, this question is unrelated with this subject.\n>\n> Yes, IMO it can definitely be discussed in another thread. It will be\n> good to get a separate opinion for this.\n>\n\nI started a new thread [1] for this, please have a look.\n\n[1] - https://www.postgresql.org/message-id/MEYP282MB166939D0D6C480B7FBE7EFFBB6BC0@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Tue, 26 Jan 2021 12:05:45 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Mon, Jan 25, 2021 at 10:32 PM vignesh C <vignesh21@gmail.com> wrote:\n> > I mean it doesn’t seem right to disallow to create the subscription if\n> > the publisher doesn't exist, and my reasoning was even though the\n> > publisher exists while creating the subscription you might drop it\n> > later right?. 
So basically, now also we can create the same scenario\n> > that a subscription may exist for the publication which does not\n> > exist.\n> >\n>\n> I would like to defer on documentation for this.\n> I feel we should have the behavior similar to publication tables as given below, then it will be consistent and easier for the users:\n>\n> This is the behavior in case of table:\n> Step 1:\n> PUBLISHER SIDE:\n> create table t1(c1 int);\n> create table t2(c1 int);\n> CREATE PUBLICATION mypub1 for table t1,t2;\n> -- All above commands succeeds\n> Step 2:\n> SUBSCRIBER SIDE:\n> -- Create subscription without creating tables will result in error:\n> CREATE SUBSCRIPTION mysub1 CONNECTION 'dbname=source_rep host=localhost user=vignesh port=5432' PUBLICATION mypub1;\n> ERROR: relation \"public.t2\" does not exist\n> create table t1(c1 int);\n> create table t2(c1 int);\n>\n> CREATE SUBSCRIPTION mysub1 CONNECTION 'dbname=source_rep host=localhost user=vignesh port=5432' PUBLICATION mypub1;\n>\n> postgres=# select * from pg_subscription;\n> oid | subdbid | subname | subowner | subenabled | subbinary | substream | subconninfo | subslotname | subsynccommit | subpublications\n> -------+---------+---------+----------+------------+-----------+-----------+---------------------------------------------------------+-------------+---------------+-----------------\n> 16392 | 13756 | mysub1 | 10 | t | f | f | dbname=source_rep host=localhost user=vignesh port=5432 | mysub1 | off | {mypub1}\n> (1 row)\n>\n> postgres=# select *,srrelid::oid::regclass from pg_subscription_rel;\n> srsubid | srrelid | srsubstate | srsublsn | srrelid\n> ---------+---------+------------+-----------+---------\n> 16392 | 16389 | r | 0/1608BD0 | t2\n> 16392 | 16384 | r | 0/1608BD0 | t1\n>\n> (2 rows)\n> Step 3:\n> PUBLISHER:\n> drop table t2;\n> create table t3;\n> CREATE PUBLICATION mypub2 for table t1,t3;\n>\n> Step 4:\n> SUBSCRIBER:\n> postgres=# select *,srrelid::oid::regclass from pg_subscription_rel;\n> 
srsubid | srrelid | srsubstate | srsublsn | srrelid\n> ---------+---------+------------+-----------+---------\n> 16392 | 16389 | r | 0/1608BD0 | t2\n> 16392 | 16384 | r | 0/1608BD0 | t1\n>\n> (2 rows)\n>\n> postgres=# alter subscription mysub1 refresh publication ;\n> ALTER SUBSCRIPTION\n>\n> -- Subscription relation will be updated.\n> postgres=# select *,srrelid::oid::regclass from pg_subscription_rel;\n> srsubid | srrelid | srsubstate | srsublsn | srrelid\n> ---------+---------+------------+-----------+---------\n> 16392 | 16384 | r | 0/1608BD0 | t1\n> (1 row)\n>\n>\n> -- Alter subscription fails while setting publication having a table that does not exist\n> postgres=# alter subscription mysub1 set publication mysub2;\n> ERROR: relation \"public.t3\" does not exist\n>\n> To maintain consistency, we should have similar behavior in case of publication too.\n> If a publication which does not exist is specified during create subscription, then we should throw an error similar to step 2 behavior. Similarly if a publication which does not exist is specified during alter subscription, then we should throw an error similar to step 4 behavior. If publication is dropped after subscription is created, this should be removed when an alter subscription subname refresh publication is performed similar to step 4.\n> Thoughts?\n\nIIUC, your idea is to check if the publications (that are associated\nwith a subscription) are present in the publisher or not during ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION;. If that's the case, then I have\n\nscenario:\n1) subscription is created with pub1, pub2 and assume both the\npublications are present in the publisher\n2) pub1 and pub2 tables data is replicated properly\n3) pub2 is dropped on the publisher\n4) run alter subscription .. 
refresh publication on the subscriber, so\nthat the pub2 tables will be removed from the subscriber\n5) for some reason, user creates pub2 again on the publisher and want\nto replicated some tables\n6) run alter subscription .. refresh publication on the subscriber, so\nthat the pub2 tables will be added to the subscriber table list\n\nNow, if we remove the dropped publication pub2 in step 4 from the\nsubscription list(as per your above analysis and suggestion), then\nafter step 5, users will need to add the publication pub2 to the\nsubscription again. I feel this is a change in the current behaviour.\nThe existing behaviour on master doesn't mandate this as the dropped\npublications are not removed from the subscription list at all.\n\nTo not mandate any new behaviour, I would suggest to have a new option\nfor ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH\n(remove_dropped_publications = false). The new option\nremove_dropped_publications will have a default value false, when set\nto true it will check if the publications that are present in the\nsubscription list are actually existing on the publisher or not, if\nnot remove them from the list. And also in the documentation we need\nto clearly mention the consequence of this new option setting to true,\nthat is, the dropped publications if created again will need to be\nadded to the subscription list again.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Feb 2021 10:43:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Wed, Feb 3, 2021, at 2:13 AM, Bharath Rupireddy wrote:\n> On Mon, Jan 25, 2021 at 10:32 PM vignesh C <vignesh21@gmail.com> wrote:\n> \n> > If a publication which does not exist is specified during create subscription, then we should throw an error similar to step 2 behavior. Similarly if a publication which does not exist is specified during alter subscription, then we should throw an error similar to step 4 behavior. If publication is dropped after subscription is created, this should be removed when an alter subscription subname refresh publication is performed similar to step 4.\n> > Thoughts?\nPostgres implemented a replication mechanism that is decoupled which means that \nboth parties can perform \"uncoordinated\" actions. As you shown, the CREATE \nSUBSCRIPTION informing a non-existent publication is one of them. I think that\nbeing permissive in some situations can prevent you from painting yourself into\na corner. Even if you try to be strict on the subscriber side, the other side \n(publisher) can impose you additional complexity. \n \nYou are arguing that in the initial phase, the CREATE SUBSCRIPTION has a strict\nmechanism. It is a fair point. However, impose this strictness for the other \nSUBSCRIPTION commands should be carefully forethought. If we go that route, I \nsuggest to have the current behavior as an option. The reasons are: (i) it is \nbackward-compatible, (ii) it allows us some known flexibility (non-existent \npublication), and (iii) it would probably allow us to fix a scenario created by \nthe strict mode. This strict mode can be implemented via new parameter \nvalidate_publication (ENOCAFFEINE to propose a better name) that checks if the \npublication is available when you executed the CREATE SUBSCRIPTION. Similar \nparameter can be used in ALTER SUBSCRIPTION ... SET PUBLICATION and ALTER \nSUBSCRIPTION ... 
REFRESH PUBLICATION.\n\n> To not mandate any new behaviour, I would suggest to have a new option\n> for ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH\n> (remove_dropped_publications = false). The new option\n> remove_dropped_publications will have a default value false, when set\n> to true it will check if the publications that are present in the\n> subscription list are actually existing on the publisher or not, if\n> not remove them from the list. And also in the documentation we need\n> to clearly mention the consequence of this new option setting to true,\n> that is, the dropped publications if created again will need to be\n> added to the subscription list again.\nREFRESH PUBLICATION is not the right command to remove publications. There is a \ncommand for it: ALTER SUBSCRIPTION ... SET PUBLICATION.\n \nThe other alternative is to document that non-existent publication names can be \nin the subscription catalog and it is ignored while executing SUBSCRIPTION \ncommands. You could possibly propose a NOTICE/WARNING that informs the user \nthat the SUBSCRIPTION command contains non-existent publication.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Wed, 03 Mar 2021 00:29:12 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_Identify_missing_publications_from_publisher_while_create/?=\n =?UTF-8?Q?alter_subscription.?="
Even if you try to be strict on the subscriber side, the other side\n> (publisher) can impose you additional complexity.\n>\n> You are arguing that in the initial phase, the CREATE SUBSCRIPTION has a strict\n> mechanism. It is a fair point. However, impose this strictness for the other\n> SUBSCRIPTION commands should be carefully forethought. If we go that route, I\n> suggest to have the current behavior as an option. The reasons are: (i) it is\n> backward-compatible, (ii) it allows us some known flexibility (non-existent\n> publication), and (iii) it would probably allow us to fix a scenario created by\n> the strict mode. This strict mode can be implemented via new parameter\n> validate_publication (ENOCAFFEINE to propose a better name) that checks if the\n> publication is available when you executed the CREATE SUBSCRIPTION. Similar\n> parameter can be used in ALTER SUBSCRIPTION ... SET PUBLICATION and ALTER\n> SUBSCRIPTION ... REFRESH PUBLICATION.\n\nIIUC the validate_publication is kind of the feature enable/disable\nflag. With the default value being false, when set to true, it checks\nwhether the publications exists or not while CREATE/ALTER SUBSCRIPTION\nand throw error. If I'm right, then the validate_publication option\nlooks good to me. So, whoever wants to impose strict restrictions to\nCREATE/ALTER subscription can set it to true. All others will not see\nany new behaviour or such.\n\n> To not mandate any new behaviour, I would suggest to have a new option\n> for ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH\n> (remove_dropped_publications = false). The new option\n> remove_dropped_publications will have a default value false, when set\n> to true it will check if the publications that are present in the\n> subscription list are actually existing on the publisher or not, if\n> not remove them from the list. 
And also in the documentation we need\n> to clearly mention the consequence of this new option setting to true,\n> that is, the dropped publications if created again will need to be\n> added to the subscription list again.\n>\n> REFRESH PUBLICATION is not the right command to remove publications. There is a\n> command for it: ALTER SUBSCRIPTION ... SET PUBLICATION.\n>\n> The other alternative is to document that non-existent publication names can be\n> in the subscription catalog and it is ignored while executing SUBSCRIPTION\n> commands. You could possibly propose a NOTICE/WARNING that informs the user\n> that the SUBSCRIPTION command contains non-existent publication.\n\nI think, we can also have validate_publication option allowed for\nALTER SUBSCRIPTION SET PUBLICATION and REFRESH PUBLICATION commands\nwith the same behaviour i.e. error out when specified publications\ndon't exist in the publisher. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Mar 2021 13:04:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Thu, Mar 4, 2021 at 1:04 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Mar 3, 2021 at 8:59 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Wed, Feb 3, 2021, at 2:13 AM, Bharath Rupireddy wrote:\n> >\n> > On Mon, Jan 25, 2021 at 10:32 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > > If a publication which does not exist is specified during create subscription, then we should throw an error similar to step 2 behavior. Similarly if a publication which does not exist is specified during alter subscription, then we should throw an error similar to step 4 behavior. 
If publication is dropped after subscription is created, this should be removed when an alter subscription subname refresh publication is performed similar to step 4.\n> > > Thoughts?\n> >\n> > Postgres implemented a replication mechanism that is decoupled which means that\n> > both parties can perform \"uncoordinated\" actions. As you shown, the CREATE\n> > SUBSCRIPTION informing a non-existent publication is one of them. I think that\n> > being permissive in some situations can prevent you from painting yourself into\n> > a corner. Even if you try to be strict on the subscriber side, the other side\n> > (publisher) can impose you additional complexity.\n> >\n> > You are arguing that in the initial phase, the CREATE SUBSCRIPTION has a strict\n> > mechanism. It is a fair point. However, impose this strictness for the other\n> > SUBSCRIPTION commands should be carefully forethought. If we go that route, I\n> > suggest to have the current behavior as an option. The reasons are: (i) it is\n> > backward-compatible, (ii) it allows us some known flexibility (non-existent\n> > publication), and (iii) it would probably allow us to fix a scenario created by\n> > the strict mode. This strict mode can be implemented via new parameter\n> > validate_publication (ENOCAFFEINE to propose a better name) that checks if the\n> > publication is available when you executed the CREATE SUBSCRIPTION. Similar\n> > parameter can be used in ALTER SUBSCRIPTION ... SET PUBLICATION and ALTER\n> > SUBSCRIPTION ... REFRESH PUBLICATION.\n>\n> IIUC the validate_publication is kind of the feature enable/disable\n> flag. With the default value being false, when set to true, it checks\n> whether the publications exists or not while CREATE/ALTER SUBSCRIPTION\n> and throw error. If I'm right, then the validate_publication option\n> looks good to me. So, whoever wants to impose strict restrictions to\n> CREATE/ALTER subscription can set it to true. 
All others will not see\n> any new behaviour or such.\n>\n> > To not mandate any new behaviour, I would suggest to have a new option\n> > for ALTER SUBSCRIPTION ... REFRESH PUBLICATION WITH\n> > (remove_dropped_publications = false). The new option\n> > remove_dropped_publications will have a default value false, when set\n> > to true it will check if the publications that are present in the\n> > subscription list are actually existing on the publisher or not, if\n> > not remove them from the list. And also in the documentation we need\n> > to clearly mention the consequence of this new option setting to true,\n> > that is, the dropped publications if created again will need to be\n> > added to the subscription list again.\n> >\n> > REFRESH PUBLICATION is not the right command to remove publications. There is a\n> > command for it: ALTER SUBSCRIPTION ... SET PUBLICATION.\n> >\n> > The other alternative is to document that non-existent publication names can be\n> > in the subscription catalog and it is ignored while executing SUBSCRIPTION\n> > commands. You could possibly propose a NOTICE/WARNING that informs the user\n> > that the SUBSCRIPTION command contains non-existent publication.\n>\n> I think, we can also have validate_publication option allowed for\n> ALTER SUBSCRIPTION SET PUBLICATION and REFRESH PUBLICATION commands\n> with the same behaviour i.e. error out when specified publications\n> don't exist in the publisher. Thoughts?\n\nSorry for the delayed reply, I was working on a few other projects so\nI was not able to reply quickly.\nSince we are getting the opinion that if we make the check\npublications by default it might affect the existing users, I'm fine\nwith having an option validate_option to check if the publication\nexists in the publisher, that way there is no change for the existing\nuser. 
I have made a patch in similar lines, attached patch has the\nchanges for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Wed, 7 Apr 2021 22:37:34 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Wed, Apr 7, 2021 at 10:37 PM vignesh C <vignesh21@gmail.com> wrote:\n> > I think, we can also have validate_publication option allowed for\n> > ALTER SUBSCRIPTION SET PUBLICATION and REFRESH PUBLICATION commands\n> > with the same behaviour i.e. error out when specified publications\n> > don't exist in the publisher. Thoughts?\n>\n> Sorry for the delayed reply, I was working on a few other projects so\n> I was not able to reply quickly.\n> Since we are getting the opinion that if we make the check\n> publications by default it might affect the existing users, I'm fine\n> with having an option validate_option to check if the publication\n> exists in the publisher, that way there is no change for the existing\n> user. I have made a patch in similar lines, attached patch has the\n> changes for the same.\n> Thoughts?\n\nHere are some comments on v3 patch:\n\n1) Please mention what's the default value of the option\n+ <varlistentry>\n+ <term><literal>validate_publication</literal>\n(<type>boolean</type>)</term>\n+ <listitem>\n+ <para>\n+ Specifies whether the subscriber must verify if the specified\n+ publications are present in the publisher. By default, the subscriber\n+ will not check if the publications are present in the publisher.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\n2) How about\n+ Specifies whether the subscriber must verify the\npublications that are\n+ being subscribed to are present in the publisher. By default,\nthe subscriber\ninstead of\n+ Specifies whether the subscriber must verify if the specified\n+ publications are present in the publisher. 
By default, the subscriber\n\n3) I think we can make below common code into a single function with\nflags to differentiate processing for both, something like:\nStringInfoData *get_publist_str(List *publicaitons, bool use_quotes,\nbool is_fetch_table_list);\ncheck_publications:\n+ /* Convert the publications which does not exist into a string. */\n+ initStringInfo(&nonExistentPublications);\n+ foreach(lc, publicationsCopy)\n+ {\nand get_appended_publications_query:\n foreach(lc, publications)\n\nWith the new function that only prepares comma separated list of\npublications, you can get rid of get_appended_publications_query and\njust append the returned list to the query.\nfetch_table_list: get_publist_str(publications, true, true);\ncheck_publications: for select query preparation\nget_publist_str(publications, true, false); and for error string\npreparation get_publist_str(publications, false, false);\n\nAnd also let the new function get_publist_str allocate the string and\njust mention as a note in the function comment that the callers should\npfree the returned string.\n\n4) We could do following,\n ereport(ERROR,\n (errcode(ERRCODE_TOO_MANY_ARGUMENTS),\n errmsg_plural(\"publication %s does not exist in the publisher\",\n \"publications %s do not exist in the publisher\",\n list_length(publicationsCopy),\n nonExistentPublications.data)));\ninstead of\n+ ereport(ERROR,\n+ (errmsg(\"publication(s) %s does not exist in the publisher\",\n+ nonExistentPublications.data)));\n\n if (list_member(cte->ctecolnames,\nmakeString(cte->search_clause->search_seq_column)))\n\n5) I think it's better with\n+ * Check the specified publication(s) is(are) present in the publisher.\ninstead of\n+ * Verify that the specified publication(s) exists in the publisher.\n\n6) Instead of such a longer variable name \"nonExistentPublications\"\nhow about just \"pubnames\" and add a comment there saying \"going to\nerror out with the list of non-existent publications\" with that the\nvariable 
and that part of code's context is clear.\n\n7) You can just do\npublications = list_copy(publications);\ninstead of using another variable publicationsCopy\n publicationsCopy = list_copy(publications);\n\n8) If you have done StringInfoData *cmd = makeStringInfo();, then no\nneed of initStringInfo(cmd);\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Apr 2021 12:13:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Thu, Apr 8, 2021 at 12:13 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 10:37 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > I think, we can also have validate_publication option allowed for\n> > > ALTER SUBSCRIPTION SET PUBLICATION and REFRESH PUBLICATION commands\n> > > with the same behaviour i.e. error out when specified publications\n> > > don't exist in the publisher. Thoughts?\n> >\n> > Sorry for the delayed reply, I was working on a few other projects so\n> > I was not able to reply quickly.\n> > Since we are getting the opinion that if we make the check\n> > publications by default it might affect the existing users, I'm fine\n> > with having an option validate_option to check if the publication\n> > exists in the publisher, that way there is no change for the existing\n> > user. I have made a patch in similar lines, attached patch has the\n> > changes for the same.\n> > Thoughts?\n>\n> Here are some comments on v3 patch:\n>\n\nThanks for the comments\n\n> 1) Please mention what's the default value of the option\n> + <varlistentry>\n> + <term><literal>validate_publication</literal>\n> (<type>boolean</type>)</term>\n> + <listitem>\n> + <para>\n> + Specifies whether the subscriber must verify if the specified\n> + publications are present in the publisher. 
By default, the subscriber\n> + will not check if the publications are present in the publisher.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n>\n\nModified.\n\n> 2) How about\n> + Specifies whether the subscriber must verify the\n> publications that are\n> + being subscribed to are present in the publisher. By default,\n> the subscriber\n> instead of\n> + Specifies whether the subscriber must verify if the specified\n> + publications are present in the publisher. By default, the subscriber\n>\n\nSlightly reworded and modified.\n\n> 3) I think we can make below common code into a single function with\n> flags to differentiate processing for both, something like:\n> StringInfoData *get_publist_str(List *publicaitons, bool use_quotes,\n> bool is_fetch_table_list);\n> check_publications:\n> + /* Convert the publications which does not exist into a string. */\n> + initStringInfo(&nonExistentPublications);\n> + foreach(lc, publicationsCopy)\n> + {\n> and get_appended_publications_query:\n> foreach(lc, publications)\n>\n> With the new function that only prepares comma separated list of\n> publications, you can get rid of get_appended_publications_query and\n> just append the returned list to the query.\n> fetch_table_list: get_publist_str(publications, true, true);\n> check_publications: for select query preparation\n> get_publist_str(publications, true, false); and for error string\n> preparation get_publist_str(publications, false, false);\n>\n> And also let the new function get_publist_str allocate the string and\n> just mention as a note in the function comment that the callers should\n> pfree the returned string.\n>\n\nI felt the existing code looks better, if we have a common function,\nwe will have to lot of if conditions as both the functions is not same\nto same, they operate on different data types and do the preparation\nappropriately. 
Like fetch_table_list get nspname & relname and\nconverts it to RangeVar and adds to the list other function prepares a\ntext and deletes the entries present from the list. So I did not fix\nthis. Thoughts?\n\n> 4) We could do following,\n> ereport(ERROR,\n> (errcode(ERRCODE_TOO_MANY_ARGUMENTS),\n> errmsg_plural(\"publication %s does not exist in the publisher\",\n> \"publications %s do not exist in the publisher\",\n> list_length(publicationsCopy),\n> nonExistentPublications.data)));\n> instead of\n> + ereport(ERROR,\n> + (errmsg(\"publication(s) %s does not exist in the publisher\",\n> + nonExistentPublications.data)));\n>\n> if (list_member(cte->ctecolnames,\n> makeString(cte->search_clause->search_seq_column)))\n>\n\nModified.\n\n> 5) I think it's better with\n> + * Check the specified publication(s) is(are) present in the publisher.\n> instead of\n> + * Verify that the specified publication(s) exists in the publisher.\n>\n\nModified.\n\n> 6) Instead of such a longer variable name \"nonExistentPublications\"\n> how about just \"pubnames\" and add a comment there saying \"going to\n> error out with the list of non-existent publications\" with that the\n> variable and that part of code's context is clear.\n>\n\nModified.\n\n> 7) You can just do\n> publications = list_copy(publications);\n> instead of using another variable publicationsCopy\n> publicationsCopy = list_copy(publications);\n\npublications is an input list to this function, I did not want this\nfunction to change this list. I felt existing is fine. Thoughts?\n\n> 8) If you have done StringInfoData *cmd = makeStringInfo();, then no\n> need of initStringInfo(cmd);\n\nModified.\n\nAttached v4 patch has the fixes for the comments.\n\nRegards,\nVignesh", "msg_date": "Tue, 13 Apr 2021 18:22:12 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Tue, Apr 13, 2021 at 6:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> > 2) How about\n> > + Specifies whether the subscriber must verify the\n> > publications that are\n> > + being subscribed to are present in the publisher. By default,\n> > the subscriber\n> > instead of\n> > + Specifies whether the subscriber must verify if the specified\n> > + publications are present in the publisher. By default, the subscriber\n> >\n>\n> Slightly reworded and modified.\n\n+ <para>\n+ When true, the command will try to verify if the specified\n+ publications that are subscribed is present in the publisher.\n+ The default is <literal>false</literal>.\n+ </para>\n\n\"publications that are subscribed\" is not right as the subscriber is\nnot yet subscribed, it is \"trying to subscribing\", and it's not that\nthe command \"will try to verify\", it actually verifies. So you can\nmodify as follows:\n\n+ <para>\n+ When true, the command verifies if all the specified\npublications that are being subscribed to are present in the publisher\nand throws an error if any of the publication doesn't exist. The\ndefault is <literal>false</literal>.\n+ </para>\n\n> > 3) I think we can make below common code into a single function with\n> > flags to differentiate processing for both, something like:\n> > StringInfoData *get_publist_str(List *publicaitons, bool use_quotes,\n> > bool is_fetch_table_list);\n> > check_publications:\n> > + /* Convert the publications which does not exist into a string. 
*/\n> > + initStringInfo(&nonExistentPublications);\n> > + foreach(lc, publicationsCopy)\n> > + {\n> > and get_appended_publications_query:\n> > foreach(lc, publications)\n> >\n> > With the new function that only prepares comma separated list of\n> > publications, you can get rid of get_appended_publications_query and\n> > just append the returned list to the query.\n> > fetch_table_list: get_publist_str(publications, true, true);\n> > check_publications: for select query preparation\n> > get_publist_str(publications, true, false); and for error string\n> > preparation get_publist_str(publications, false, false);\n> >\n> > And also let the new function get_publist_str allocate the string and\n> > just mention as a note in the function comment that the callers should\n> > pfree the returned string.\n> >\n>\n> I felt the existing code looks better, if we have a common function,\n> we will have to lot of if conditions as both the functions is not same\n> to same, they operate on different data types and do the preparation\n> appropriately. Like fetch_table_list get nspname & relname and\n> converts it to RangeVar and adds to the list other function prepares a\n> text and deletes the entries present from the list. So I did not fix\n> this. 
Thoughts?\n\nI was actually thinking we could move the following duplicate code\ninto a function:\n foreach(lc, publicationsCopy)\n {\n char *pubname = strVal(lfirst(lc));\n\n if (first)\n first = false;\n else\n appendStringInfoString(&pubnames, \", \");\n appendStringInfoString(&pubnames, \"\\\"\");\n appendStringInfoString(&pubnames, pubname);\n appendStringInfoString(&pubnames, \"\\\"\");\n }\nand\n foreach(lc, publications)\n {\n char *pubname = strVal(lfirst(lc));\n\n if (first)\n first = false;\n else\n appendStringInfoString(cmd, \", \");\n\n appendStringInfoString(cmd, quote_literal_cstr(pubname));\n }\nthat function can be:\nstatic void\nget_publications_str(List *publications, StringInfo dest, bool quote_literal)\n{\n ListCell *lc;\n bool first = true;\n\n Assert(list_length(publications) > 0);\n\n foreach(lc, publications)\n {\n char *pubname = strVal(lfirst(lc));\n\n if (first)\n first = false;\n else\n appendStringInfoString(dest, \", \");\n\n if (quote_literal)\n appendStringInfoString(pubnames, quote_literal_cstr(pubname));\n else\n {\n appendStringInfoString(&dest, \"\\\"\");\n appendStringInfoString(&dest, pubname);\n appendStringInfoString(&dest, \"\\\"\");\n }\n }\n}\n\nThis way, we can get rid of get_appended_publications_query and use\nthe above function to return the appended list of publications. We\nneed to just pass quote_literal as true while preparing the publist\nstring for publication query and append it to the query outside the\nfunction. While preparing publist str for error, pass quote_literal as\nfalse. Thoughts?\n\n> > 7) You can just do\n> > publications = list_copy(publications);\n> > instead of using another variable publicationsCopy\n> > publicationsCopy = list_copy(publications);\n>\n> publications is an input list to this function, I did not want this\n> function to change this list. I felt existing is fine. 
Thoughts?\n\nOkay.\n\nTypo - it's not \"subcription\" +# Create subcription for a publication\nwhich does not exist.\n\nI think we can remove extra { } by moving the comment above if clause\nmuch like you did in AlterSubscription_refresh. And it's not \"exists\",\nit is \"exist\" change in both AlterSubscription_refresh and\nCreateSubscription.\n+ if (validate_publication)\n+ {\n+ /* Verify specified publications exists in the publisher. */\n+ check_publications(wrconn, publications);\n+ }\n+\n\nMove /*no streaming */ to above NULL, NULL line:\n+ NULL, NULL,\n NULL, NULL); /* no streaming */\n\nCan we have a new function for below duplicate code? Something like:\nvoid connect_and_check_pubs(Subscription *sub, List *publications);?\n+ if (validate_publication)\n+ {\n+ /* Load the library providing us libpq calls. */\n+ load_file(\"libpqwalreceiver\", false);\n+\n+ /* Try to connect to the publisher. */\n+ wrconn = walrcv_connect(sub->conninfo, true,\nsub->name, &err);\n+ if (!wrconn)\n+ ereport(ERROR,\n+ (errmsg(\"could not connect to the\npublisher: %s\", err)));\n+\n+ /* Verify specified publications exists in the\npublisher. */\n+ check_publications(wrconn, stmt->publication);\n+\n+ /* We are done with the remote side, close connection. */\n+ walrcv_disconnect(wrconn);\n+ }\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 13 Apr 2021 20:01:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Tue, Apr 13, 2021 at 8:01 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Apr 13, 2021 at 6:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > 2) How about\n> > > + Specifies whether the subscriber must verify the\n> > > publications that are\n> > > + being subscribed to are present in the publisher. 
By default,\n> > > the subscriber\n> > > instead of\n> > > + Specifies whether the subscriber must verify if the specified\n> > > + publications are present in the publisher. By default, the subscriber\n> > >\n> >\n> > Slightly reworded and modified.\n>\n> + <para>\n> + When true, the command will try to verify if the specified\n> + publications that are subscribed is present in the publisher.\n> + The default is <literal>false</literal>.\n> + </para>\n>\n> \"publications that are subscribed\" is not right as the subscriber is\n> not yet subscribed, it is \"trying to subscribing\", and it's not that\n> the command \"will try to verify\", it actually verifies. So you can\n> modify as follows:\n>\n> + <para>\n> + When true, the command verifies if all the specified\n> publications that are being subscribed to are present in the publisher\n> and throws an error if any of the publication doesn't exist. The\n> default is <literal>false</literal>.\n> + </para>\n>\n> > > 3) I think we can make below common code into a single function with\n> > > flags to differentiate processing for both, something like:\n> > > StringInfoData *get_publist_str(List *publicaitons, bool use_quotes,\n> > > bool is_fetch_table_list);\n> > > check_publications:\n> > > + /* Convert the publications which does not exist into a string. 
*/\n> > > + initStringInfo(&nonExistentPublications);\n> > > + foreach(lc, publicationsCopy)\n> > > + {\n> > > and get_appended_publications_query:\n> > > foreach(lc, publications)\n> > >\n> > > With the new function that only prepares comma separated list of\n> > > publications, you can get rid of get_appended_publications_query and\n> > > just append the returned list to the query.\n> > > fetch_table_list: get_publist_str(publications, true, true);\n> > > check_publications: for select query preparation\n> > > get_publist_str(publications, true, false); and for error string\n> > > preparation get_publist_str(publications, false, false);\n> > >\n> > > And also let the new function get_publist_str allocate the string and\n> > > just mention as a note in the function comment that the callers should\n> > > pfree the returned string.\n> > >\n> >\n> > I felt the existing code looks better, if we have a common function,\n> > we will have to lot of if conditions as both the functions is not same\n> > to same, they operate on different data types and do the preparation\n> > appropriately. Like fetch_table_list get nspname & relname and\n> > converts it to RangeVar and adds to the list other function prepares a\n> > text and deletes the entries present from the list. So I did not fix\n> > this. 
Thoughts?\n>\n> I was actually thinking we could move the following duplicate code\n> into a function:\n> foreach(lc, publicationsCopy)\n> {\n> char *pubname = strVal(lfirst(lc));\n>\n> if (first)\n> first = false;\n> else\n> appendStringInfoString(&pubnames, \", \");\n> appendStringInfoString(&pubnames, \"\\\"\");\n> appendStringInfoString(&pubnames, pubname);\n> appendStringInfoString(&pubnames, \"\\\"\");\n> }\n> and\n> foreach(lc, publications)\n> {\n> char *pubname = strVal(lfirst(lc));\n>\n> if (first)\n> first = false;\n> else\n> appendStringInfoString(cmd, \", \");\n>\n> appendStringInfoString(cmd, quote_literal_cstr(pubname));\n> }\n> that function can be:\n> static void\n> get_publications_str(List *publications, StringInfo dest, bool quote_literal)\n> {\n> ListCell *lc;\n> bool first = true;\n>\n> Assert(list_length(publications) > 0);\n>\n> foreach(lc, publications)\n> {\n> char *pubname = strVal(lfirst(lc));\n>\n> if (first)\n> first = false;\n> else\n> appendStringInfoString(dest, \", \");\n>\n> if (quote_literal)\n> appendStringInfoString(pubnames, quote_literal_cstr(pubname));\n> else\n> {\n> appendStringInfoString(&dest, \"\\\"\");\n> appendStringInfoString(&dest, pubname);\n> appendStringInfoString(&dest, \"\\\"\");\n> }\n> }\n> }\n>\n> This way, we can get rid of get_appended_publications_query and use\n> the above function to return the appended list of publications. We\n> need to just pass quote_literal as true while preparing the publist\n> string for publication query and append it to the query outside the\n> function. While preparing publist str for error, pass quote_literal as\n> false. Thoughts?\n>\n\nModified.\n\n> > > 7) You can just do\n> > > publications = list_copy(publications);\n> > > instead of using another variable publicationsCopy\n> > > publicationsCopy = list_copy(publications);\n> >\n> > publications is an input list to this function, I did not want this\n> > function to change this list. I felt existing is fine. 
Thoughts?\n>\n> Okay.\n>\n> Typo - it's not \"subcription\" +# Create subcription for a publication\n> which does not exist.\n>\n\nModified\n\n> I think we can remove extra { } by moving the comment above if clause\n> much like you did in AlterSubscription_refresh. And it's not \"exists\",\n> it is \"exist\" change in both AlterSubscription_refresh and\n> CreateSubscription.\n> + if (validate_publication)\n> + {\n> + /* Verify specified publications exists in the publisher. */\n> + check_publications(wrconn, publications);\n> + }\n> +\n\nModified.\n\n>\n> Move /*no streaming */ to above NULL, NULL line:\n> + NULL, NULL,\n> NULL, NULL); /* no streaming */\n>\n\nModified.\n\n> Can we have a new function for below duplicate code? Something like:\n> void connect_and_check_pubs(Subscription *sub, List *publications);?\n> + if (validate_publication)\n> + {\n> + /* Load the library providing us libpq calls. */\n> + load_file(\"libpqwalreceiver\", false);\n> +\n> + /* Try to connect to the publisher. */\n> + wrconn = walrcv_connect(sub->conninfo, true,\n> sub->name, &err);\n> + if (!wrconn)\n> + ereport(ERROR,\n> + (errmsg(\"could not connect to the\n> publisher: %s\", err)));\n> +\n> + /* Verify specified publications exists in the\n> publisher. */\n> + check_publications(wrconn, stmt->publication);\n> +\n> + /* We are done with the remote side, close connection. */\n> + walrcv_disconnect(wrconn);\n> + }\nModified.\n\nThanks for the comments, Attached patch has the fixes for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Sat, 1 May 2021 12:49:42 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Sat, May 1, 2021 at 12:49 PM vignesh C <vignesh21@gmail.com> wrote:\n> Thanks for the comments, Attached patch has the fixes for the same.\n> Thoughts?\n\nFew more comments on v5:\n\n1) Deletion of below empty new line is spurious:\n-\n /*\n * Common option parsing function for CREATE and ALTER SUBSCRIPTION commands.\n *\n\n2) I think we could just do as below to save indentation of the code\nfor validate_publication == true.\nstatic void\n+connect_and_check_pubs(Subscription *sub, List *publications,\n+ bool validate_publication)\n+{\n+ char *err;\n+\n+ if (validate_pulication == false )\n+ return;\n+\n+ /* Load the library providing us libpq calls. */\n+ load_file(\"libpqwalreceiver\", false);\n\n3) To be consistent, either we pass in validate_publication to both\nconnect_and_check_pubs and check_publications, return immediately from\nthem if it is false or do the checks outside. I suggest to pass in the\nbool parameter to check_publications like you did for\nconnect_and_check_pubs. Or remove validate_publication from\nconnect_and_check_pubs and do the check outside.\n+ if (validate_publication)\n+ check_publications(wrconn, publications);\n+ if (check_pub)\n+ check_publications(wrconn, sub->publications);\n\n4) Below line of code is above 80-char limit:\n+ else if (strcmp(defel->defname, \"validate_publication\") == 0\n&& validate_publication)\n\n5) Instead of adding a new file 021_validate_publications.pl for\ntests, spawning a new test database which would make the overall\nregression slower, can't we add with the existing database nodes in\n0001_rep_changes.pl? 
I would suggest adding the tests in there even if\nthe number of tests are many, I don't mind.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 1 May 2021 19:58:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Sat, May 1, 2021 at 7:58 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, May 1, 2021 at 12:49 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the comments, Attached patch has the fixes for the same.\n> > Thoughts?\n>\n> Few more comments on v5:\n>\n> 1) Deletion of below empty new line is spurious:\n> -\n> /*\n> * Common option parsing function for CREATE and ALTER SUBSCRIPTION commands.\n> *\n>\n\nModified.\n\n> 2) I think we could just do as below to save indentation of the code\n> for validate_publication == true.\n> static void\n> +connect_and_check_pubs(Subscription *sub, List *publications,\n> + bool validate_publication)\n> +{\n> + char *err;\n> +\n> + if (validate_pulication == false )\n> + return;\n> +\n> + /* Load the library providing us libpq calls. */\n> + load_file(\"libpqwalreceiver\", false);\n>\n\nModified.\n\n> 3) To be consistent, either we pass in validate_publication to both\n> connect_and_check_pubs and check_publications, return immediately from\n> them if it is false or do the checks outside. I suggest to pass in the\n> bool parameter to check_publications like you did for\n> connect_and_check_pubs. 
Or remove validate_publication from\n> connect_and_check_pubs and do the check outside.\n> + if (validate_publication)\n> + check_publications(wrconn, publications);\n> + if (check_pub)\n> + check_publications(wrconn, sub->publications);\n>\n\nModified.\n\n> 4) Below line of code is above 80-char limit:\n> + else if (strcmp(defel->defname, \"validate_publication\") == 0\n> && validate_publication)\n>\n\nModified\n\n> 5) Instead of adding a new file 021_validate_publications.pl for\n> tests, spawning a new test database which would make the overall\n> regression slower, can't we add with the existing database nodes in\n> 0001_rep_changes.pl? I would suggest adding the tests in there even if\n> the number of tests are many, I don't mind.\n\n001_rep_changes.pl has the changes mainly for checking the replicated\ndata. I did not find an appropriate file in the current tap tests, I\npreferred these tests to be in a separate file. Thoughts?\n\nThanks for the comments.\nThe Attached patch has the fixes for the same.\n\nRegards,\nVignesh", "msg_date": "Sun, 2 May 2021 22:04:27 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Sun, May 2, 2021 at 10:04 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for the comments.\n> The Attached patch has the fixes for the same.\n\nI was reviewing the documentation part, I think in the below paragraph\nwe should include validate_publication as well?\n\n <varlistentry>\n <term><literal>connect</literal> (<type>boolean</type>)</term>\n <listitem>\n <para>\n Specifies whether the <command>CREATE SUBSCRIPTION</command>\n should connect to the publisher at all. 
Setting this to\n <literal>false</literal> will change default values of\n <literal>enabled</literal>, <literal>create_slot</literal> and\n <literal>copy_data</literal> to <literal>false</literal>.\n </para>\n\nI will review/test the other parts of the patch and let you know.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 10:48:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Sun, May 2, 2021 at 10:04 PM vignesh C <vignesh21@gmail.com> wrote:\n> > 5) Instead of adding a new file 021_validate_publications.pl for\n> > tests, spawning a new test database which would make the overall\n> > regression slower, can't we add with the existing database nodes in\n> > 0001_rep_changes.pl? I would suggest adding the tests in there even if\n> > the number of tests are many, I don't mind.\n>\n> 001_rep_changes.pl has the changes mainly for checking the replicated\n> data. I did not find an appropriate file in the current tap tests, I\n> preferred these tests to be in a separate file. Thoughts?\n\nIf 001_rep_changes.pl is not the right place, how about adding them\ninto 007_ddl.pl? That file seems to be only for DDL changes, and since\nthe feature tests cases are for CREATE/ALTER SUBSCRIPTION, it's the\nright place. 
I strongly feel that we don't need a new file for these\ntests.\n\nComment on the tests:\n1) Instead of \"pub_doesnt_exist\" name, how about \"non_existent_pub\" or\njust pub_non_existent\" or some other?\n+ \"CREATE SUBSCRIPTION testsub2 CONNECTION '$publisher_connstr'\nPUBLICATION pub_doesnt_exist WITH (VALIDATE_PUBLICATION = TRUE)\"\nThe error message with this name looks a bit odd to me.\n+ /ERROR: publication \"pub_doesnt_exist\" does not exist in\nthe publisher/,\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 11:11:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Mon, May 3, 2021 at 10:48 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sun, May 2, 2021 at 10:04 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for the comments.\n> > The Attached patch has the fixes for the same.\n>\n> I was reviewing the documentation part, I think in the below paragraph\n> we should include validate_publication as well?\n>\n> <varlistentry>\n> <term><literal>connect</literal> (<type>boolean</type>)</term>\n> <listitem>\n> <para>\n> Specifies whether the <command>CREATE SUBSCRIPTION</command>\n> should connect to the publisher at all. Setting this to\n> <literal>false</literal> will change default values of\n> <literal>enabled</literal>, <literal>create_slot</literal> and\n> <literal>copy_data</literal> to <literal>false</literal>.\n> </para>\n>\n> I will review/test the other parts of the patch and let you know.\n\nI have reviewed it and it mostly looks good to me. 
I have some minor\nsuggestions though.\n\n1.\n+/*\n+ * Check the specified publication(s) is(are) present in the publisher.\n+ */\n\nvs\n\n+\n+/*\n+ * Connect to the publisher and check if the publications exist.\n+ */\n\nI think the formatting of the comments are not uniform. Some places\nwe are using \"publication(s) is(are)\" whereas other places are just\n\"publications\".\n\n2. Add a error case for connect=false and VALIDATE_PUBLICATION = true\n\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 May 2021 13:45:46 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Mon, May 3, 2021 at 1:46 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, May 3, 2021 at 10:48 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Sun, May 2, 2021 at 10:04 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Thanks for the comments.\n> > > The Attached patch has the fixes for the same.\n> >\n> > I was reviewing the documentation part, I think in the below paragraph\n> > we should include validate_publication as well?\n> >\n> > <varlistentry>\n> > <term><literal>connect</literal> (<type>boolean</type>)</term>\n> > <listitem>\n> > <para>\n> > Specifies whether the <command>CREATE SUBSCRIPTION</command>\n> > should connect to the publisher at all. Setting this to\n> > <literal>false</literal> will change default values of\n> > <literal>enabled</literal>, <literal>create_slot</literal> and\n> > <literal>copy_data</literal> to <literal>false</literal>.\n> > </para>\n> >\n\nModified.\n\n> > I will review/test the other parts of the patch and let you know.\n>\n> I have reviewed it and it mostly looks good to me. 
I have some minor\n> suggestions though.\n>\n> 1.\n> +/*\n> + * Check the specified publication(s) is(are) present in the publisher.\n> + */\n>\n> vs\n>\n> +\n> +/*\n> + * Connect to the publisher and check if the publications exist.\n> + */\n>\n> I think the formatting of the comments are not uniform. Some places\n> we are using \"publication(s) is(are)\" whereas other places are just\n> \"publications\".\n>\n\nModified.\n\n> 2. Add a error case for connect=false and VALIDATE_PUBLICATION = true\n\nAdded.\n\nThanks for the comments, attached v7 patch has the fixes for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Mon, 3 May 2021 19:58:02 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Mon, May 3, 2021 at 11:11 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sun, May 2, 2021 at 10:04 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > 5) Instead of adding a new file 021_validate_publications.pl for\n> > > tests, spawning a new test database which would make the overall\n> > > regression slower, can't we add with the existing database nodes in\n> > > 0001_rep_changes.pl? I would suggest adding the tests in there even if\n> > > the number of tests are many, I don't mind.\n> >\n> > 001_rep_changes.pl has the changes mainly for checking the replicated\n> > data. I did not find an appropriate file in the current tap tests, I\n> > preferred these tests to be in a separate file. Thoughts?\n>\n> If 001_rep_changes.pl is not the right place, how about adding them\n> into 007_ddl.pl? That file seems to be only for DDL changes, and since\n> the feature tests cases are for CREATE/ALTER SUBSCRIPTION, it's the\n> right place. 
I strongly feel that we don't need a new file for these\n> tests.\n>\n\nModified.\n\n> Comment on the tests:\n> 1) Instead of \"pub_doesnt_exist\" name, how about \"non_existent_pub\" or\n> just pub_non_existent\" or some other?\n> + \"CREATE SUBSCRIPTION testsub2 CONNECTION '$publisher_connstr'\n> PUBLICATION pub_doesnt_exist WITH (VALIDATE_PUBLICATION = TRUE)\"\n> The error message with this name looks a bit odd to me.\n> + /ERROR: publication \"pub_doesnt_exist\" does not exist in\n> the publisher/,\n\nModified.\n\nThanks for the comments, these comments are handle in the v7 patch\nposted in my earlier mail.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 3 May 2021 19:59:15 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Mon, May 3, 2021 at 7:59 PM vignesh C <vignesh21@gmail.com> wrote:\n> Thanks for the comments, these comments are handle in the v7 patch\n> posted in my earlier mail.\n\nThanks. Some comments on v7 patch:\n\n1) How about \"Add publication names from the list to a string.\"\n instead of\n * Append the list of publication to dest string.\n\n2) How about \"Connect to the publisher and see if the given\npublication(s) is(are) present.\"\ninstead of\n * Connect to the publisher and check if the publication(s) exist.\n\n3) Below comments are unnecessary as the functions/code following them\nwill tell what the code does.\n /* Verify specified publication(s) exist in the publisher. */\n /* We are done with the remote side, close connection. */\n\n /* Verify specified publication(s) exist in the publisher. */\n PG_TRY();\n {\n check_publications(wrconn, publications, true);\n }\n PG_FINALLY();\n {\n /* We are done with the remote side, close connection. 
*/\n walrcv_disconnect(wrconn);\n }\n\n4) And also the comment below that's there before check_publications\nis unnecessary, as the function name and description would say it all.\n/* Verify specified publication(s) exist in the publisher. */\n\n5) A typo - it is \"do not exist\"\n# Multiple publications does not exist.\n\n6) Should we use \"m\" specified in all the test cases something like we\ndo for $stderr =~ m/threads are not supported on this platform/ or\nm/replication slot \"test_slot\" was not created in this database/?\n$stderr =~\n /ERROR: publication \"non_existent_pub\" does not exist in the\npublisher/,\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 14:37:11 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Tue, May 4, 2021 at 2:37 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, May 3, 2021 at 7:59 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the comments, these comments are handle in the v7 patch\n> > posted in my earlier mail.\n>\n> Thanks. Some comments on v7 patch:\n>\n> 1) How about \"Add publication names from the list to a string.\"\n> instead of\n> * Append the list of publication to dest string.\n>\n\nModified.\n\n> 2) How about \"Connect to the publisher and see if the given\n> publication(s) is(are) present.\"\n> instead of\n> * Connect to the publisher and check if the publication(s) exist.\n>\n\nModified.\n\n> 3) Below comments are unnecessary as the functions/code following them\n> will tell what the code does.\n> /* Verify specified publication(s) exist in the publisher. */\n> /* We are done with the remote side, close connection. */\n>\n> /* Verify specified publication(s) exist in the publisher. 
*/\n> PG_TRY();\n> {\n> check_publications(wrconn, publications, true);\n> }\n> PG_FINALLY();\n> {\n> /* We are done with the remote side, close connection. */\n> walrcv_disconnect(wrconn);\n> }\n>\n\nModified.\n\n> 4) And also the comment below that's there before check_publications\n> is unnecessary, as the function name and description would say it all.\n> /* Verify specified publication(s) exist in the publisher. */\n>\n\nModified.\n\n> 5) A typo - it is \"do not exist\"\n> # Multiple publications does not exist.\n>\n\nModified.\n\n> 6) Should we use \"m\" specified in all the test cases something like we\n> do for $stderr =~ m/threads are not supported on this platform/ or\n> m/replication slot \"test_slot\" was not created in this database/?\n> $stderr =~\n> /ERROR: publication \"non_existent_pub\" does not exist in the\n> publisher/,\n\nModified.\n\nThanks for the comments, Attached patch has the fixes for the same.\n\nRegards,\nVignesh", "msg_date": "Tue, 4 May 2021 18:50:15 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Tue, May 4, 2021 at 6:50 PM vignesh C <vignesh21@gmail.com> wrote:\n> Thanks for the comments, Attached patch has the fixes for the same.\n\nThanks! I took a final look over the v8 patch, it looks good to me and\nregression tests were passed with it. I have no further comments at\nthis moment. I will make it \"ready for committer\" if others have no\ncomments.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 May 2021 19:53:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "\nOn Tue, 04 May 2021 at 21:20, vignesh C <vignesh21@gmail.com> wrote:\n> On Tue, May 4, 2021 at 2:37 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Mon, May 3, 2021 at 7:59 PM vignesh C <vignesh21@gmail.com> wrote:\n>> > Thanks for the comments, these comments are handle in the v7 patch\n>> > posted in my earlier mail.\n>>\n>> Thanks. Some comments on v7 patch:\n>>\n>> 1) How about \"Add publication names from the list to a string.\"\n>> instead of\n>> * Append the list of publication to dest string.\n>>\n>\n> Modified.\n>\n>> 2) How about \"Connect to the publisher and see if the given\n>> publication(s) is(are) present.\"\n>> instead of\n>> * Connect to the publisher and check if the publication(s) exist.\n>>\n>\n> Modified.\n>\n>> 3) Below comments are unnecessary as the functions/code following them\n>> will tell what the code does.\n>> /* Verify specified publication(s) exist in the publisher. */\n>> /* We are done with the remote side, close connection. */\n>>\n>> /* Verify specified publication(s) exist in the publisher. */\n>> PG_TRY();\n>> {\n>> check_publications(wrconn, publications, true);\n>> }\n>> PG_FINALLY();\n>> {\n>> /* We are done with the remote side, close connection. */\n>> walrcv_disconnect(wrconn);\n>> }\n>>\n>\n> Modified.\n>\n>> 4) And also the comment below that's there before check_publications\n>> is unnecessary, as the function name and description would say it all.\n>> /* Verify specified publication(s) exist in the publisher. 
*/\n>>\n>\n> Modified.\n>\n>> 5) A typo - it is \"do not exist\"\n>> # Multiple publications does not exist.\n>>\n>\n> Modified.\n>\n>> 6) Should we use \"m\" specified in all the test cases something like we\n>> do for $stderr =~ m/threads are not supported on this platform/ or\n>> m/replication slot \"test_slot\" was not created in this database/?\n>> $stderr =~\n>> /ERROR: publication \"non_existent_pub\" does not exist in the\n>> publisher/,\n>\n> Modified.\n>\n> Thanks for the comments, Attached patch has the fixes for the same.\n\nThanks for updating the patch. Some comments on v8 patch.\n\n1) How about use appendStringInfoChar() to replace the first and last one,\nsince it more faster.\n+ appendStringInfoString(dest, \"\\\"\");\n+ appendStringInfoString(dest, pubname);\n+ appendStringInfoString(dest, \"\\\"\");\n\n2) How about use if (!validate_publication) to keep the code style consistent?\n+ if (validate_publication == false)\n+ return;\n\n3) Should we free the memory when finish the check_publications()?\n+ publicationsCopy = list_copy(publications);\n\n4) It is better wrap the word \"streaming\" with quote. Also, should we add\n'no \"validate_publication\"' comment for validate_publication parameters?\n NULL, NULL, /* no \"binary\" */\n- NULL, NULL); /* no streaming */\n+ NULL, NULL, /* no streaming */\n+ NULL, NULL);\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 06 May 2021 11:37:59 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Thu, May 6, 2021 at 9:08 AM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Tue, 04 May 2021 at 21:20, vignesh C <vignesh21@gmail.com> wrote:\n> > On Tue, May 4, 2021 at 2:37 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> On Mon, May 3, 2021 at 7:59 PM vignesh C <vignesh21@gmail.com> wrote:\n> >> > Thanks for the comments, these comments are handle in the v7 patch\n> >> > posted in my earlier mail.\n> >>\n> >> Thanks. Some comments on v7 patch:\n> >>\n> >> 1) How about \"Add publication names from the list to a string.\"\n> >> instead of\n> >> * Append the list of publication to dest string.\n> >>\n> >\n> > Modified.\n> >\n> >> 2) How about \"Connect to the publisher and see if the given\n> >> publication(s) is(are) present.\"\n> >> instead of\n> >> * Connect to the publisher and check if the publication(s) exist.\n> >>\n> >\n> > Modified.\n> >\n> >> 3) Below comments are unnecessary as the functions/code following them\n> >> will tell what the code does.\n> >> /* Verify specified publication(s) exist in the publisher. */\n> >> /* We are done with the remote side, close connection. */\n> >>\n> >> /* Verify specified publication(s) exist in the publisher. */\n> >> PG_TRY();\n> >> {\n> >> check_publications(wrconn, publications, true);\n> >> }\n> >> PG_FINALLY();\n> >> {\n> >> /* We are done with the remote side, close connection. */\n> >> walrcv_disconnect(wrconn);\n> >> }\n> >>\n> >\n> > Modified.\n> >\n> >> 4) And also the comment below that's there before check_publications\n> >> is unnecessary, as the function name and description would say it all.\n> >> /* Verify specified publication(s) exist in the publisher. 
*/\n> >>\n> >\n> > Modified.\n> >\n> >> 5) A typo - it is \"do not exist\"\n> >> # Multiple publications does not exist.\n> >>\n> >\n> > Modified.\n> >\n> >> 6) Should we use \"m\" specified in all the test cases something like we\n> >> do for $stderr =~ m/threads are not supported on this platform/ or\n> >> m/replication slot \"test_slot\" was not created in this database/?\n> >> $stderr =~\n> >> /ERROR: publication \"non_existent_pub\" does not exist in the\n> >> publisher/,\n> >\n> > Modified.\n> >\n> > Thanks for the comments, Attached patch has the fixes for the same.\n>\n> Thanks for updating the patch. Some comments on v8 patch.\n>\n> 1) How about use appendStringInfoChar() to replace the first and last one,\n> since it more faster.\n> + appendStringInfoString(dest, \"\\\"\");\n> + appendStringInfoString(dest, pubname);\n> + appendStringInfoString(dest, \"\\\"\");\n\nModified.\n\n> 2) How about use if (!validate_publication) to keep the code style consistent?\n> + if (validate_publication == false)\n> + return;\n\nModified.\n\n> 3) Should we free the memory when finish the check_publications()?\n> + publicationsCopy = list_copy(publications);\n\nI felt this list entries will be deleted in the success case, in error\ncase I felt no need to delete it as we will be exiting. Thoughts?\n\n> 4) It is better wrap the word \"streaming\" with quote. Also, should we add\n> 'no \"validate_publication\"' comment for validate_publication parameters?\n> NULL, NULL, /* no \"binary\" */\n> - NULL, NULL); /* no streaming */\n> + NULL, NULL, /* no streaming */\n> + NULL, NULL);\n\nModified.\n\nThanks for the comments, attached v9 patch has the fixes for the same.\n\nRegards,\nVignesh", "msg_date": "Thu, 6 May 2021 19:22:31 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "\nOn Thu, 06 May 2021 at 21:52, vignesh C <vignesh21@gmail.com> wrote:\n> On Thu, May 6, 2021 at 9:08 AM Japin Li <japinli@hotmail.com> wrote:\n>> 3) Should we free the memory when finish the check_publications()?\n>> + publicationsCopy = list_copy(publications);\n>\n> I felt this list entries will be deleted in the success case, in error\n> case I felt no need to delete it as we will be exiting. Thoughts?\n>\n\nSorry for the noise! You are right. The v9 patch set looks good to me.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 07 May 2021 14:02:53 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Thu, May 6, 2021 at 7:22 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n\nSome comments:\n1.\nI don't see any change in pg_dump.c, don't we need to dump this option?\n\n2.\n+ /* Try to connect to the publisher. */\n+ wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n+ if (!wrconn)\n+ ereport(ERROR,\n+ (errmsg(\"could not connect to the publisher: %s\", err)));\n\nInstead of using global wrconn, I think you should use a local variable?\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 May 2021 11:50:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Fri, May 7, 2021 at 11:50 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 7:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> Some comments:\n> 1.\n> I don't see any change in pg_dump.c, don't we need to dump this option?\n\nI don't think it is necessary there as the default value of the\nvalidate_publication is false, so even if the pg_dump has no mention\nof the option, then it is assumed to be false while restoring. Note\nthat the validate_publication option is transient (like with other\noptions such as create_slot, copy_data) which means it can't be stored\nin pg_subscritpion catalogue. Therefore, user specified value can't be\nfetched once the CREATE/ALTER subscription command is finished. If we\nwere to dump the option, we should be storing it in the catalogue,\nwhich I don't think is necessary. Thoughts?\n\n> 2.\n> + /* Try to connect to the publisher. */\n> + wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n> + if (!wrconn)\n> + ereport(ERROR,\n> + (errmsg(\"could not connect to the publisher: %s\", err)));\n>\n> Instead of using global wrconn, I think you should use a local variable?\n\nYeah, we should be using local wrconn, otherwise there can be\nconsequences, see the patches at [1]. Thanks for pointing out this.\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPuSwWmmeK%2Bfe6E2duep8588Jk82XXH73nE4dUxwDNkNUg%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 May 2021 17:38:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Fri, May 7, 2021 at 5:38 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, May 7, 2021 at 11:50 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, May 6, 2021 at 7:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> >\n> > Some comments:\n> > 1.\n> > I don't see any change in pg_dump.c, don't we need to dump this option?\n>\n> I don't think it is necessary there as the default value of the\n> validate_publication is false, so even if the pg_dump has no mention\n> of the option, then it is assumed to be false while restoring. Note\n> that the validate_publication option is transient (like with other\n> options such as create_slot, copy_data) which means it can't be stored\n> in pg_subscritpion catalogue. Therefore, user specified value can't be\n> fetched once the CREATE/ALTER subscription command is finished. If we\n> were to dump the option, we should be storing it in the catalogue,\n> which I don't think is necessary. Thoughts?\n\nIf we are not storing it in the catalog then it does not need to be dumped.\n\n> > 2.\n> > + /* Try to connect to the publisher. */\n> > + wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n> > + if (!wrconn)\n> > + ereport(ERROR,\n> > + (errmsg(\"could not connect to the publisher: %s\", err)));\n> >\n> > Instead of using global wrconn, I think you should use a local variable?\n>\n> Yeah, we should be using local wrconn, otherwise there can be\n> consequences, see the patches at [1]. Thanks for pointing out this.\n\nRight.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 May 2021 17:44:10 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Fri, May 7, 2021 at 5:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, May 7, 2021 at 5:38 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, May 7, 2021 at 11:50 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Thu, May 6, 2021 at 7:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > >\n> > > Some comments:\n> > > 1.\n> > > I don't see any change in pg_dump.c, don't we need to dump this option?\n> >\n> > I don't think it is necessary there as the default value of the\n> > validate_publication is false, so even if the pg_dump has no mention\n> > of the option, then it is assumed to be false while restoring. Note\n> > that the validate_publication option is transient (like with other\n> > options such as create_slot, copy_data) which means it can't be stored\n> > in pg_subscritpion catalogue. Therefore, user specified value can't be\n> > fetched once the CREATE/ALTER subscription command is finished. If we\n> > were to dump the option, we should be storing it in the catalogue,\n> > which I don't think is necessary. Thoughts?\n>\n> If we are not storing it in the catalog then it does not need to be dumped.\n\nI intentionally did not store this value, I felt we need not persist\nthis option's value. This value will be false while dumping similar to\nother non stored parameters.\n\n> > > 2.\n> > > + /* Try to connect to the publisher. */\n> > > + wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n> > > + if (!wrconn)\n> > > + ereport(ERROR,\n> > > + (errmsg(\"could not connect to the publisher: %s\", err)));\n> > >\n> > > Instead of using global wrconn, I think you should use a local variable?\n> >\n> > Yeah, we should be using local wrconn, otherwise there can be\n> > consequences, see the patches at [1]. 
Thanks for pointing out this.\n\nModified.\n\nThanks for the comments, the attached patch has the fix for the same.\n\nRegards,\nVignesh", "msg_date": "Fri, 7 May 2021 18:44:27 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Fri, May 7, 2021 at 6:44 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, May 7, 2021 at 5:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, May 7, 2021 at 5:38 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Fri, May 7, 2021 at 11:50 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Thu, May 6, 2021 at 7:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > >\n> > > > Some comments:\n> > > > 1.\n> > > > I don't see any change in pg_dump.c, don't we need to dump this option?\n> > >\n> > > I don't think it is necessary there as the default value of the\n> > > validate_publication is false, so even if the pg_dump has no mention\n> > > of the option, then it is assumed to be false while restoring. Note\n> > > that the validate_publication option is transient (like with other\n> > > options such as create_slot, copy_data) which means it can't be stored\n> > > in pg_subscritpion catalogue. Therefore, user specified value can't be\n> > > fetched once the CREATE/ALTER subscription command is finished. If we\n> > > were to dump the option, we should be storing it in the catalogue,\n> > > which I don't think is necessary. Thoughts?\n> >\n> > If we are not storing it in the catalog then it does not need to be dumped.\n>\n> I intentionally did not store this value, I felt we need not persist\n> this option's value. This value will be false while dumping similar to\n> other non stored parameters.\n>\n> > > > 2.\n> > > > + /* Try to connect to the publisher. 
*/\n> > > > + wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n> > > > + if (!wrconn)\n> > > > + ereport(ERROR,\n> > > > + (errmsg(\"could not connect to the publisher: %s\", err)));\n> > > >\n> > > > Instead of using global wrconn, I think you should use a local variable?\n> > >\n> > > Yeah, we should be using local wrconn, otherwise there can be\n> > > consequences, see the patches at [1]. Thanks for pointing out this.\n>\n> Modified.\n>\n> Thanks for the comments, the attached patch has the fix for the same.\n\nThe patch was not applying on the head, attached patch which is rebased on HEAD.\n\nRegards,\nVignesh", "msg_date": "Sun, 6 Jun 2021 11:55:18 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Sun, Jun 6, 2021 at 11:55 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, May 7, 2021 at 6:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for the comments, the attached patch has the fix for the same.\n>\n> The patch was not applying on the head, attached patch which is rebased on HEAD.\n\nThe patch was not applying on the head, attached patch which is rebased on HEAD.\n\nRegards,\nVignesh", "msg_date": "Wed, 30 Jun 2021 20:23:12 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Wed, Jun 30, 2021 at 8:23 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sun, Jun 6, 2021 at 11:55 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Fri, May 7, 2021 at 6:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Thanks for the comments, the attached patch has the fix for the same.\n> >\n> > The patch was not applying on the head, attached patch which is rebased on HEAD.\n>\n> The patch was not applying on the head, attached patch which is rebased on HEAD.\n\nThe patch was not applying on the head, attached patch which is rebased on HEAD.\n\nRegards,\nVignesh", "msg_date": "Tue, 6 Jul 2021 20:09:31 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Tue, Jul 6, 2021 at 8:09 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Jun 30, 2021 at 8:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sun, Jun 6, 2021 at 11:55 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Fri, May 7, 2021 at 6:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > Thanks for the comments, the attached patch has the fix for the same.\n> > >\n> > > The patch was not applying on the head, attached patch which is rebased on HEAD.\n> >\n> > The patch was not applying on the head, attached patch which is rebased on HEAD.\n>\n> The patch was not applying on the head, attached patch which is rebased on HEAD.\n\nThe patch was not applying on the head, attached patch which is rebased on HEAD.\n\nRegards,\nVignesh", "msg_date": "Thu, 15 Jul 2021 17:57:26 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Thu, Jul 15, 2021 at 5:57 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Jul 6, 2021 at 8:09 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Wed, Jun 30, 2021 at 8:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Sun, Jun 6, 2021 at 11:55 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Fri, May 7, 2021 at 6:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > > > Thanks for the comments, the attached patch has the fix for the same.\n> > > >\n\nThe patch was not applying on the head, attached patch which is rebased on HEAD.\n\nRegards,\nVignesh", "msg_date": "Thu, 26 Aug 2021 19:49:49 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Thu, Aug 26, 2021 at 07:49:49PM +0530, vignesh C wrote:\n> On Thu, Jul 15, 2021 at 5:57 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Jul 6, 2021 at 8:09 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Wed, Jun 30, 2021 at 8:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Sun, Jun 6, 2021 at 11:55 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > > > On Fri, May 7, 2021 at 6:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > >\n> > > > > > Thanks for the comments, the attached patch has the fix for the same.\n> > > > >\n> \n> The patch was not applying on the head, attached patch which is rebased on HEAD.\n> \n\nHi,\n\nI'm testing this patch now. 
It doesn't apply cleanly but is the\ndocumentation part, so while a rebase would be good it doesn't avoid me\nto test.\n\nA couple of questions:\n\n+check_publications(WalReceiverConn *wrconn, List *publications,\n+ bool validate_publication)\n[...]\n+connect_and_check_pubs(Subscription *sub, List *publications,\n+ bool validate_publication)\n\nI wonder why validate_publication is passed as an argument just to\nreturn if it's false, why not just test it before calling those\nfunctions? Maybe is just a matter of style.\n\n+get_publications_str(List *publications, StringInfo dest, bool quote_literal)\n\nwhat's the purpose of the quote_literal argument?\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Mon, 27 Sep 2021 21:19:44 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Tue, Sep 28, 2021 at 7:49 AM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> On Thu, Aug 26, 2021 at 07:49:49PM +0530, vignesh C wrote:\n> > On Thu, Jul 15, 2021 at 5:57 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Tue, Jul 6, 2021 at 8:09 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jun 30, 2021 at 8:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > > > On Sun, Jun 6, 2021 at 11:55 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > >\n> > > > > > On Fri, May 7, 2021 at 6:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > > >\n> > > > > > > Thanks for the comments, the attached patch has the fix for the same.\n> > > > > >\n> >\n> > The patch was not applying on the head, attached patch which is rebased on HEAD.\n> >\n>\n> Hi,\n>\n> I'm testing this patch now. 
It doesn't apply cleanly but is the\n> documentation part, so while a rebase would be good it doesn't avoid me\n> to test.\n\nI have rebased the patch on top of Head.\n\n> A couple of questions:\n>\n> +check_publications(WalReceiverConn *wrconn, List *publications,\n> + bool validate_publication)\n> [...]\n> +connect_and_check_pubs(Subscription *sub, List *publications,\n> + bool validate_publication)\n>\n> I wonder why validate_publication is passed as an argument just to\n> return if it's false, why not just test it before calling those\n> functions? Maybe is just a matter of style.\n\nI felt it will be better to have the check inside function so that it\nneed not be checked at the multiple caller function.\n\n> +get_publications_str(List *publications, StringInfo dest, bool quote_literal)\n>\n> what's the purpose of the quote_literal argument?\n\nIn case of error the publication that is not present will be displayed\nwithin double quotes like below:\nERROR: publications \"pub3\", \"pub4\" do not exist in the publisher\nWhereas in case of the query we use single quotes, so quote_literal is\nused to differentiate and handle accordingly.\n\nAttached v12 version is rebased on top of Head.\n\nRegards,\nVignesh", "msg_date": "Tue, 9 Nov 2021 21:27:03 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Tue, Nov 9, 2021 at 9:27 PM vignesh C <vignesh21@gmail.com> wrote:\n> Attached v12 version is rebased on top of Head.\n\nThanks for the patch. Here are some comments on v12:\n\n1) I think ERRCODE_TOO_MANY_ARGUMENTS isn't the right error code, the\nERRCODE_UNDEFINED_OBJECT is more meaningful. 
Please change.\n+ ereport(ERROR,\n+ (errcode(ERRCODE_TOO_MANY_ARGUMENTS),\n+ errmsg_plural(\"publication %s does not exist in the publisher\",\n+ \"publications %s do not exist in the publisher\",\n\nThe existing code using\n ereport(ERROR,\n (errcode(ERRCODE_UNDEFINED_OBJECT),\n errmsg(\"subscription \\\"%s\\\" does not exist\", subname)));\n\n2) Typo: It is \"One of the specified publications exists.\"\n+# One of the specified publication exist.\n\n3) I think we can remove the test case \"+# Specified publication does\nnot exist.\" because the \"+# One of the specified publication exist.\"\ncovers the code.\n\n4) Do we need the below test case? Even with refresh = false, it does\ncall connect_and_check_pubs() right? Please remove it.\n+# Specified publication does not exist with refresh = false.\n+($ret, $stdout, $stderr) = $node_subscriber->psql('postgres',\n+ \"ALTER SUBSCRIPTION mysub1 SET PUBLICATION non_existent_pub\nWITH (REFRESH = FALSE, VALIDATE_PUBLICATION = TRUE)\"\n+);\n+ok( $stderr =~\n+ m/ERROR: publication \"non_existent_pub\" does not exist in\nthe publisher/,\n+ \"Alter subscription for non existent publication fails\");\n+\n\n5) Change the test case names to different ones instead of the same.\nHave something like:\n\"Create subscription fails with single non-existent publication\");\n\"Create subscription fails with multiple non-existent publications\");\n\"Create subscription fails with mutually exclusive options\");\n\"Alter subscription add publication fails with non-existent publication\");\n\"Alter subscription set publication fails with non-existent publication\");\n\"Alter subscription set publication fails with connection to a\nnon-existent database\");\n\nUnnecessary test cases would add up to the \"make check-world\" times,\nplease remove them.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 10 Nov 2021 11:16:38 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, 
"msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Wed, Nov 10, 2021 at 11:16 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Nov 9, 2021 at 9:27 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Attached v12 version is rebased on top of Head.\n>\n> Thanks for the patch. Here are some comments on v12:\n>\n> 1) I think ERRCODE_TOO_MANY_ARGUMENTS isn't the right error code, the\n> ERRCODE_UNDEFINED_OBJECT is more meaningful. Please change.\n> + ereport(ERROR,\n> + (errcode(ERRCODE_TOO_MANY_ARGUMENTS),\n> + errmsg_plural(\"publication %s does not exist in the publisher\",\n> + \"publications %s do not exist in the publisher\",\n>\n> The existing code using\n> ereport(ERROR,\n> (errcode(ERRCODE_UNDEFINED_OBJECT),\n> errmsg(\"subscription \\\"%s\\\" does not exist\", subname)));\n\nModified\n\n> 2) Typo: It is \"One of the specified publications exists.\"\n> +# One of the specified publication exist.\n\nModified\n\n> 3) I think we can remove the test case \"+# Specified publication does\n> not exist.\" because the \"+# One of the specified publication exist.\"\n> covers the code.\n\nModified\n\n> 4) Do we need the below test case? Even with refresh = false, it does\n> call connect_and_check_pubs() right? 
Please remove it.\n> +# Specified publication does not exist with refresh = false.\n> +($ret, $stdout, $stderr) = $node_subscriber->psql('postgres',\n> + \"ALTER SUBSCRIPTION mysub1 SET PUBLICATION non_existent_pub\n> WITH (REFRESH = FALSE, VALIDATE_PUBLICATION = TRUE)\"\n> +);\n> +ok( $stderr =~\n> + m/ERROR: publication \"non_existent_pub\" does not exist in\n> the publisher/,\n> + \"Alter subscription for non existent publication fails\");\n> +\n\nModified\n\n> 5) Change the test case names to different ones instead of the same.\n> Have something like:\n> \"Create subscription fails with single non-existent publication\");\n> \"Create subscription fails with multiple non-existent publications\");\n> \"Create subscription fails with mutually exclusive options\");\n> \"Alter subscription add publication fails with non-existent publication\");\n> \"Alter subscription set publication fails with non-existent publication\");\n> \"Alter subscription set publication fails with connection to a\n> non-existent database\");\n\nModified\n\nThanks for the comments, the attached v13 patch has the fixes for the same.\n\nRegards,\nVignesh", "msg_date": "Sat, 13 Nov 2021 12:49:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Sat, Nov 13, 2021 at 12:50 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for the comments, the attached v13 patch has the fixes for the same.\n\nThanks for the updated v13 patch. I have no further comments, it looks\ngood to me.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 13 Nov 2021 18:27:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "Just wondering if we should also be detecting the incorrect conninfo\nset with ALTER SUBSCRIPTION command as well. See below:\n\n-- try creating a subscription with incorrect conninfo. the command fails.\npostgres=# create subscription sub1 connection 'host=localhost\nport=5490 dbname=postgres' publication pub1;\nERROR: could not connect to the publisher: connection to server at\n\"localhost\" (::1), port 5490 failed: Connection refused\n Is the server running on that host and accepting TCP/IP connections?\nconnection to server at \"localhost\" (127.0.0.1), port 5490 failed:\nConnection refused\n Is the server running on that host and accepting TCP/IP connections?\npostgres=#\npostgres=#\n\n-- this time the conninfo is correct and the command succeeded.\npostgres=# create subscription sub1 connection 'host=localhost\nport=5432 dbname=postgres' publication pub1;\nNOTICE: created replication slot \"sub1\" on publisher\nCREATE SUBSCRIPTION\npostgres=#\npostgres=#\n\n-- reset the connninfo in the subscription to some wrong value. the\ncommand succeeds.\npostgres=# alter subscription sub1 connection 'host=localhost\nport=5490 dbname=postgres';\nALTER SUBSCRIPTION\npostgres=#\n\npostgres=# drop subscription sub1;\nERROR: could not connect to publisher when attempting to drop\nreplication slot \"sub1\": connection to server at \"localhost\" (::1),\nport 5490 failed: Connection refused\n Is the server running on that host and accepting TCP/IP connections?\nconnection to server at \"localhost\" (127.0.0.1), port 5490 failed:\nConnection refused\n Is the server running on that host and accepting TCP/IP connections?\nHINT: Use ALTER SUBSCRIPTION ... SET (slot_name = NONE) to\ndisassociate the subscription from the slot.\n\n==\n\nWhen creating a subscription we do connect to the publisher node hence\nthe incorrect connection info gets detected. 
But that's not the case\nwith alter subscription.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Sat, Nov 13, 2021 at 6:27 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, Nov 13, 2021 at 12:50 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for the comments, the attached v13 patch has the fixes for the same.\n>\n> Thanks for the updated v13 patch. I have no further comments, it looks\n> good to me.\n>\n> Regards,\n> Bharath Rupireddy.\n>\n>\n\n\n", "msg_date": "Wed, 9 Feb 2022 20:36:31 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Wed, Feb 9, 2022, at 12:06 PM, Ashutosh Sharma wrote:\n> Just wondering if we should also be detecting the incorrect conninfo\n> set with ALTER SUBSCRIPTION command as well. See below:\n> \n> -- try creating a subscription with incorrect conninfo. the command fails.\n> postgres=# create subscription sub1 connection 'host=localhost\n> port=5490 dbname=postgres' publication pub1;\n> ERROR: could not connect to the publisher: connection to server at\n> \"localhost\" (::1), port 5490 failed: Connection refused\n> Is the server running on that host and accepting TCP/IP connections?\n> connection to server at \"localhost\" (127.0.0.1), port 5490 failed:\n> Connection refused\n> Is the server running on that host and accepting TCP/IP connections?\nThat's because by default 'connect' parameter is true.\n\nThe important routine for all SUBSCRIPTION commands that handle connection\nstring is to validate the connection string e.g. check if all parameters are\ncorrect. See walrcv_check_conninfo that calls PQconninfoParse.\n\nThe connection string is syntactically correct. Hence, no error. It could be\nthe case that the service is temporarily down. 
It is a useful and common\nscenario that I wouldn't want to be forbid.\n\n> -- reset the connninfo in the\n> subscription to some wrong value. the\n> command succeeds.\n> postgres=# alter subscription sub1 connection 'host=localhost\n> port=5490 dbname=postgres';\n> ALTER SUBSCRIPTION\n> postgres=#\n> \n> postgres=# drop subscription sub1;\n> ERROR: could not connect to publisher when attempting to drop\n> replication slot \"sub1\": connection to server at \"localhost\" (::1),\n> port 5490 failed: Connection refused\n> Is the server running on that host and accepting TCP/IP connections?\n> connection to server at \"localhost\" (127.0.0.1), port 5490 failed:\n> Connection refused\n> Is the server running on that host and accepting TCP/IP connections?\n> HINT: Use ALTER SUBSCRIPTION ... SET (slot_name = NONE) to\n> disassociate the subscription from the slot.\nAgain, dropping a subscription that is associated with a replication slot\nrequires a connection to remove the replication slot. If the publisher is gone\n(and so the replication slot), follow the HINT advice.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Wed, 09 Feb 2022 15:23:06 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Wed, Feb 9, 2022 at 11:53 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Wed, Feb 9, 2022, at 12:06 PM, Ashutosh Sharma wrote:\n>\n> Just wondering if we should also be detecting the incorrect conninfo\n> set with ALTER SUBSCRIPTION command as well. See below:\n>\n> -- try creating a subscription with incorrect conninfo. the command fails.\n> postgres=# create subscription sub1 connection 'host=localhost\n> port=5490 dbname=postgres' publication pub1;\n> ERROR: could not connect to the publisher: connection to server at\n> \"localhost\" (::1), port 5490 failed: Connection refused\n> Is the server running on that host and accepting TCP/IP connections?\n> connection to server at \"localhost\" (127.0.0.1), port 5490 failed:\n> Connection refused\n> Is the server running on that host and accepting TCP/IP connections?\n>\n> That's because by default 'connect' parameter is true.\n>\n\nSo can we use this option with the ALTER SUBSCRIPTION command. I think\nwe can't, which means if the user sets wrong conninfo using ALTER\nSUBSCRIPTION command then we don't have the option to detect it like\nwe have in case of CREATE SUBSCRIPTION command. Since this thread is\ntrying to add the ability to identify the wrong/missing publication\nname specified with the ALTER SUBSCRIPTION command, can't we do the\nsame for the wrong conninfo?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:15:16 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription."
}, { "msg_contents": "I have spent little time looking at the latest patch. The patch looks\nto be in good shape as it has already been reviewed by many people\nhere, although I did get some comments. Please take a look and let me\nknow your thoughts.\n\n\n+ /* Try to connect to the publisher. */\n+ wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n+ if (!wrconn)\n+ ereport(ERROR,\n+ (errmsg(\"could not connect to the publisher: %s\", err)));\n\nI think it would be good to also include the errcode\n(ERRCODE_CONNECTION_FAILURE) here?\n\n--\n\n@@ -514,6 +671,8 @@ CreateSubscription(ParseState *pstate,\nCreateSubscriptionStmt *stmt,\n\n PG_TRY();\n {\n+ check_publications(wrconn, publications, opts.validate_publication);\n+\n\n\nInstead of passing the opts.validate_publication argument to\ncheck_publication function, why can't we first check if this option is\nset or not and accordingly call check_publication function? For other\noptions I see that it has been done in the similar way for e.g. check\nfor opts.connect or opts.refresh or opts.enabled etc.\n\n--\n\nAbove comment also applies for:\n\n@@ -968,6 +1130,8 @@ AlterSubscription(ParseState *pstate,\nAlterSubscriptionStmt *stmt,\n replaces[Anum_pg_subscription_subpublications - 1] = true;\n\n update_tuple = true;\n+ connect_and_check_pubs(sub, stmt->publication,\n+ opts.validate_publication);\n\n\n--\n\n+ <para>\n+ When true, the command verifies if all the specified publications\n+ that are being subscribed to are present in the publisher and throws\n+ an error if any of the publication doesn't exist. 
The default is\n+ <literal>false</literal>.\n\npublication -> publications (in the 4th line : throw an error if any\nof the publication doesn't exist)\n\nThis applies for both CREATE and ALTER subscription commands.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Sat, Nov 13, 2021 at 12:50 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Nov 10, 2021 at 11:16 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Tue, Nov 9, 2021 at 9:27 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > Attached v12 version is rebased on top of Head.\n> >\n> > Thanks for the patch. Here are some comments on v12:\n> >\n> > 1) I think ERRCODE_TOO_MANY_ARGUMENTS isn't the right error code, the\n> > ERRCODE_UNDEFINED_OBJECT is more meaningful. Please change.\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_TOO_MANY_ARGUMENTS),\n> > + errmsg_plural(\"publication %s does not exist in the publisher\",\n> > + \"publications %s do not exist in the publisher\",\n> >\n> > The existing code using\n> > ereport(ERROR,\n> > (errcode(ERRCODE_UNDEFINED_OBJECT),\n> > errmsg(\"subscription \\\"%s\\\" does not exist\", subname)));\n>\n> Modified\n>\n> > 2) Typo: It is \"One of the specified publications exists.\"\n> > +# One of the specified publication exist.\n>\n> Modified\n>\n> > 3) I think we can remove the test case \"+# Specified publication does\n> > not exist.\" because the \"+# One of the specified publication exist.\"\n> > covers the code.\n>\n> Modified\n>\n> > 4) Do we need the below test case? Even with refresh = false, it does\n> > call connect_and_check_pubs() right? 
Please remove it.\n> > +# Specified publication does not exist with refresh = false.\n> > +($ret, $stdout, $stderr) = $node_subscriber->psql('postgres',\n> > + \"ALTER SUBSCRIPTION mysub1 SET PUBLICATION non_existent_pub\n> > WITH (REFRESH = FALSE, VALIDATE_PUBLICATION = TRUE)\"\n> > +);\n> > +ok( $stderr =~\n> > + m/ERROR: publication \"non_existent_pub\" does not exist in\n> > the publisher/,\n> > + \"Alter subscription for non existent publication fails\");\n> > +\n>\n> Modified\n>\n> > 5) Change the test case names to different ones instead of the same.\n> > Have something like:\n> > \"Create subscription fails with single non-existent publication\");\n> > \"Create subscription fails with multiple non-existent publications\");\n> > \"Create subscription fails with mutually exclusive options\");\n> > \"Alter subscription add publication fails with non-existent publication\");\n> > \"Alter subscription set publication fails with non-existent publication\");\n> > \"Alter subscription set publication fails with connection to a\n> > non-existent database\");\n>\n> Modified\n>\n> Thanks for the comments, the attached v13 patch has the fixes for the same.\n>\n> Regards,\n> Vignesh\n\n\n", "msg_date": "Fri, 11 Feb 2022 19:13:58 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Thu, Feb 10, 2022 at 3:15 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Wed, Feb 9, 2022 at 11:53 PM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Wed, Feb 9, 2022, at 12:06 PM, Ashutosh Sharma wrote:\n> >\n> > Just wondering if we should also be detecting the incorrect conninfo\n> > set with ALTER SUBSCRIPTION command as well. See below:\n> >\n> > -- try creating a subscription with incorrect conninfo. 
the command fails.\n> > postgres=# create subscription sub1 connection 'host=localhost\n> > port=5490 dbname=postgres' publication pub1;\n> > ERROR: could not connect to the publisher: connection to server at\n> > \"localhost\" (::1), port 5490 failed: Connection refused\n> > Is the server running on that host and accepting TCP/IP connections?\n> > connection to server at \"localhost\" (127.0.0.1), port 5490 failed:\n> > Connection refused\n> > Is the server running on that host and accepting TCP/IP connections?\n> >\n> > That's because by default 'connect' parameter is true.\n> >\n>\n> So can we use this option with the ALTER SUBSCRIPTION command. I think\n> we can't, which means if the user sets wrong conninfo using ALTER\n> SUBSCRIPTION command then we don't have the option to detect it like\n> we have in case of CREATE SUBSCRIPTION command. Since this thread is\n> trying to add the ability to identify the wrong/missing publication\n> name specified with the ALTER SUBSCRIPTION command, can't we do the\n> same for the wrong conninfo?\n\nI felt this can be extended once this feature is committed. Thoughts?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 13 Feb 2022 19:32:30 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Fri, Feb 11, 2022 at 7:14 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> I have spent little time looking at the latest patch. The patch looks\n> to be in good shape as it has already been reviewed by many people\n> here, although I did get some comments. Please take a look and let me\n> know your thoughts.\n>\n>\n> + /* Try to connect to the publisher. 
*/\n> + wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n> + if (!wrconn)\n> + ereport(ERROR,\n> + (errmsg(\"could not connect to the publisher: %s\", err)));\n>\n> I think it would be good to also include the errcode\n> (ERRCODE_CONNECTION_FAILURE) here?\n\nModified\n\n> --\n>\n> @@ -514,6 +671,8 @@ CreateSubscription(ParseState *pstate,\n> CreateSubscriptionStmt *stmt,\n>\n> PG_TRY();\n> {\n> + check_publications(wrconn, publications, opts.validate_publication);\n> +\n>\n>\n> Instead of passing the opts.validate_publication argument to\n> check_publication function, why can't we first check if this option is\n> set or not and accordingly call check_publication function? For other\n> options I see that it has been done in the similar way for e.g. check\n> for opts.connect or opts.refresh or opts.enabled etc.\n\nModified\n\n> --\n>\n> Above comment also applies for:\n>\n> @@ -968,6 +1130,8 @@ AlterSubscription(ParseState *pstate,\n> AlterSubscriptionStmt *stmt,\n> replaces[Anum_pg_subscription_subpublications - 1] = true;\n>\n> update_tuple = true;\n> + connect_and_check_pubs(sub, stmt->publication,\n> + opts.validate_publication);\n>\n\nModified\n\n> --\n>\n> + <para>\n> + When true, the command verifies if all the specified publications\n> + that are being subscribed to are present in the publisher and throws\n> + an error if any of the publication doesn't exist. The default is\n> + <literal>false</literal>.\n>\n> publication -> publications (in the 4th line : throw an error if any\n> of the publication doesn't exist)\n>\n> This applies for both CREATE and ALTER subscription commands.\n\nModified\n\nThanks for the comments, the attached v14 patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Sun, 13 Feb 2022 19:34:05 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription."
}, { "msg_contents": "On Sun, Feb 13, 2022 at 7:32 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Feb 10, 2022 at 3:15 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > On Wed, Feb 9, 2022 at 11:53 PM Euler Taveira <euler@eulerto.com> wrote:\n> > >\n> > > On Wed, Feb 9, 2022, at 12:06 PM, Ashutosh Sharma wrote:\n> > >\n> > > Just wondering if we should also be detecting the incorrect conninfo\n> > > set with ALTER SUBSCRIPTION command as well. See below:\n> > >\n> > > -- try creating a subscription with incorrect conninfo. the command fails.\n> > > postgres=# create subscription sub1 connection 'host=localhost\n> > > port=5490 dbname=postgres' publication pub1;\n> > > ERROR: could not connect to the publisher: connection to server at\n> > > \"localhost\" (::1), port 5490 failed: Connection refused\n> > > Is the server running on that host and accepting TCP/IP connections?\n> > > connection to server at \"localhost\" (127.0.0.1), port 5490 failed:\n> > > Connection refused\n> > > Is the server running on that host and accepting TCP/IP connections?\n> > >\n> > > That's because by default 'connect' parameter is true.\n> > >\n> >\n> > So can we use this option with the ALTER SUBSCRIPTION command. I think\n> > we can't, which means if the user sets wrong conninfo using ALTER\n> > SUBSCRIPTION command then we don't have the option to detect it like\n> > we have in case of CREATE SUBSCRIPTION command. Since this thread is\n> > trying to add the ability to identify the wrong/missing publication\n> > name specified with the ALTER SUBSCRIPTION command, can't we do the\n> > same for the wrong conninfo?\n>\n> I felt this can be extended once this feature is committed. Thoughts?\n>\n\nI think that should be okay. 
I just wanted to share with you people to\nknow if it can be taken care of in this patch itself but it's ok if we\nsee it later.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Mon, 14 Feb 2022 20:23:21 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "Thanks for working on my review comments. I'll take a look at the new\nchanges and let you know my comments, if any. I didn't get a chance to\ncheck it out today as I was busy reviewing some other patches, but\nI'll definitely take a look at the new patch in a day or so and let\nyou know my feedback.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Sun, Feb 13, 2022 at 7:34 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Feb 11, 2022 at 7:14 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > I have spent little time looking at the latest patch. The patch looks\n> > to be in good shape as it has already been reviewed by many people\n> > here, although I did get some comments. Please take a look and let me\n> > know your thoughts.\n> >\n> >\n> > + /* Try to connect to the publisher. */\n> > + wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n> > + if (!wrconn)\n> > + ereport(ERROR,\n> > + (errmsg(\"could not connect to the publisher: %s\", err)));\n> >\n> > I think it would be good to also include the errcode\n> > (ERRCODE_CONNECTION_FAILURE) here?\n>\n> Modified\n>\n> > --\n> >\n> > @@ -514,6 +671,8 @@ CreateSubscription(ParseState *pstate,\n> > CreateSubscriptionStmt *stmt,\n> >\n> > PG_TRY();\n> > {\n> > + check_publications(wrconn, publications, opts.validate_publication);\n> > +\n> >\n> >\n> > Instead of passing the opts.validate_publication argument to\n> > check_publication function, why can't we first check if this option is\n> > set or not and accordingly call check_publication function? 
For other\n> > options I see that it has been done in the similar way for e.g. check\n> > for opts.connect or opts.refresh or opts.enabled etc.\n>\n> Modified\n>\n> > --\n> >\n> > Above comment also applies for:\n> >\n> > @@ -968,6 +1130,8 @@ AlterSubscription(ParseState *pstate,\n> > AlterSubscriptionStmt *stmt,\n> > replaces[Anum_pg_subscription_subpublications - 1] = true;\n> >\n> > update_tuple = true;\n> > + connect_and_check_pubs(sub, stmt->publication,\n> > + opts.validate_publication);\n> >\n>\n> Modified\n>\n> > --\n> >\n> > + <para>\n> > + When true, the command verifies if all the specified publications\n> > + that are being subscribed to are present in the publisher and throws\n> > + an error if any of the publication doesn't exist. The default is\n> > + <literal>false</literal>.\n> >\n> > publication -> publications (in the 4th line : throw an error if any\n> > of the publication doesn't exist)\n> >\n> > This applies for both CREATE and ALTER subscription commands.\n>\n> Modified\n>\n> Thanks for the comments, the attached v14 patch has the changes for the same.\n>\n> Regard,s\n> Vignesh\n\n\n", "msg_date": "Mon, 14 Feb 2022 20:26:49 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "Thanks for working on the review comments. The changes in the new\npatch look good to me. I am marking it as ready to commit.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Sun, Feb 13, 2022 at 7:34 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Feb 11, 2022 at 7:14 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > I have spent little time looking at the latest patch. The patch looks\n> > to be in good shape as it has already been reviewed by many people\n> > here, although I did get some comments. 
Please take a look and let me\n> > know your thoughts.\n> >\n> >\n> > + /* Try to connect to the publisher. */\n> > + wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n> > + if (!wrconn)\n> > + ereport(ERROR,\n> > + (errmsg(\"could not connect to the publisher: %s\", err)));\n> >\n> > I think it would be good to also include the errcode\n> > (ERRCODE_CONNECTION_FAILURE) here?\n>\n> Modified\n>\n> > --\n> >\n> > @@ -514,6 +671,8 @@ CreateSubscription(ParseState *pstate,\n> > CreateSubscriptionStmt *stmt,\n> >\n> > PG_TRY();\n> > {\n> > + check_publications(wrconn, publications, opts.validate_publication);\n> > +\n> >\n> >\n> > Instead of passing the opts.validate_publication argument to\n> > check_publication function, why can't we first check if this option is\n> > set or not and accordingly call check_publication function? For other\n> > options I see that it has been done in the similar way for e.g. check\n> > for opts.connect or opts.refresh or opts.enabled etc.\n>\n> Modified\n>\n> > --\n> >\n> > Above comment also applies for:\n> >\n> > @@ -968,6 +1130,8 @@ AlterSubscription(ParseState *pstate,\n> > AlterSubscriptionStmt *stmt,\n> > replaces[Anum_pg_subscription_subpublications - 1] = true;\n> >\n> > update_tuple = true;\n> > + connect_and_check_pubs(sub, stmt->publication,\n> > + opts.validate_publication);\n> >\n>\n> Modified\n>\n> > --\n> >\n> > + <para>\n> > + When true, the command verifies if all the specified publications\n> > + that are being subscribed to are present in the publisher and throws\n> > + an error if any of the publication doesn't exist. 
The default is\n> > + <literal>false</literal>.\n> >\n> > publication -> publications (in the 4th line : throw an error if any\n> > of the publication doesn't exist)\n> >\n> > This applies for both CREATE and ALTER subscription commands.\n>\n> Modified\n>\n> Thanks for the comments, the attached v14 patch has the changes for the same.\n>\n> Regard,s\n> Vignesh\n\n\n", "msg_date": "Tue, 15 Feb 2022 20:36:48 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On 2022-02-13 19:34:05 +0530, vignesh C wrote:\n> Thanks for the comments, the attached v14 patch has the changes for the same.\n\nThe patch needs a rebase, it currently fails to apply:\nhttp://cfbot.cputube.org/patch_37_2957.log\n\n\n", "msg_date": "Mon, 21 Mar 2022 16:59:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Tue, Mar 22, 2022 at 5:29 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-02-13 19:34:05 +0530, vignesh C wrote:\n> > Thanks for the comments, the attached v14 patch has the changes for the same.\n>\n> The patch needs a rebase, it currently fails to apply:\n> http://cfbot.cputube.org/patch_37_2957.log\n\nThe attached v15 patch is rebased on top of HEAD.\n\nRegards,\nVignesh", "msg_date": "Tue, 22 Mar 2022 15:23:20 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Tue, Mar 22, 2022 at 3:23 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Mar 22, 2022 at 5:29 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2022-02-13 19:34:05 +0530, vignesh C wrote:\n> > > Thanks for the comments, the attached v14 patch has the changes for the same.\n> >\n> > The patch needs a rebase, it currently fails to apply:\n> > http://cfbot.cputube.org/patch_37_2957.log\n\nThe patch was not applying on HEAD, attached patch which is rebased on\ntop of HEAD.\n\nRegards,\nVignesh", "msg_date": "Sat, 26 Mar 2022 19:52:36 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Sat, Mar 26, 2022 at 7:53 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> The patch was not applying on HEAD, attached patch which is rebased on\n> top of HEAD.\n>\n\nIIUC, this patch provides an option that allows us to give an error if\nwhile creating/altering subscription, user gives non-existent\npublications. I am not sure how useful it is to add such behavior via\nan option especially when we know that it can occur in some other ways\nlike after creating the subscription, users can independently drop\npublication from publisher. 
I think it could be useful to provide\nadditional information here but it would be better if we can follow\nEuler's suggestion [1] in the thread where he suggested issuing a\nWARNING if the publications don't exist and document that the\nsubscription catalog can have non-existent publications.\n\nI think we should avoid adding new options unless they are really\nrequired and useful.\n\n[1] - https://www.postgresql.org/message-id/a2f2fba6-40dd-44cc-b40e-58196bb77f1c%40www.fastmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 29 Mar 2022 11:01:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Tue, Mar 29, 2022 at 11:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Mar 26, 2022 at 7:53 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > The patch was not applying on HEAD, attached patch which is rebased on\n> > top of HEAD.\n> >\n>\n> IIUC, this patch provides an option that allows us to give an error if\n> while creating/altering subcsiction, user gives non-existant\n> publications. I am not sure how useful it is to add such behavior via\n> an option especially when we know that it can occur in some other ways\n> like after creating the subscription, users can independently drop\n> publication from publisher. I think it could be useful to provide\n> additional information here but it would be better if we can follow\n> Euler's suggestion [1] in the thread where he suggested issuing a\n> WARNING if the publications don't exist and document that the\n> subscription catalog can have non-existent publications.\n>\n> I think we should avoid adding new options unless they are really\n> required and useful.\n>\n\n*\n+connect_and_check_pubs(Subscription *sub, List *publications)\n+{\n+ char *err;\n+ WalReceiverConn *wrconn;\n+\n+ /* Load the library providing us libpq calls. 
*/\n+ load_file(\"libpqwalreceiver\", false);\n+\n+ /* Try to connect to the publisher. */\n+ wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n+ if (!wrconn)\n+ ereport(ERROR,\n+ errcode(ERRCODE_CONNECTION_FAILURE),\n+ errmsg(\"could not connect to the publisher: %s\", err));\n\nI think it won't be a good idea to add new failure modes in existing\ncommands especially if we decide to make it non-optional. I think we\ncan do this check only in case we are already connecting to the\npublisher.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 29 Mar 2022 16:12:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Tue, Mar 29, 2022 at 11:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Mar 26, 2022 at 7:53 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > The patch was not applying on HEAD, attached patch which is rebased on\n> > top of HEAD.\n> >\n>\n> IIUC, this patch provides an option that allows us to give an error if\n> while creating/altering subcsiction, user gives non-existant\n> publications. I am not sure how useful it is to add such behavior via\n> an option especially when we know that it can occur in some other ways\n> like after creating the subscription, users can independently drop\n> publication from publisher. 
I think it could be useful to provide\n> additional information here but it would be better if we can follow\n> Euler's suggestion [1] in the thread where he suggested issuing a\n> WARNING if the publications don't exist and document that the\n> subscription catalog can have non-existent publications.\n>\n> I think we should avoid adding new options unless they are really\n> required and useful.\n>\n> [1] - https://www.postgresql.org/message-id/a2f2fba6-40dd-44cc-b40e-58196bb77f1c%40www.fastmail.com\n\nThanks for the suggestion, I have changed the patch as suggested.\nAttached v16 patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Tue, 29 Mar 2022 20:11:18 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Tue, Mar 29, 2022 at 4:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 29, 2022 at 11:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Mar 26, 2022 at 7:53 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > The patch was not applying on HEAD, attached patch which is rebased on\n> > > top of HEAD.\n> > >\n> >\n> > IIUC, this patch provides an option that allows us to give an error if\n> > while creating/altering subcsiction, user gives non-existant\n> > publications. I am not sure how useful it is to add such behavior via\n> > an option especially when we know that it can occur in some other ways\n> > like after creating the subscription, users can independently drop\n> > publication from publisher. 
I think it could be useful to provide\n> additional information here but it would be better if we can follow\n> Euler's suggestion [1] in the thread where he suggested issuing a\n> WARNING if the publications don't exist and document that the\n> subscription catalog can have non-existent publications.\n>\n> I think we should avoid adding new options unless they are really\n> required and useful.\n>\n> [1] - https://www.postgresql.org/message-id/a2f2fba6-40dd-44cc-b40e-58196bb77f1c%40www.fastmail.com\n\nThanks for the suggestion, I have changed the patch as suggested.\nAttached v16 patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Tue, 29 Mar 2022 20:11:18 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Tue, Mar 29, 2022 at 4:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 29, 2022 at 11:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Mar 26, 2022 at 7:53 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > The patch was not applying on HEAD, attached patch which is rebased on\n> > > top of HEAD.\n> > >\n> >\n> > IIUC, this patch provides an option that allows us to give an error if\n> > while creating/altering subscription, user gives non-existent\n> > publications. I am not sure how useful it is to add such behavior via\n> > an option especially when we know that it can occur in some other ways\n> > like after creating the subscription, users can independently drop\n> > publication from publisher. 
}, { "msg_contents": "On Tue, Mar 29, 2022 at 8:11 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Mar 29, 2022 at 11:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> Thanks for the suggestion, I have changed the patch as suggested.\n> Attached v16 patch has the changes for the same.\n>\n\nThanks, I have one more comment.\n\npostgres=# Alter subscription sub1 add publication pub4;\nWARNING: publications \"pub2\", \"pub4\" do not exist in the publisher\nALTER SUBSCRIPTION\n\nThis gives additional publication in WARNING message which was not\npart of current command but is present from the earlier time.\n\npostgres=# Alter Subscription sub1 set publication pub5;\nWARNING: publication \"pub5\" does not exist in the publisher\nALTER SUBSCRIPTION\n\nSET variant doesn't give such a problem.\n\nI feel we should be consistent here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 30 Mar 2022 11:22:30 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Wed, Mar 30, 2022 at 11:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 29, 2022 at 8:11 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Mar 29, 2022 at 11:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > Thanks for the suggestion, I have changed the patch as suggested.\n> > Attached v16 patch has the changes for the same.\n> >\n>\n> Thanks, I have one more comment.\n>\n> postgres=# Alter subscription sub1 add publication pub4;\n> WARNING: publications \"pub2\", \"pub4\" do not exist in the publisher\n> ALTER SUBSCRIPTION\n>\n> This gives additional publication in WARNING message which was not\n> part of current command but is present from the earlier time.\n>\n> postgres=# Alter Subscription sub1 set publication pub5;\n> WARNING: publication \"pub5\" does not exist in the publisher\n> ALTER SUBSCRIPTION\n>\n> SET variant doesn't give such a problem.\n>\n> I feel we should be consistent here.\n\nI have made the changes for this, attached v17 patch has the changes\nfor the same.\n\nRegards,\nVignesh", "msg_date": "Wed, 30 Mar 2022 12:21:51 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." }, { "msg_contents": "On Wed, Mar 30, 2022 at 12:22 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> I have made the changes for this, attached v17 patch has the changes\n> for the same.\n>\n\nThe patch looks good to me. I have made minor edits in the comments\nand docs. See the attached and let me know what you think? I intend to\ncommit this tomorrow unless there are more comments or suggestions.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 30 Mar 2022 16:29:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Wed, Mar 30, 2022 at 4:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 30, 2022 at 12:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > I have made the changes for this, attached v17 patch has the changes\n> > for the same.\n> >\n>\n> The patch looks good to me. I have made minor edits in the comments\n> and docs. See the attached and let me know what you think? I intend to\n> commit this tomorrow unless there are more comments or suggestions.\n\nI have one minor comment:\n\n+ \"Create subscription throws warning for multiple non-existent publications\");\n+$node_subscriber->safe_psql('postgres', \"DROP SUBSCRIPTION mysub1;\");\n+ \"CREATE SUBSCRIPTION mysub1 CONNECTION '$publisher_connstr'\nPUBLICATION mypub;\"\n+ \"ALTER SUBSCRIPTION mysub1 ADD PUBLICATION non_existent_pub\"\n+ \"ALTER SUBSCRIPTION mysub1 SET PUBLICATION non_existent_pub\"\n\nWhy should we drop the subscription mysub1 and create it for ALTER ..\nADD and ALTER .. SET tests? Can't we just do below which saves\nunnecessary subscription creation, drop, wait_for_catchup and\npoll_query_until?\n\n+ \"Create subscription throws warning for multiple non-existent publications\");\n+ \"ALTER SUBSCRIPTION mysub1 ADD PUBLICATION non_existent_pub2\"\n+ \"ALTER SUBSCRIPTION mysub1 SET PUBLICATION non_existent_pub3\"\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 30 Mar 2022 17:37:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Wed, Mar 30, 2022 at 5:37 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Mar 30, 2022 at 4:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 30, 2022 at 12:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > I have made the changes for this, attached v17 patch has the changes\n> > > for the same.\n> > >\n> >\n> > The patch looks good to me. I have made minor edits in the comments\n> > and docs. See the attached and let me know what you think? I intend to\n> > commit this tomorrow unless there are more comments or suggestions.\n>\n> I have one minor comment:\n>\n> + \"Create subscription throws warning for multiple non-existent publications\");\n> +$node_subscriber->safe_psql('postgres', \"DROP SUBSCRIPTION mysub1;\");\n> + \"CREATE SUBSCRIPTION mysub1 CONNECTION '$publisher_connstr'\n> PUBLICATION mypub;\"\n> + \"ALTER SUBSCRIPTION mysub1 ADD PUBLICATION non_existent_pub\"\n> + \"ALTER SUBSCRIPTION mysub1 SET PUBLICATION non_existent_pub\"\n>\n> Why should we drop the subscription mysub1 and create it for ALTER ..\n> ADD and ALTER .. SET tests? 
Can't we just do below which saves\n> unnecessary subscription creation, drop, wait_for_catchup and\n> poll_query_until?\n>\n> + \"Create subscription throws warning for multiple non-existent publications\");\n> + \"ALTER SUBSCRIPTION mysub1 ADD PUBLICATION non_existent_pub2\"\n> + \"ALTER SUBSCRIPTION mysub1 SET PUBLICATION non_existent_pub3\"\n\nOr I would even simplify the entire tests as follows:\n\n+ \"CREATE SUBSCRIPTION mysub1 CONNECTION '$publisher_connstr'\nPUBLICATION mypub, non_existent_pub1\"\n+ \"Create subscription throws warning for non-existent publication\");\n>> no drop of mysub1 >>\n+ \"CREATE SUBSCRIPTION mysub2 CONNECTION '$publisher_connstr'\nPUBLICATION non_existent_pub1, non_existent_pub2\"\n+ \"Create subscription throws warning for multiple non-existent publications\");\n>> no drop of mysub2 >>\n+ \"ALTER SUBSCRIPTION mysub2 ADD PUBLICATION non_existent_pub3\"\n+ \"Alter subscription add publication throws warning for non-existent\npublication\");\n+ \"ALTER SUBSCRIPTION mysub2 SET PUBLICATION non_existent_pub4\"\n+ \"Alter subscription set publication throws warning for non-existent\npublication\");\n $node_subscriber->stop;\n $node_publisher->stop;\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 30 Mar 2022 17:42:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Wed, Mar 30, 2022 at 5:42 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Mar 30, 2022 at 5:37 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Mar 30, 2022 at 4:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Mar 30, 2022 at 12:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > I have made the changes for this, attached v17 patch has the changes\n> > > > for the same.\n> > > >\n> > >\n> > > The patch looks good to me. I have made minor edits in the comments\n> > > and docs. See the attached and let me know what you think? I intend to\n> > > commit this tomorrow unless there are more comments or suggestions.\n> >\n> > I have one minor comment:\n> >\n> > + \"Create subscription throws warning for multiple non-existent publications\");\n> > +$node_subscriber->safe_psql('postgres', \"DROP SUBSCRIPTION mysub1;\");\n> > + \"CREATE SUBSCRIPTION mysub1 CONNECTION '$publisher_connstr'\n> > PUBLICATION mypub;\"\n> > + \"ALTER SUBSCRIPTION mysub1 ADD PUBLICATION non_existent_pub\"\n> > + \"ALTER SUBSCRIPTION mysub1 SET PUBLICATION non_existent_pub\"\n> >\n> > Why should we drop the subscription mysub1 and create it for ALTER ..\n> > ADD and ALTER .. SET tests? 
Can't we just do below which saves\n> > unnecessary subscription creation, drop, wait_for_catchup and\n> > poll_query_until?\n> >\n> > + \\\"Create subscription throws warning for multiple non-existent publications\\\");\n> > + \\\"ALTER SUBSCRIPTION mysub1 ADD PUBLICATION non_existent_pub2\\\"\n> > + \\\"ALTER SUBSCRIPTION mysub1 SET PUBLICATION non_existent_pub3\\\"\n>\n> Or I would even simplify the entire tests as follows:\n>\n> + \\\"CREATE SUBSCRIPTION mysub1 CONNECTION '$publisher_connstr'\n> PUBLICATION mypub, non_existent_pub1\\\"\n> + \\\"Create subscription throws warning for non-existent publication\\\");\n> >> no drop of mysub1 >>\n> + \\\"CREATE SUBSCRIPTION mysub2 CONNECTION '$publisher_connstr'\n> PUBLICATION non_existent_pub1, non_existent_pub2\\\"\n> + \\\"Create subscription throws warning for multiple non-existent publications\\\");\n> >> no drop of mysub2 >>\n> + \\\"ALTER SUBSCRIPTION mysub2 ADD PUBLICATION non_existent_pub3\\\"\n> + \\\"Alter subscription add publication throws warning for non-existent\n> publication\\\");\n> + \\\"ALTER SUBSCRIPTION mysub2 SET PUBLICATION non_existent_pub4\\\"\n> + \\\"Alter subscription set publication throws warning for non-existent\n> publication\\\");\n> $node_subscriber->stop;\n> $node_publisher->stop;\n\nYour suggestion looks valid, I have modified it as suggested.\nAdditionally I have removed Create subscription with multiple\nnon-existent publications and changed add publication with single\nnon-existent publication to add publication with multiple non-existent\npublications to cover the multiple non-existent publications path.\nAttached v19 patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Wed, 30 Mar 2022 21:54:00 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." 
}, { "msg_contents": "On Wed, Mar 30, 2022 at 9:54 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Mar 30, 2022 at 5:42 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Your suggestion looks valid, I have modified it as suggested.\n> Additionally I have removed Create subscription with multiple\n> non-existent publications and changed add publication with sing\n> non-existent publication to add publication with multiple non-existent\n> publications to cover the multiple non-existent publications path.\n> Attached v19 patch has the changes for the same.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 31 Mar 2022 15:15:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identify missing publications from publisher while create/alter\n subscription." } ]
[ { "msg_contents": "Hi,\n\nRecently I was trying to copy some of the data of one database to\nanother through postgres_fdw, and found that it wouldn't import that\npartition through IMPORT FOREIGN SCHEMA, even when I explicitly\nspecified the name of the table that contained the data in the LIMIT\nTO clause.\n\nI realised the reason is that currently, postgres_fdw explicitly\ndisallows importing foreign partitions. This is a reasonable default\nwhen importing a whole schema, but if I wanted to explicitly import\none of a partitioned tables' partitions, that would now require me to\nmanually copy the foreign table's definitions through the use of\nCREATE FOREIGN TABLE, which is a hassle and prone to mistakes.\n\nAs such, I propose the attached patch, in which the 'no\npartitions'-restriction of postgres_fdw is lifted for the LIMIT TO\nclause. This has several benefits, including not holding locks on the\nforeign root partition during queries, and less suprising behaviour\nfor LIMIT TO (\"table that happens to be a partition\").\n\nRegards,\n\nMatthias van de Meent", "msg_date": "Thu, 21 Jan 2021 15:56:07 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "postgres_fdw: IMPORT FOREIGN SCHEMA ... LIMIT TO (partition)" }, { "msg_contents": "Am Donnerstag, dem 21.01.2021 um 15:56 +0100 schrieb Matthias van de\nMeent:\n> Hi,\n> \n> Recently I was trying to copy some of the data of one database to\n> another through postgres_fdw, and found that it wouldn't import that\n> partition through IMPORT FOREIGN SCHEMA, even when I explicitly\n> specified the name of the table that contained the data in the LIMIT\n> TO clause.\n> \n> I realised the reason is that currently, postgres_fdw explicitly\n> disallows importing foreign partitions. 
This is a reasonable default\n> when importing a whole schema, but if I wanted to explicitly import\n> one of a partitioned tables' partitions, that would now require me to\n> manually copy the foreign table's definitions through the use of\n> CREATE FOREIGN TABLE, which is a hassle and prone to mistakes.\n> \n\nHi,\n\nI took a look at this patch.\n\nPatch applies on current master.\n\nDocumentation and adjusted regression tests included.\nRegression tests pass without errors.\n\nThe patch changes IMPORT FOREIGN SCHEMA to explicitly allow partition\nchild tables in the LIMIT TO clause of the IMPORT FOREIGN SCHEMA\ncommand by relaxing the checks introduced with commit [1]. The reasons\nbehind [1] are discussed in [2].\n\nSo the original behavior this patch wants to address was done\nintentionally, so what needs to be discussed here is whether we want to\nrelax that a little. One argument for the original behavior since then\nwas that it is cleaner to just automatically import the parent, which\nallows access to the children through the foreign table anyway and\nexclude partition children when querying pg_class.\n\nI haven't seen demand for the implemented feature here myself, but I\ncould imagine use cases where just a single child or a set of child\ntables are candidates. 
LIMIT TO (partition)" }, { "msg_contents": "On Mon, 22 Mar 2021 at 21:16, Bernd Helmle <mailings@oopsware.de> wrote:\n>\n> Hi,\n>\n> I took a look at this patch.\n\nThanks!\n\n> Patch applies on current master.\n>\n> Documentation and adjusted regression tests included.\n> Regression tests passes without errors.\n>\n> The patch changes IMPORT FOREIGN SCHEMA to explicitely allow partition\n> child tables in the LIMIT TO clause of the IMPORT FOREIGN SCHEMA\n> command by relaxing the checks introduced with commit [1]. The reason\n> behind [1] are discussed in [2].\n\nI should've included potentially interested parties earlier, but never\ntoo late. Stephen, Michael, Amit, would you have an opinion on lifting\nthis restriction for the LIMIT TO clause, seeing your involvement in\nthe implementation of removing partitions from IFS?\n\n> So the original behavior this patch wants to address was done\n> intentionally, so what needs to be discussed here is whether we want to\n> relax that a little. One argument for the original behavior since then\n> was that it is cleaner to just automatically import the parent, which\n> allows access to the childs through the foreign table anways and\n> exclude partition childs when querying pg_class.\n\nYes, but it should be noted that the main reason that was mentioned as\na reason to exclude partitions is to not cause table catalog bloat,\nand I argue that this argument is not as solid in the case of the\nexplicitly named tables of the LIMIT TO clause. Except if SQL standard\nprescribes otherwise, I think allowing partitions in LIMIT TO clauses\nis an improvement overall.\n\n> I haven't seen demand for the implemented feature here myself, but i\n> could imagine use cases where just a single child or a set of child\n> tables are candidates. 
For example, i think it's possible that users\n> can query only specific childs and want them to have imported on\n> another foreign server.\n\nI myself have had this need, in that I've had to import some\npartitions manually as a result of this limitation. IMPORT FORAIGN\nSCHEMA really is great when it works, but limitations like these are\ncrippling for some more specific use cases (e.g. allowing\nlong-duration read-only access to one partition in the partition tree\nwhile also allowing the partition layout of the parents to be\nmodified).\n\n\nWith regards,\n\nMatthias.\n\n\n", "msg_date": "Wed, 24 Mar 2021 13:23:42 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: IMPORT FOREIGN SCHEMA ... LIMIT TO (partition)" }, { "msg_contents": "Am Mittwoch, dem 24.03.2021 um 13:23 +0100 schrieb Matthias van de\nMeent:\n> Yes, but it should be noted that the main reason that was mentioned\n> as\n> a reason to exclude partitions is to not cause table catalog bloat,\n> and I argue that this argument is not as solid in the case of the\n> explicitly named tables of the LIMIT TO clause. Except if SQL\n> standard\n> prescribes otherwise, I think allowing partitions in LIMIT TO clauses\n> is an improvement overall.\n\nDon't get me wrong, i find this useful, too. Especially because it's a\nvery minor change in the code. And i don't see negative aspects here\ncurrently, either (which doesn't mean there aren't some).\n\n> \n> I myself have had this need, in that I've had to import some\n> partitions manually as a result of this limitation. IMPORT FORAIGN\n> SCHEMA really is great when it works, but limitations like these are\n> crippling for some more specific use cases (e.g. allowing\n> long-duration read-only access to one partition in the partition tree\n> while also allowing the partition layout of the parents to be\n> modified).\n\nInteresting use case. 
\n\n\n-- \nThanks,\n\tBernd\n\n\n\n\n", "msg_date": "Wed, 24 Mar 2021 17:32:26 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: IMPORT FOREIGN SCHEMA ... LIMIT TO (partition)" }, { "msg_contents": "Am Mittwoch, dem 24.03.2021 um 17:32 +0100 schrieb Bernd Helmle:\n> > Yes, but it should be noted that the main reason that was mentioned\n> > as\n> > a reason to exclude partitions is to not cause table catalog bloat,\n> > and I argue that this argument is not as solid in the case of the\n> > explicitly named tables of the LIMIT TO clause. Except if SQL\n> > standard\n> > prescribes otherwise, I think allowing partitions in LIMIT TO\n> > clauses\n> > is an improvement overall.\n> \n> Don't get me wrong, i find this useful, too. Especially because it's\n> a\n> very minor change in the code. And i don't see negative aspects here\n> currently, either (which doesn't mean there aren't some).\n\nSince there are currently no obvious objections i've marked this \"Read\nfor Committer\".\n\n\n-- \nThanks,\n\tBernd\n\n\n\n\n", "msg_date": "Sun, 28 Mar 2021 19:39:25 +0200", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: IMPORT FOREIGN SCHEMA ... LIMIT TO (partition)" }, { "msg_contents": "\n\nOn 2021/03/29 2:39, Bernd Helmle wrote:\n> Am Mittwoch, dem 24.03.2021 um 17:32 +0100 schrieb Bernd Helmle:\n>>> Yes, but it should be noted that the main reason that was mentioned\n>>> as\n>>> a reason to exclude partitions is to not cause table catalog bloat,\n>>> and I argue that this argument is not as solid in the case of the\n>>> explicitly named tables of the LIMIT TO clause. Except if SQL\n>>> standard\n>>> prescribes otherwise, I think allowing partitions in LIMIT TO\n>>> clauses\n>>> is an improvement overall.\n>>\n>> Don't get me wrong, i find this useful, too. Especially because it's\n>> a\n>> very minor change in the code. 
And i don't see negative aspects here\n>> currently, either (which doesn't mean there aren't some).\n> \n> Since there are currently no obvious objections i've marked this \"Read\n> for Committer\".\n\nFor now I have no objection to this feature.\n\n-IMPORT FOREIGN SCHEMA import_source EXCEPT (t1, \"x 4\", nonesuch)\n+IMPORT FOREIGN SCHEMA import_source EXCEPT (t1, \"x 4\", nonesuch, t4_part)\n\nIsn't it better to create also another partition like \"t4_part2\"?\nIf we do this, for example, the above test can confirm that both\npartitions in EXCEPT and not in are excluded.\n\n+ All tables or foreign tables which are partitions of some other table\n+ are automatically excluded from <xref linkend=\"sql-importforeignschema\"/>\n+ unless they are explicitly included in the <literal>LIMIT TO</literal>\n\nIMO it's better to document that partitions are imported when they are\nincluded in LIMIT TO, instead. What about the following?\n\n Tables or foreign tables which are partitions of some other table are\n imported only when they are explicitly specified in\n <literal>LIMIT TO</literal> clause. Otherwise they are automatically\n excluded from <xref linkend=\"sql-importforeignschema\"/>.\n\n+ clause. Since all data can be accessed through the partitioned table\n+ which is the root of the partitioning hierarchy, this approach should\n+ allow access to all the data without creating extra objects.\n\nNow \"this approach\" in the above is not clear? What about replacing it with\nsomething like \"importing only partitioned tables\"?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 6 Apr 2021 08:34:25 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: IMPORT FOREIGN SCHEMA ... 
LIMIT TO (partition)" }, { "msg_contents": "Hi Matthias,\n\nOn Wed, Mar 24, 2021 at 9:23 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> On Mon, 22 Mar 2021 at 21:16, Bernd Helmle <mailings@oopsware.de> wrote:\n> > The patch changes IMPORT FOREIGN SCHEMA to explicitely allow partition\n> > child tables in the LIMIT TO clause of the IMPORT FOREIGN SCHEMA\n> > command by relaxing the checks introduced with commit [1]. The reason\n> > behind [1] are discussed in [2].\n>\n> I should've included potentially interested parties earlier, but never\n> too late. Stephen, Michael, Amit, would you have an opinion on lifting\n> this restriction for the LIMIT TO clause, seeing your involvement in\n> the implementation of removing partitions from IFS?\n\nSorry that I'm replying to this a bit late.\n\n> > So the original behavior this patch wants to address was done\n> > intentionally, so what needs to be discussed here is whether we want to\n> > relax that a little. One argument for the original behavior since then\n> > was that it is cleaner to just automatically import the parent, which\n> > allows access to the childs through the foreign table anways and\n> > exclude partition childs when querying pg_class.\n>\n> Yes, but it should be noted that the main reason that was mentioned as\n> a reason to exclude partitions is to not cause table catalog bloat,\n> and I argue that this argument is not as solid in the case of the\n> explicitly named tables of the LIMIT TO clause. Except if SQL standard\n> prescribes otherwise, I think allowing partitions in LIMIT TO clauses\n> is an improvement overall.\n>\n> > I haven't seen demand for the implemented feature here myself, but i\n> > could imagine use cases where just a single child or a set of child\n> > tables are candidates. 
For example, i think it's possible that users\n> > can query only specific childs and want them to have imported on\n> > another foreign server.\n>\n> I myself have had this need, in that I've had to import some\n> partitions manually as a result of this limitation. IMPORT FORAIGN\n> SCHEMA really is great when it works, but limitations like these are\n> crippling for some more specific use cases (e.g. allowing\n> long-duration read-only access to one partition in the partition tree\n> while also allowing the partition layout of the parents to be\n> modified).\n\nFWIW, I agree that it would be nice to have this.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Apr 2021 16:00:38 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: IMPORT FOREIGN SCHEMA ... LIMIT TO (partition)" }, { "msg_contents": "On Tue, Apr 6, 2021 at 8:34 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> For now I have no objection to this feature.\n>\n> -IMPORT FOREIGN SCHEMA import_source EXCEPT (t1, \"x 4\", nonesuch)\n> +IMPORT FOREIGN SCHEMA import_source EXCEPT (t1, \"x 4\", nonesuch, t4_part)\n>\n> Isn't it better to create also another partition like \"t4_part2\"?\n> If we do this, for example, the above test can confirm that both\n> partitions in EXCEPT and not in are excluded.\n>\n> + All tables or foreign tables which are partitions of some other table\n> + are automatically excluded from <xref linkend=\"sql-importforeignschema\"/>\n> + unless they are explicitly included in the <literal>LIMIT TO</literal>\n>\n> IMO it's better to document that partitions are imported when they are\n> included in LIMIT TO, instead. What about the following?\n>\n> Tables or foreign tables which are partitions of some other table are\n> imported only when they are explicitly specified in\n> <literal>LIMIT TO</literal> clause. 
Otherwise they are automatically\n> excluded from <xref linkend=\"sql-importforeignschema\"/>.\n>\n> + clause. Since all data can be accessed through the partitioned table\n> + which is the root of the partitioning hierarchy, this approach should\n> + allow access to all the data without creating extra objects.\n>\n> Now \"this approach\" in the above is not clear? What about replacing it with\n> something like \"importing only partitioned tables\"?\n\n+1, that wording is better.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Apr 2021 16:05:47 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: IMPORT FOREIGN SCHEMA ... LIMIT TO (partition)" }, { "msg_contents": "On 2021/04/06 16:05, Amit Langote wrote:\n> On Tue, Apr 6, 2021 at 8:34 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> For now I have no objection to this feature.\n>>\n>> -IMPORT FOREIGN SCHEMA import_source EXCEPT (t1, \"x 4\", nonesuch)\n>> +IMPORT FOREIGN SCHEMA import_source EXCEPT (t1, \"x 4\", nonesuch, t4_part)\n>>\n>> Isn't it better to create also another partition like \"t4_part2\"?\n>> If we do this, for example, the above test can confirm that both\n>> partitions in EXCEPT and not in are excluded.\n>>\n>> + All tables or foreign tables which are partitions of some other table\n>> + are automatically excluded from <xref linkend=\"sql-importforeignschema\"/>\n>> + unless they are explicitly included in the <literal>LIMIT TO</literal>\n>>\n>> IMO it's better to document that partitions are imported when they are\n>> included in LIMIT TO, instead. What about the following?\n>>\n>> Tables or foreign tables which are partitions of some other table are\n>> imported only when they are explicitly specified in\n>> <literal>LIMIT TO</literal> clause. Otherwise they are automatically\n>> excluded from <xref linkend=\"sql-importforeignschema\"/>.\n>>\n>> + clause. 
Since all data can be accessed through the partitioned table\n>> + which is the root of the partitioning hierarchy, this approach should\n>> + allow access to all the data without creating extra objects.\n>>\n>> Now \"this approach\" in the above is not clear? What about replacing it with\n>> something like \"importing only partitioned tables\"?\n> \n> +1, that wording is better.\n\nThanks! So I applied all the changes that I suggested upthread to the patch.\nI also updated the comment as follows.\n\n \t\t * Import table data for partitions only when they are explicitly\n-\t\t * specified in LIMIT TO clause. Otherwise ignore them and\n-\t\t * only include the definitions of the root partitioned tables to\n-\t\t * allow access to the complete remote data set locally in\n-\t\t * the schema imported.\n+\t\t * specified in LIMIT TO clause. Otherwise ignore them and only\n+\t\t * include the definitions of the root partitioned tables to allow\n+\t\t * access to the complete remote data set locally in the schema\n+\t\t * imported.\n\nAttached is the updated version of the patch. Barring any objection,\nI'm thinking to commit this.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 6 Apr 2021 21:29:20 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: IMPORT FOREIGN SCHEMA ... LIMIT TO (partition)" }, { "msg_contents": "On Tue, 6 Apr 2021 at 14:29, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Thanks! So I applied all the changes that I suggested upthread to the patch.\n> I also updated the comment as follows.\n>\n> * Import table data for partitions only when they are explicitly\n> - * specified in LIMIT TO clause. 
Otherwise ignore them and\n> - * only include the definitions of the root partitioned tables to\n> - * allow access to the complete remote data set locally in\n> - * the schema imported.\n> + * specified in LIMIT TO clause. Otherwise ignore them and only\n> + * include the definitions of the root partitioned tables to allow\n> + * access to the complete remote data set locally in the schema\n> + * imported.\n>\n> Attached is the updated version of the patch. Barring any objection,\n> I'm thinking to commit this.\n\nThanks, this was on my to-do list for today, but you were faster.\n\nNo objections on my part, and thanks for picking this up.\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 6 Apr 2021 14:39:54 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: IMPORT FOREIGN SCHEMA ... LIMIT TO (partition)" }, { "msg_contents": "On Tue, Apr 06, 2021 at 02:39:54PM +0200, Matthias van de Meent wrote:\n> On Tue, 6 Apr 2021 at 14:29, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> Attached is the updated version of the patch. Barring any objection,\n>> I'm thinking to commit this.\n\nSorry for the late reply. The approach to use LIMIT TO for this\npurpose looks sensible from here, and I agree that it can have its \nuses. So what you have here LGTM.\n--\nMichael", "msg_date": "Tue, 6 Apr 2021 22:02:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: IMPORT FOREIGN SCHEMA ... LIMIT TO (partition)" }, { "msg_contents": "\n\nOn 2021/04/06 22:02, Michael Paquier wrote:\n> On Tue, Apr 06, 2021 at 02:39:54PM +0200, Matthias van de Meent wrote:\n>> On Tue, 6 Apr 2021 at 14:29, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> Attached is the updated version of the patch. Barring any objection,\n>>> I'm thinking to commit this.\n> \n> Sorry for the late reply. 
The approach to use LIMIT TO for this\n> purpose looks sensible from here, and I agree that it can have its\n> uses. So what you have here LGTM.\n\nPushed. Thanks all!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 7 Apr 2021 02:35:42 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: IMPORT FOREIGN SCHEMA ... LIMIT TO (partition)" } ]
[ { "msg_contents": "Hello,\n\nWe found an issue in pg_upgrade on a cluster with a third-party\nbackground worker. The upgrade goes fine, but the new cluster is then in\nan inconsistent state. The background worker comes from the PoWA\nextension but the issue does not appear to related to this particular\ncode.\n\nHere is a shell script to reproduce the issue (error at the end):\n\n OLDBINDIR=/usr/lib/postgresql/11/bin\n NEWBINDIR=/usr/lib/postgresql/13/bin\n \n OLDDATADIR=$(mktemp -d)\n NEWDATADIR=$(mktemp -d)\n \n $OLDBINDIR/initdb -D $OLDDATADIR\n echo \"unix_socket_directories = '/tmp'\" >> $OLDDATADIR/postgresql.auto.conf\n echo \"shared_preload_libraries = 'pg_stat_statements, powa'\" >> $OLDDATADIR/postgresql.auto.conf\n $OLDBINDIR/pg_ctl -D $OLDDATADIR -l $OLDDATADIR/pgsql.log start\n $OLDBINDIR/createdb -h /tmp powa\n $OLDBINDIR/psql -h /tmp -d powa -c \"CREATE EXTENSION powa CASCADE\"\n $OLDBINDIR/pg_ctl -D $OLDDATADIR -m fast stop\n \n $NEWBINDIR/initdb -D $NEWDATADIR\n cp $OLDDATADIR/postgresql.auto.conf $NEWDATADIR/postgresql.auto.conf\n \n $NEWBINDIR/pg_upgrade --old-datadir $OLDDATADIR --new-datadir $NEWDATADIR --old-bindir $OLDBINDIR\n \n $NEWBINDIR/pg_ctl -D $NEWDATADIR -l $NEWDATADIR/pgsql.log start\n $NEWBINDIR/psql -h /tmp -d powa -c \"SELECT 1 FROM powa_snapshot_metas\"\n # ERROR: MultiXactId 1 has not been created yet -- apparent wraparound\n\n(This needs PoWA to be installed; packages are available on pgdg\nrepositories as postgresql-<pgversion>-powa on Debian or\npowa_<pgversion> on RedHat or directly from source code at\nhttps://github.com/powa-team/powa-archivist).\n\nAs far as I currently understand, this is due to the data to be migrated\nbeing somewhat inconsistent (from the perspective of pg_upgrade) when\nthe old cluster and its background workers get started in pg_upgrade\nduring the \"checks\" step. 
(The old cluster remains sane, still.)\n\nAs a solution, it seems that, for similar reasons that we restrict\nsocket access to prevent accidental connections (from commit\nf763b77193), we should also prevent background workers to start at this\nstep.\n\nPlease find attached a patch implementing this.\n\nThanks for considering,\nDenis", "msg_date": "Thu, 21 Jan 2021 16:23:58 +0100", "msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>", "msg_from_op": true, "msg_subject": "[PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Hi,\n\nOn 2021-01-21 16:23:58 +0100, Denis Laxalde wrote:\n> We found an issue in pg_upgrade on a cluster with a third-party\n> background worker. The upgrade goes fine, but the new cluster is then in\n> an inconsistent state. The background worker comes from the PoWA\n> extension but the issue does not appear to related to this particular\n> code.\n\nWell, it does imply that that backgrounder did something, as the pure\nexistence of a bgworker shouldn't affect\n\nanything. Presumably the issue is that the bgworker actually does\ntransactional writes, which causes problems because the xids /\nmultixacts from early during pg_upgrade won't actually be valid after we\ndo pg_resetxlog etc.\n\n\n> As a solution, it seems that, for similar reasons that we restrict\n> socket access to prevent accidental connections (from commit\n> f763b77193), we should also prevent background workers to start at this\n> step.\n\nI think that'd have quite the potential for negative impact - imagine\nextensions that refuse to be loaded outside of shared_preload_libraries\n(e.g. because they need to allocate shared memory) but that are required\nduring the course of pg_upgrade (e.g. because it's a tableam, a PL or\nsuch). Those libraries will then tried to be loaded during the upgrade\n(due to the _PG_init() hook being called when functions from the\nextension are needed, e.g. 
the tableam or PL handler).\n\nNor is it clear to me that the only way this would be problematic is\nwith shared_preload_libraries. A library in local_preload_libraries, or\na demand loaded library can trigger bgworkers (or database writes in\nsome other form) as well.\n\n\nI wonder if we could\n\na) set default_transaction_read_only to true, and explicitly change it\n in the sessions that need that.\nb) when in binary upgrade mode / -b, error out on all wal writes in\n sessions that don't explicitly set a session-level GUC to allow\n writes.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 23 Jan 2021 16:36:11 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Hi,\n\nAndres Freund a écrit :\n> On 2021-01-21 16:23:58 +0100, Denis Laxalde wrote:\n> > We found an issue in pg_upgrade on a cluster with a third-party\n> > background worker. The upgrade goes fine, but the new cluster is then in\n> > an inconsistent state. The background worker comes from the PoWA\n> > extension but the issue does not appear to related to this particular\n> > code.\n> \n> Well, it does imply that that backgrounder did something, as the pure\n> existence of a bgworker shouldn't affect\n> \n> anything. Presumably the issue is that the bgworker actually does\n> transactional writes, which causes problems because the xids /\n> multixacts from early during pg_upgrade won't actually be valid after we\n> do pg_resetxlog etc.\n> \n> \n> > As a solution, it seems that, for similar reasons that we restrict\n> > socket access to prevent accidental connections (from commit\n> > f763b77193), we should also prevent background workers to start at this\n> > step.\n> \n> I think that'd have quite the potential for negative impact - imagine\n> extensions that refuse to be loaded outside of shared_preload_libraries\n> (e.g. 
because they need to allocate shared memory) but that are required\n> during the course of pg_upgrade (e.g. because it's a tableam, a PL or\n> such). Those libraries will then tried to be loaded during the upgrade\n> (due to the _PG_init() hook being called when functions from the\n> extension are needed, e.g. the tableam or PL handler).\n> \n> Nor is it clear to me that the only way this would be problematic is\n> with shared_preload_libraries. A library in local_preload_libraries, or\n> a demand loaded library can trigger bgworkers (or database writes in\n> some other form) as well.\n\nThank you for those insights!\n\n> I wonder if we could\n> \n> a) set default_transaction_read_only to true, and explicitly change it\n> in the sessions that need that.\n> b) when in binary upgrade mode / -b, error out on all wal writes in\n> sessions that don't explicitly set a session-level GUC to allow\n> writes.\n\nSolution \"a\" appears to be enough to solve the problem described in my\ninitial email. See attached patch implementing this.\n\nCheers,\nDenis", "msg_date": "Wed, 27 Jan 2021 11:25:11 +0100", "msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Hi, \n\nOn Wed, 27 Jan 2021 11:25:11 +0100\nDenis Laxalde <denis.laxalde@dalibo.com> wrote:\n\n> Andres Freund a écrit :\n> > On 2021-01-21 16:23:58 +0100, Denis Laxalde wrote: \n> > > We found an issue in pg_upgrade on a cluster with a third-party\n> > > background worker. The upgrade goes fine, but the new cluster is then in\n> > > an inconsistent state. The background worker comes from the PoWA\n> > > extension but the issue does not appear to related to this particular\n> > > code. \n> > \n> > Well, it does imply that that backgrounder did something, as the pure\n> > existence of a bgworker shouldn't affect anything. 
Presumably the issue is\n> > that the bgworker actually does transactional writes, which causes problems\n> > because the xids / multixacts from early during pg_upgrade won't actually\n> > be valid after we do pg_resetxlog etc.\n\nIndeed, it does some writes. As soon as the powa bgworker starts, it takes\n\"snapshots\" of pg_stat_statements state and record them in a local table. To\navoid concurrent run, it takes a lock on some of its local rows using SELECT FOR\nUPDATE, hence the mxid consumption.\n\nThe inconsistency occurs at least at two place:\n\n* the datminmxid and relminmxid fields pg_dump(all)'ed and restored in the new\n cluster.\n* the multixid fields in the controlfile read during the check phase and\n restored later using pg_resetxlog.\n\n> > > As a solution, it seems that, for similar reasons that we restrict\n> > > socket access to prevent accidental connections (from commit\n> > > f763b77193), we should also prevent background workers to start at this\n> > > step. \n> > \n> > I think that'd have quite the potential for negative impact - [...]\n> \n> Thank you for those insights!\n\n+1\n\n> > I wonder if we could\n> > \n> > a) set default_transaction_read_only to true, and explicitly change it\n> > in the sessions that need that.\n\nAccording to Denis' tests discussed off-list, it works fine in regard with the\npowa bgworker, albeit some complaints in logs. However, I wonder how fragile it\ncould be as bgworker could easily manipulate either the GUC or \"BEGIN READ\nWRITE\". I realize this is really uncommon practices, but bgworker code from\nthird parties might be surprising.\n\n> > b) when in binary upgrade mode / -b, error out on all wal writes in\n> > sessions that don't explicitly set a session-level GUC to allow\n> > writes.\n\nIt feels safer because more specific to the subject. But I wonder if the\nbenefice worth adding some (limited?) 
complexity and a small/quick conditional\nblock in a very hot code path for a very limited use case.\n\nWhat about c) where the bgworker are not loaded by default during binary upgrade\nmode / -b unless they explicitly set a bgw_flags (BGWORKER_BINARY_UPGRADE ?)\nwhen they are required during pg_upgrade?\n\nRegards,\n\n\n", "msg_date": "Wed, 27 Jan 2021 14:41:32 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Oh, I forgot another point before sending my previous email.\n\nMaybe it might worth adding some final safety checks in pg_upgrade itself?\nEg. checking controldata and mxid files coherency between old and new\ncluster would have catch the inconsistency here.\n\n\n", "msg_date": "Wed, 27 Jan 2021 15:06:46 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Wed, Jan 27, 2021 at 02:41:32PM +0100, Jehan-Guillaume de Rorthais wrote:\n> \n> On Wed, 27 Jan 2021 11:25:11 +0100\n> Denis Laxalde <denis.laxalde@dalibo.com> wrote:\n> \n> > Andres Freund a �crit :\n> \n> > > I wonder if we could\n> > > \n> > > a) set default_transaction_read_only to true, and explicitly change it\n> > > in the sessions that need that.\n> \n> According to Denis' tests discussed off-list, it works fine in regard with the\n> powa bgworker, albeit some complaints in logs. However, I wonder how fragile it\n> could be as bgworker could easily manipulate either the GUC or \"BEGIN READ\n> WRITE\". 
I realize this is really uncommon practices, but bgworker code from\n> third parties might be surprising.\n\nGiven that having any writes happening at the wrong moment on the old cluster\ncan end up corrupting the new cluster, and that the corruption might not be\nimmediately visible we should try to put as many safeguards as possible.\n\nso +1 for the default_transaction_read_only as done in Denis' patch at minimum,\nbut not only.\n\nAFAICT it should be easy to prevent all background worker from being launched\nby adding a check on IsBinaryUpgrade at the beginning of\nbgworker_should_start_now(). It won't prevent modules from being loaded, so\nthis approach should be problematic.\n\n> > > b) when in binary upgrade mode / -b, error out on all wal writes in\n> > > sessions that don't explicitly set a session-level GUC to allow\n> > > writes.\n> \n> It feels safer because more specific to the subject. But I wonder if the\n> benefice worth adding some (limited?) complexity and a small/quick conditional\n> block in a very hot code path for a very limited use case.\n\nI don't think that it would add that much complexity or overhead as there's\nalready all the infrastructure to prevent WAL writes in certain condition. It\nshould be enough to add an additional test in XLogInsertAllowed() with some new\nvariable that is set when starting in binary upgrade mode, and a new function\nto disable it that will be emitted by pg_dump / pg_dumpall in binary upgrade\nmode.\n\n> What about c) where the bgworker are not loaded by default during binary upgrade\n> mode / -b unless they explicitly set a bgw_flags (BGWORKER_BINARY_UPGRADE ?)\n> when they are required during pg_upgrade?\n\nAs mentioned above +1 for not launching the bgworkers. 
Does anyone can think\nof a reason why some bgworker would really be needed during pg_upgrade, either\non the source or the target cluster?\n\n\n", "msg_date": "Fri, 12 Mar 2021 16:23:25 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Julien Rouhaud a écrit :\n> On Wed, Jan 27, 2021 at 02:41:32PM +0100, Jehan-Guillaume de Rorthais wrote:\n>>\n>> On Wed, 27 Jan 2021 11:25:11 +0100\n>> Denis Laxalde <denis.laxalde@dalibo.com> wrote:\n>>\n>>> Andres Freund a écrit :\n>>\n>>>> I wonder if we could\n>>>>\n>>>> a) set default_transaction_read_only to true, and explicitly change it\n>>>> in the sessions that need that.\n>>\n>> According to Denis' tests discussed off-list, it works fine in regard with the\n>> powa bgworker, albeit some complaints in logs. However, I wonder how fragile it\n>> could be as bgworker could easily manipulate either the GUC or \"BEGIN READ\n>> WRITE\". I realize this is really uncommon practices, but bgworker code from\n>> third parties might be surprising.\n> \n> Given that having any writes happening at the wrong moment on the old cluster\n> can end up corrupting the new cluster, and that the corruption might not be\n> immediately visible we should try to put as many safeguards as possible.\n> \n> so +1 for the default_transaction_read_only as done in Denis' patch at minimum,\n> but not only.\n> \n> AFAICT it should be easy to prevent all background worker from being launched\n> by adding a check on IsBinaryUpgrade at the beginning of\n> bgworker_should_start_now(). 
It won't prevent modules from being loaded, so\n> this approach should be problematic.\n\nPlease find attached another patch implementing this suggestion (as a \ncomplement to the previous patch setting default_transaction_read_only).\n\n>>>> b) when in binary upgrade mode / -b, error out on all wal writes in\n>>>> sessions that don't explicitly set a session-level GUC to allow\n>>>> writes.\n>>\n>> It feels safer because more specific to the subject. But I wonder if the\n>> benefice worth adding some (limited?) complexity and a small/quick conditional\n>> block in a very hot code path for a very limited use case.\n> \n> I don't think that it would add that much complexity or overhead as there's\n> already all the infrastructure to prevent WAL writes in certain condition. It\n> should be enough to add an additional test in XLogInsertAllowed() with some new\n> variable that is set when starting in binary upgrade mode, and a new function\n> to disable it that will be emitted by pg_dump / pg_dumpall in binary upgrade\n> mode.\n\nThis part is less clear to me so I'm not sure I'd be able to work on it.", "msg_date": "Tue, 24 Aug 2021 16:40:02 +0200", "msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "> On 24 Aug 2021, at 16:40, Denis Laxalde <denis.laxalde@dalibo.com> wrote:\n\n> Please find attached another patch implementing this suggestion (as a complement to the previous patch setting default_transaction_read_only).\n\nPlease add this to the upcoming commitfest to make sure it's not missed:\n\n\thttps://commitfest.postgresql.org/34/\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 24 Aug 2021 21:41:58 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Wed, Jan 27, 2021 at 
03:06:46PM +0100, Jehan-Guillaume de Rorthais wrote:\n> Maybe it might worth adding some final safety checks in pg_upgrade itself?\n> Eg. checking controldata and mxid files coherency between old and new\n> cluster would have catch the inconsistency here.\n\nYeah, I agree that there are things in this area that could be\nbetter.\n--\nMichael", "msg_date": "Wed, 25 Aug 2021 13:00:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Tue, Aug 24, 2021 at 04:40:02PM +0200, Denis Laxalde wrote:\n> Julien Rouhaud a écrit :\n>> I don't think that it would add that much complexity or overhead as there's\n>> already all the infrastructure to prevent WAL writes in certain condition. It\n>> should be enough to add an additional test in XLogInsertAllowed() with some new\n>> variable that is set when starting in binary upgrade mode, and a new function\n>> to disable it that will be emitted by pg_dump / pg_dumpall in binary upgrade\n>> mode.\n> \n> This part is less clear to me so I'm not sure I'd be able to work on it.\n\ndefault_transaction_read_only brings in a certain level of safety, but\nit is limited when it comes to operations involving maintenance like a\nREINDEX or a VACUUM code path. Making use of a different way to\ncontrol if WAL should be allowed for binary upgrades with a new mean\nlooks like a more promising and more robust approach, even if that\nmeans that any bgworkers started by the clusters on which the upgrade\nis done would need to deal with any errors generated by this new\nfacility.\n\nSaying that, I don't see a scenario where we'd need a bgworker to be\naround during an upgrade. 
But perhaps some cloud providers have this\nneed in their own golden garden?\n\n> @@ -5862,6 +5862,9 @@ do_start_bgworker(RegisteredBgWorker *rw)\n> static bool\n> bgworker_should_start_now(BgWorkerStartTime start_time)\n> {\n> +\tif (IsBinaryUpgrade)\n> +\t\treturn false;\n> +\n\nUsing -c max_worker_processes=0 would just have the same effect, no?\nSo we could just patch pg_upgrade's server.c to get the same level of\nprotection?\n--\nMichael", "msg_date": "Wed, 25 Aug 2021 14:27:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Michael Paquier a écrit :\n>> @@ -5862,6 +5862,9 @@ do_start_bgworker(RegisteredBgWorker *rw)\n>> static bool\n>> bgworker_should_start_now(BgWorkerStartTime start_time)\n>> {\n>> +\tif (IsBinaryUpgrade)\n>> +\t\treturn false;\n>> +\n> Using -c max_worker_processes=0 would just have the same effect, no?\n> So we could just patch pg_upgrade's server.c to get the same level of\n> protection?\n\nYes, same effect indeed. 
This would log \"too many background workers\"\n> messages in pg_upgrade logs, though.\n> See attached patch implementing this suggestion.\n\nI disagree. It can appear to have the same effect but it's not\nguaranteed. Any module in shared_preload_libraries could stick a\n\"max_worker_processes +=X\" if it thinks it should account for its own\nressources. That may not be something encouraged, but it's definitely\npossible (and I think Andres recently mentioned that some extensions\ndo things like that, although maybe for other GUCs) and could result\nin a corruption of a pg_upgrade'd cluster, so I still think that\nchanging bgworker_should_start_now() is a better option.\n\n\n", "msg_date": "Thu, 26 Aug 2021 15:24:33 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Thu, Aug 26, 2021 at 03:24:33PM +0800, Julien Rouhaud wrote:\n> On Thu, Aug 26, 2021 at 3:15 PM Denis Laxalde <denis.laxalde@dalibo.com> wrote:\n> >\n> > Michael Paquier a écrit :\n> > >> @@ -5862,6 +5862,9 @@ do_start_bgworker(RegisteredBgWorker *rw)\n> > >> static bool\n> > >> bgworker_should_start_now(BgWorkerStartTime start_time)\n> > >> {\n> > >> + if (IsBinaryUpgrade)\n> > >> + return false;\n> > >> +\n> > > Using -c max_worker_processes=0 would just have the same effect, no?\n> > > So we could just patch pg_upgrade's server.c to get the same level of\n> > > protection?\n> >\n> > Yes, same effect indeed. This would log \"too many background workers\"\n> > messages in pg_upgrade logs, though.\n> > See attached patch implementing this suggestion.\n> \n> I disagree. It can appear to have the same effect but it's not\n> guaranteed. Any module in shared_preload_libraries could stick a\n> \"max_worker_processes +=X\" if it thinks it should account for its own\n> ressources. 
That may not be something encouraged, but it's definitely\n> possible (and I think Andres recently mentioned that some extensions\n> do things like that, although maybe for other GUCs) and could result\n> in a corruption of a pg_upgrade'd cluster, so I still think that\n> changing bgworker_should_start_now() is a better option.\n\nI am not sure. We have a lot of pg_upgrade code that turns off things\nlike autovacuum and that has worked fine:\n\n snprintf(cmd, sizeof(cmd),\n \"\\\"%s/pg_ctl\\\" -w -l \\\"%s\\\" -D \\\"%s\\\" -o \\\"-p %d%s%s %s%s\\\" start\",\n cluster->bindir, SERVER_LOG_FILE, cluster->pgconfig, cluster->port,\n (cluster->controldata.cat_ver >=\n BINARY_UPGRADE_SERVER_FLAG_CAT_VER) ? \" -b\" :\n \" -c autovacuum=off -c autovacuum_freeze_max_age=2000000000\",\n (cluster == &new_cluster) ?\n \" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c vacuum_defer_cleanup_age=0\" : \"\",\n cluster->pgopts ? cluster->pgopts : \"\", socket_string);\n\nBasically, pg_upgrade has avoided any backend changes that could be\ncontrolled by GUCs and I am not sure we want to start adding such\nchanges for just this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 09:09:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "> On 26 Aug 2021, at 15:09, Bruce Momjian <bruce@momjian.us> wrote:\n> On Thu, Aug 26, 2021 at 03:24:33PM +0800, Julien Rouhaud wrote:\n\n>> .. I still think that\n>> changing bgworker_should_start_now() is a better option.\n> \n> I am not sure. 
We have a lot of pg_upgrade code that turns off things\n> like autovacuum and that has worked fine:\n\nTrue, but there are also conditionals on IsBinaryUpgrade for starting the\nautovacuum launcher in the postmaster, so there is some precedent.\n\n> Basically, pg_upgrade has avoided any backend changes that could be\n> controlled by GUCs and I am not sure we want to start adding such\n> changes for just this.\n\nIn principle I think it’s sound to try to avoid backend changes where possible\nwithout sacrificing robustness.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 15:38:23 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Thu, Aug 26, 2021 at 03:38:23PM +0200, Daniel Gustafsson wrote:\n> > On 26 Aug 2021, at 15:09, Bruce Momjian <bruce@momjian.us> wrote:\n> > On Thu, Aug 26, 2021 at 03:24:33PM +0800, Julien Rouhaud wrote:\n> \n> >> .. I still think that\n> >> changing bgworker_should_start_now() is a better option.\n> > \n> > I am not sure. 
We have a lot of pg_upgrade code that turns off things\n> > like autovacuum and that has worked fine:\n> \n> True, but there are also conditionals on IsBinaryUpgrade for starting the\n> autovacuum launcher in the postmaster, so there is some precedent.\n\nOh, I was not aware of that.\n\n> > Basically, pg_upgrade has avoided any backend changes that could be\n> > controlled by GUCs and I am not sure we want to start adding such\n> > changes for just this.\n> \n> In principle I think it’s sound to try to avoid backend changes where possible\n> without sacrificing robustness.\n\nAgreed.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 09:42:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Le jeu. 26 août 2021 à 21:38, Daniel Gustafsson <daniel@yesql.se> a écrit :\n\n> > On 26 Aug 2021, at 15:09, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> > Basically, pg_upgrade has avoided any backend changes that could be\n> > controlled by GUCs and I am not sure we want to start adding such\n> > changes for just this.\n>\n> In principle I think it’s sound to try to avoid backend changes where\n> possible\n> without sacrificing robustness.\n>\n\nI agree, but it seems quite more likely that an extension relying on a\nbgworker changes this guc, compared to an extension forcing autovacuum to\nbe on for instance.
", "msg_date": "Thu, 26 Aug 2021 21:43:34 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "> On 26 Aug 2021, at 15:43, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> Le jeu. 26 août 2021 à 21:38, Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> a écrit :\n> > On 26 Aug 2021, at 15:09, Bruce Momjian <bruce@momjian.us <mailto:bruce@momjian.us>> wrote:\n> \n> > Basically, pg_upgrade has avoided any backend changes that could be\n> > controlled by GUCs and I am not sure we want to start adding such\n> > changes for just this.\n> \n> In principle I think it’s sound to try to avoid backend changes where possible\n> without sacrificing robustness.\n> \n> I agree, but it seems quite more likely that an extension relying on a bgworker changes this guc, compared to an extension forcing autovacuum to be on for instance. 
\n\nAgreed, in this particular case I think there is merit to the idea of enforcing\nit in the backend.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 15:59:49 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Bruce Momjian a écrit :\n> On Thu, Aug 26, 2021 at 03:38:23PM +0200, Daniel Gustafsson wrote:\n>>> On 26 Aug 2021, at 15:09, Bruce Momjian<bruce@momjian.us> wrote:\n>>> On Thu, Aug 26, 2021 at 03:24:33PM +0800, Julien Rouhaud wrote:\n>>>> .. I still think that\n>>>> changing bgworker_should_start_now() is a better option.\n>>> I am not sure. We have a lot of pg_upgrade code that turns off things\n>>> like autovacuum and that has worked fine:\n>> True, but there are also conditionals on IsBinaryUpgrade for starting the\n>> autovacuum launcher in the postmaster, so there is some precedent.\n> Oh, I was not aware of that.\n> \n\nIf I understand correctly, autovacuum is turned off by pg_upgrade code \nonly if the old cluster does not support the -b flag (prior to 9.1 \napparently). Otherwise, this is indeed handled by IsBinaryUpgrade.\n\n\n", "msg_date": "Thu, 26 Aug 2021 16:07:40 +0200", "msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Thu, Aug 26, 2021 at 03:59:49PM +0200, Daniel Gustafsson wrote:\n> > On 26 Aug 2021, at 15:43, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > \n> > Le jeu. 
26 août 2021 à 21:38, Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> a écrit :\n> > > On 26 Aug 2021, at 15:09, Bruce Momjian <bruce@momjian.us <mailto:bruce@momjian.us>> wrote:\n> > \n> > > Basically, pg_upgrade has avoided any backend changes that could be\n> > > controlled by GUCs and I am not sure we want to start adding such\n> > > changes for just this.\n> > \n> > In principle I think it’s sound to try to avoid backend changes where possible\n> > without sacrificing robustness.\n> > \n> > I agree, but it seems quite more likely that an extension relying on a bgworker changes this guc, compared to an extension forcing autovacuum to be on for instance. \n> \n> Agreed, in this particular case I think there is merit to the idea of enforcing\n> it in the backend.\n\nOK, works for me\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 10:34:48 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Thu, Aug 26, 2021 at 10:34:48AM -0400, Bruce Momjian wrote:\n> On Thu, Aug 26, 2021 at 03:59:49PM +0200, Daniel Gustafsson wrote:\n>> Agreed, in this particular case I think there is merit to the idea of enforcing\n>> it in the backend.\n> \n> OK, works for me\n\nIndeed, there is some history here with autovacuum. I have not been\ncareful enough to check that. Still, putting a check on\nIsBinaryUpgrade in bgworker_should_start_now() would mean that we\nstill keep track of the set of bgworkers registered in shared memory.\n\nWouldn't it be better to block things at the source, as of\nRegisterBackgroundWorker()? And that would keep track of the control\nwe have on bgworkers in a single place. 
I also think that we'd better \ndocument something about that either in bgworker.sgml or pg_upgrade's\npage.\n--\nMichael", "msg_date": "Fri, 27 Aug 2021 08:30:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Fri, Aug 27, 2021 at 7:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Indeed, there is some history here with autovacuum. I have not been\n> careful enough to check that. Still, putting a check on\n> IsBinaryUpgrade in bgworker_should_start_now() would mean that we\n> still keep track of the set of bgworkers registered in shared memory.\n\nThat shouldn't lead to any problem right?\n\n> Wouldn't it be better to block things at the source, as of\n> RegisterBackgroundWorker()? And that would keep track of the control\n> we have on bgworkers in a single place. I also think that we'd better\n> document something about that either in bgworker.sgml or pg_upgrade's\n> page.\n\nI'm fine with that approach too.\n\n\n", "msg_date": "Fri, 27 Aug 2021 09:34:24 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Fri, Aug 27, 2021 at 09:34:24AM +0800, Julien Rouhaud wrote:\n> That shouldn't lead to any problem right?\n\nWell, bgworker_should_start_now() does not exist for that, and\nRegisterBackgroundWorker() is the one doing the classification job, so\nit would be more consistent to keep everything under control in the\nsame code path.\n--\nMichael", "msg_date": "Fri, 27 Aug 2021 11:02:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Fri, Aug 27, 2021 at 10:02 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Aug 27, 2021 at 09:34:24AM 
+0800, Julien Rouhaud wrote:\n> > That shouldn't lead to any problem right?\n>\n> Well, bgworker_should_start_now() does not exist for that, and\n> RegisterBackgroundWorker() is the one doing the classification job, so\n> it would be more consistent to keep everything under control in the\n> same code path.\n\nI'm not sure it's that uncontroversial. The way I see\nRegisterBackgroundWorker() is that it's responsible for doing some\nsanity checks to see if the module didn't make any error and if\nressources are available. Surely checking for IsBinaryUpgrade should\nnot be the responsibility of third-party code, so the question is\nwhether binary upgrade is seen as a resource and as such a reason to\nforbid bgworker registration, in opposition to forbid the launch\nitself.\n\nRight now AFAICT there's no official API to check if a call to\nRegisterBackgroundWorker() succeeded or not, but an extension could\ncheck by itself using BackgroundWorkerList in bgworker_internals.h,\nand error out or something if it didn't succeed, as a way to inform\nusers that they didn't configure the instance properly (like maybe\nincreasing max_worker_processes). Surely using a *_internals.h header\nis a clear sign that you expose yourself to problems, but adding an\nofficial way to check for bgworker registration doesn't seem\nunreasonable to me. 
Is that worth the risk to have pg_upgrade\nerroring out in this kind of scenario, or make the addition of a new\nAPI to check for registration status more difficult?\n\n\n", "msg_date": "Fri, 27 Aug 2021 11:25:19 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Fri, Aug 27, 2021 at 11:25:19AM +0800, Julien Rouhaud wrote:\n> Right now AFAICT there's no official API to check if a call to\n> RegisterBackgroundWorker() succeeded or not, but an extension could\n> check by itself using BackgroundWorkerList in bgworker_internals.h,\n> and error out or something if it didn't succeed, as a way to inform\n> users that they didn't configure the instance properly (like maybe\n> increasing max_worker_processes). Surely using a *_internals.h header\n> is a clear sign that you expose yourself to problems, but adding an\n> official way to check for bgworker registration doesn't seem\n> unreasonable to me. Is that worth the risk to have pg_upgrade\n> erroring out in this kind of scenario, or make the addition of a new\n> API to check for registration status more difficult?\n\nPerhaps. That feels like a topic different than what's discussed\nhere, though, because we don't really need to check if a bgworker has\nbeen launched or not. We just need to make sure that it never runs in\nthe context of a binary upgrade, like autovacuum.\n--\nMichael", "msg_date": "Fri, 27 Aug 2021 13:41:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Fri, Aug 27, 2021 at 12:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Perhaps. That feels like a topic different than what's discussed\n> here, though, because we don't really need to check if a bgworker has\n> been launched or not. 
We just need to make sure that it never runs in\n> the context of a binary upgrade, like autovacuum.\n\nI'm a bit confused. Did you mean checking if a bgworker has been\n*registered* or not?\n\nBut my point was that preventing a bgworker registration as a way to\navoid it from being launched may lead to some problem if an extensions\ndecides that a failure in the registration is problematic enough to\nprevent the startup altogether for instance. I'm ok if we decide that\nit's *not* an acceptable behavior, but it should be clear that it's\nthe case, and probably documented.\n\n\n", "msg_date": "Fri, 27 Aug 2021 13:49:16 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Hi,\n\nOn 2021-08-27 09:34:24 +0800, Julien Rouhaud wrote:\n> On Fri, Aug 27, 2021 at 7:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > Indeed, there is some history here with autovacuum. I have not been\n> > careful enough to check that. Still, putting a check on\n> > IsBinaryUpgrade in bgworker_should_start_now() would mean that we\n> > still keep track of the set of bgworkers registered in shared memory.\n> \n> That shouldn't lead to any problem right?\n> \n> > Wouldn't it be better to block things at the source, as of\n> > RegisterBackgroundWorker()? And that would keep track of the control\n> > we have on bgworkers in a single place. 
I also think that we'd better\n> > document something about that either in bgworker.sgml or pg_upgrade's\n> > page.\n> \n> I'm fine with that approach too.\n\nIsn't that just going to end up with extension code erroring out and/or\nblocking waiting for a bgworker to start?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Aug 2021 12:28:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Fri, Aug 27, 2021 at 12:28:42PM -0700, Andres Freund wrote:\n> Isn't that just going to end up with extension code erroring out and/or\n> blocking waiting for a bgworker to start?\n\nWell, that's the point to block things during an upgrade. Do you have\na list of requirements you'd like to see satisfied here? POWA would\nbe fine with blocking the execution of bgworkers AFAIK (Julien feel\nfree to correct me here if necessary). It could be possible that\npreventing extension code to execute this way could prevent hazards as\nwell. The idea from upthread to prevent any writes and/or WAL\nactivity is not really different as extensions may still generate an\nerror because of pg_upgrade's safety measures we'd put in, no?\n--\nMichael", "msg_date": "Sat, 28 Aug 2021 10:40:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Sat, Aug 28, 2021 at 3:28 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > On Fri, Aug 27, 2021 at 7:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > > Wouldn't it be better to block things at the source, as of\n> > > RegisterBackgroundWorker()? And that would keep track of the control\n> > > we have on bgworkers in a single place. 
I also think that we'd better\n> > > document something about that either in bgworker.sgml or pg_upgrade's\n> > > page.\n>\n> Isn't that just going to end up with extension code erroring out and/or\n> blocking waiting for a bgworker to start?\n\nBut there's no API to wait for the start of a non-dynamic bgworker or\ncheck for it right? So I don't see how the extension code could wait\nor error out. As far as I know the only thing you can do is\nRegisterBackgroundWorker() in your _PG_init() code and hope that the\nserver is correctly configured. The only thing that third-party code\ncould I think is try to check if the bgworker could be successfully\nregistered or not as I mentioned in [1]. Maybe extra paranoid code\nmay add such check in all executor hook but the overhead would be so\nterrible that no one would use such an extension anyway.\n\n[1] https://www.postgresql.org/message-id/CAOBaU_ZtR88x3Si6XwprqGo8UZNLncAQrD_-wc67sC=acO3g=w@mail.gmail.com\n\n\n", "msg_date": "Sat, 28 Aug 2021 09:41:10 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Sat, Aug 28, 2021 at 9:40 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Aug 27, 2021 at 12:28:42PM -0700, Andres Freund wrote:\n> > Isn't that just going to end up with extension code erroring out and/or\n> > blocking waiting for a bgworker to start?\n>\n> Well, that's the point to block things during an upgrade. Do you have\n> a list of requirements you'd like to see satisfied here? POWA would\n> be fine with blocking the execution of bgworkers AFAIK (Julien feel\n> free to correct me here if necessary).\n\nYes, no problem at all, whether the bgworker isn't registered or never\nlaunched. 
The bgworker isn't even mandatory anymore since a few\nyears, as we introduced an external daemon to collect metrics on a\ndistant database.\n\n\n", "msg_date": "Sat, 28 Aug 2021 09:43:42 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Sat, Aug 28, 2021 at 10:40:42AM +0900, Michael Paquier wrote:\n> On Fri, Aug 27, 2021 at 12:28:42PM -0700, Andres Freund wrote:\n> > Isn't that just going to end up with extension code erroring out and/or\n> > blocking waiting for a bgworker to start?\n> \n> Well, that's the point to block things during an upgrade. Do you have\n> a list of requirements you'd like to see satisfied here? POWA would\n> be fine with blocking the execution of bgworkers AFAIK (Julien feel\n> free to correct me here if necessary). It could be possible that\n> preventing extension code to execute this way could prevent hazards as\n> well. The idea from upthread to prevent any writes and/or WAL\n> activity is not really different as extensions may still generate an\n> error because of pg_upgrade's safety measures we'd put in, no?\n\nThis thread is now almost one year old, and AFAICT there's still no consensus\non how to fix this problem. 
It would be good to have something done in pg15,\nideally backpatched, as this is a corruption hazard that triggered at least\nonce already.\n\nAndres, do you still have an objection with either preventing bgworker\nregistration/launch or WAL-write during the impacted pg_upgrade steps, or a\nbetter alternative to fix the problem?\n\n\n", "msg_date": "Wed, 12 Jan 2022 17:54:31 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Hi,\n\nOn 2022-01-12 17:54:31 +0800, Julien Rouhaud wrote:\n> On Sat, Aug 28, 2021 at 10:40:42AM +0900, Michael Paquier wrote:\n> > On Fri, Aug 27, 2021 at 12:28:42PM -0700, Andres Freund wrote:\n> > > Isn't that just going to end up with extension code erroring out and/or\n> > > blocking waiting for a bgworker to start?\n> >\n> > Well, that's the point to block things during an upgrade. Do you have\n> > a list of requirements you'd like to see satisfied here? POWA would\n> > be fine with blocking the execution of bgworkers AFAIK (Julien feel\n> > free to correct me here if necessary). It could be possible that\n> > preventing extension code to execute this way could prevent hazards as\n> > well. The idea from upthread to prevent any writes and/or WAL\n> > activity is not really different as extensions may still generate an\n> > error because of pg_upgrade's safety measures we'd put in, no?\n\nThe point is that we need the check for WAL writes / xid assignments / etc\n*either* way. There are ways extensions could trigger problems like e.g. xid\nassigned, besides bgworker doing stuff. Or postgres components doing so\nunintentionally.\n\nErroring out in situation where we *know* that there were concurrent changes\nunacceptable during pg_upgrade is always the right call. 
Such errors can be\ndebugged and then addressed (removing the extension from s_p_l, fixing the\nextension, etc).\n\nIn contrast to that, preventing upgrades from succeeding because an extension\nhas a dependency on bgworkers working, just because the bgworker *could* be\ndoing something bad is different. The bgworker might never write, have a check\nfor binary upgrade mode, etc. It may not be realistic to fix an extension to\nwork without the bgworkers.\n\nImagine something like a table access method that has IO workers or such.\n\n\n> Andres, do you still have an objection with either preventing bgworker\n> registration/launch or WAL-write during the impacted pg_upgrade steps, or a\n> better alternative to fix the problem?\n\nI still object to the approach of preventing bgworker registration. It doesn't\nprovide much protection and might cause hard to address problems for some\nextensions.\n\nI don't think I ever objected to preventing WAL-writes, I even proposed that\nupthread? Unless you suggest doing it specifically in bgworkers, rather than\npreventing similar problems outside bgworkers as well.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 13 Jan 2022 18:44:12 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Hi,\n\nOn Thu, Jan 13, 2022 at 06:44:12PM -0800, Andres Freund wrote:\n> \n> The point is that we need the check for WAL writes / xid assignments / etc\n> *either* way. There are ways extensions could trigger problems like e.g. xid\n> assigned, besides bgworker doing stuff. Or postgres components doing so\n> unintentionally.\n> \n> Erroring out in situation where we *know* that there were concurrent changes\n> unacceptable during pg_upgrade is always the right call. 
Such errors can be\n> debugged and then addressed (removing the extension from s_p_l, fixing the\n> extension, etc).\n> \n> In contrast to that, preventing upgrades from succeeding because an extension\n> has a dependency on bgworkers working, just because the bgworker *could* be\n> doing something bad is different. The bgworker might never write, have a check\n> for binary upgrade mode, etc. It may not be realistic to fix and extension to\n> work without the bgworkers.\n> \n> Imagine something like an table access method that has IO workers or such.\n\nIIUC if a table access method has IO workers that starts doing writes quickly\n(or any similar extension that *is* required to be present during upgrade but\nthat should be partially disabled), the only way to do a pg_upgrade would be to\nmake sure that the extension explicitly checks for the binary-upgrade mode and\ndon't do any writes, or provide a GUC for the same, since it should still\npreloaded? I'm fine with that, but that should probably be documented.\n> \n> \n> > Andres, do you still have an objection with either preventing bgworker\n> > registration/launch or WAL-write during the impacted pg_upgrade steps, or a\n> > better alternative to fix the problem?\n> \n> I still object to the approach of preventing bgworker registration. It doesn't\n> provide much protection and might cause hard to address problems for some\n> extensions.\n> \n> I don't think I ever objected to preventing WAL-writes, I even proposed that\n> upthread? Unless you suggest doing it specifically in bgworkers, rather than\n> preventing similar problems outside bgworkers as well.\n\nSorry I missed that when re-reading the thread. 
And no I'm not suggesting\npreventing WAL writes in bgworkers only.\n\nSince there's a clear consensus on how to fix the problem, I'm switching the\npatch as Waiting on Author.\n\n\n", "msg_date": "Fri, 14 Jan 2022 10:59:42 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Hi,\n\nJulien Rouhaud a écrit :\n>> On Wed, 27 Jan 2021 11:25:11 +0100\n>> Denis Laxalde <denis.laxalde@dalibo.com> wrote:\n>>\n>>> Andres Freund a écrit :\n>>>> b) when in binary upgrade mode / -b, error out on all wal writes in\n>>>> sessions that don't explicitly set a session-level GUC to allow\n>>>> writes.\n\n> It should be enough to add an additional test in XLogInsertAllowed() with some new\n> variable that is set when starting in binary upgrade mode, and a new function\n> to disable it that will be emitted by pg_dump / pg_dumpall in binary upgrade\n> mode.\n\nI tried that simple change first:\n\ndiff --git a/src/backend/access/transam/xlog.c \nb/src/backend/access/transam/xlog.c\nindex dfe2a0bcce..8feab0cb96 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -8498,6 +8498,9 @@ HotStandbyActiveInReplay(void)\n bool\n XLogInsertAllowed(void)\n {\n+ if (IsBinaryUpgrade)\n+ return false;\n+\n /*\n * If value is \"unconditionally true\" or \"unconditionally \nfalse\", just\n * return it. 
This provides the normal fast path once recovery \nis known\n\n\nBut then, pg_upgrade's tests (make -C src/bin/pg_upgrade/ check) fail at \nvacuumdb but not during pg_dumpall:\n\n$ cat src/bin/pg_upgrade/pg_upgrade_utility.log\n-----------------------------------------------------------------\n pg_upgrade run on Fri Jan 28 10:37:36 2022\n-----------------------------------------------------------------\n\ncommand: \n\"/home/denis/src/pgsql/build/tmp_install/home/denis/.local/pgsql/bin/pg_dumpall\" \n--host /home/denis/src/pgsql/build/src/bin/pg_upgrade --port 51696 \n--username denis --globals-only --quote-all-identifiers --binary-upgrade \n -f pg_upgrade_dump_globals.sql >> \"pg_upgrade_utility.log\" 2>&1\n\n\ncommand: \n\"/home/denis/src/pgsql/build/tmp_install/home/denis/.local/pgsql/bin/vacuumdb\" \n--host /home/denis/src/pgsql/build/src/bin/pg_upgrade --port 51696 \n--username denis --all --analyze >> \"pg_upgrade_utility.log\" 2>&1\nvacuumdb: vacuuming database \"postgres\"\nvacuumdb: error: processing of database \"postgres\" failed: PANIC: \ncannot make new WAL entries during recovery\n\n\nIn contrast with pg_dump/pg_dumpall, vacuumdb has no --binary-upgrade \nflag, so it does not seem possible to use a special GUC setting to allow \nWAL writes in that vacuumdb session at the moment.\nShould we add --binary-upgrade to vacuumdb as well? 
Or am I going in the \nwrong direction?\n\n\nThanks,\nDenis\n\n\n", "msg_date": "Fri, 28 Jan 2022 11:02:46 +0100", "msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 28, 2022 at 11:02:46AM +0100, Denis Laxalde wrote:\n> \n> I tried that simple change first:\n> \n> diff --git a/src/backend/access/transam/xlog.c\n> b/src/backend/access/transam/xlog.c\n> index dfe2a0bcce..8feab0cb96 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -8498,6 +8498,9 @@ HotStandbyActiveInReplay(void)\n> bool\n> XLogInsertAllowed(void)\n> {\n> + if (IsBinaryUpgrade)\n> + return false;\n> +\n> \n> \n> But then, pg_upgrade's tests (make -C src/bin/pg_upgrade/ check) fail at\n> vaccumdb but not during pg_dumpall:\n> \n> [...]\n> \n> command: \"/home/denis/src/pgsql/build/tmp_install/home/denis/.local/pgsql/bin/vacuumdb\"\n> --host /home/denis/src/pgsql/build/src/bin/pg_upgrade --port 51696\n> --username denis --all --analyze >> \"pg_upgrade_utility.log\" 2>&1\n> vacuumdb: vacuuming database \"postgres\"\n> vacuumdb: error: processing of database \"postgres\" failed: PANIC: cannot\n> make new WAL entries during recovery\n> \n> In contrast with pg_dump/pg_dumpall, vacuumdb has no --binary-upgrade flag,\n> so it does not seem possible to use a special GUC setting to allow WAL\n> writes in that vacuumdb session at the moment.\n> Should we add --binary-upgrade to vacuumdb as well? 
Or am I going in the\n> wrong direction?\n\nI think having a new option for vacuumdb is the right move.\n\nIt seems unlikely that any cron or similar on the host will try to run some\nconcurrent vacuumdb, but we still have to enforce that only the one executed by\npg_upgrade can succeed.\n\nI guess it could be an undocumented option, similar to postgres' -b, which\nwould only be allowed iff --all and --freeze is also passed to be extra safe.\n\nWhile at it:\n\n> vacuumdb: error: processing of database \"postgres\" failed: PANIC: cannot\n> make new WAL entries during recovery\n\nShould we tweak that message when IsBinaryUpgrade is true?\n\n\n", "msg_date": "Fri, 28 Jan 2022 21:56:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Julien Rouhaud a écrit :\n> I think having a new option for vacuumdb is the right move.\n> \n> It seems unlikely that any cron or similar on the host will try to run some\n> concurrent vacuumdb, but we still have to enforce that only the one executed by\n> pg_upgrade can succeed.\n> \n> I guess it could be an undocumented option, similar to postgres' -b, which\n> would only be allowed iff --all and --freeze is also passed to be extra safe.\n\nThe help text in pg_dump's man page states:\n\n --binary-upgrade\n This option is for use by in-place upgrade\n utilities. Its use for other purposes is not\n recommended or supported. The behavior of\n the option may change in future releases\n without notice.\n\nIs it enough? 
Or do we actually want to hide it for vacuumdb?\n\n> While at it:\n> \n>> vacuumdb: error: processing of database \"postgres\" failed: PANIC: cannot\n>> make new WAL entries during recovery\n> \n> Should we tweak that message when IsBinaryUpgrade is true?\n\nYes, indeed, I had in mind to simply make the message more generic as: \n\"cannot insert new WAL entries\".\n", "msg_date": "Fri, 28 Jan 2022 15:06:57 +0100", "msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Fri, Jan 28, 2022 at 03:06:57PM +0100, Denis Laxalde wrote:\n> Julien Rouhaud a écrit :\n> > I think having a new option for vacuumdb is the right move.\n> > \n> > It seems unlikely that any cron or similar on the host will try to run some\n> > concurrent vacuumdb, but we still have to enforce that only the one executed by\n> > pg_upgrade can succeed.\n> > \n> > I guess it could be an undocumented option, similar to postgres' -b, which\n> > would only be allowed iff --all and --freeze is also passed to be extra safe.\n> \n> The help text in pg_dump's man page states:\n> \n> --binary-upgrade\n> This option is for use by in-place upgrade\n> utilities. Its use for other purposes is not\n> recommended or supported. The behavior of\n> the option may change in future releases\n> without notice.\n> \n> Is it enough? 
Or do we actually want to hide it for vacuumdb?\n\nI think it should be hidden, with a comment about it like postmaster.c getopt\ncall:\n\n\t\t\tcase 'b':\n\t\t\t\t/* Undocumented flag used for binary upgrades */\n\n> > > vacuumdb: error: processing of database \"postgres\" failed: PANIC: cannot\n> > > make new WAL entries during recovery\n> > \n> > Should we tweak that message when IsBinaryUpgrade is true?\n> \n> Yes, indeed, I had in mind to simply make the message more generic as:\n> \"cannot insert new WAL entries\".\n\n-1, it's good to have a clear reason why the error happened, especially since\nit's supposed to be \"should not happen\" situation.\n\n\n", "msg_date": "Fri, 28 Jan 2022 22:18:28 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade*" }, { "msg_contents": "Hi,\n\nOn 2022-01-28 21:56:36 +0800, Julien Rouhaud wrote:\n> I think having a new option for vacuumdb is the right move.\n\nCan't we pass the option via the connection string, e.g. something\nPGOPTIONS='-c binary_upgrade_mode=true'? That seems to scale better than to\nadd it gradually to multiple tools.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 28 Jan 2022 10:20:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 28, 2022 at 10:20:07AM -0800, Andres Freund wrote:\n> \n> On 2022-01-28 21:56:36 +0800, Julien Rouhaud wrote:\n> > I think having a new option for vacuumdb is the right move.\n> \n> Can't we pass the option via the connection string, e.g. something\n> PGOPTIONS='-c binary_upgrade_mode=true'? 
That seems to scale better than to\n> add it gradually to multiple tools.\n\nAh right that's a better idea.\n\n\n", "msg_date": "Sat, 29 Jan 2022 10:50:50 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" }, { "msg_contents": "On Fri, Jan 28, 2022 at 9:51 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Fri, Jan 28, 2022 at 10:20:07AM -0800, Andres Freund wrote:\n> > On 2022-01-28 21:56:36 +0800, Julien Rouhaud wrote:\n> > > I think having a new option for vacuumdb is the right move.\n> >\n> > Can't we pass the option via the connection string, e.g. something\n> > PGOPTIONS='-c binary_upgrade_mode=true'? That seems to scale better than to\n> > add it gradually to multiple tools.\n>\n> Ah right that's a better idea.\n\nOK, so I think the conclusion here is that no patch which does\n$SUBJECT is going to get committed, but somebody might write (or\nfinish?) a patch that does something else which could possibly get\ncommitted once it's written. If and when that happens, I think that\npatch should be submitted on a new thread with a subject line that\nmatches what the patch actually does. In the meantime, I'm going to\nmark the CF entry for *this* thread as Returned with Feedback.\n\nFor what it's worth, I'm not 100% sure that $SUBJECT is a bad idea --\nnor am I 100% sure that it's a good idea. On the other hand, I\ndefinitely think the alternative proposal of blocking WAL writes at\ntimes when they shouldn't be happening is a good idea, and most likely\nextensions should also be coded in a way where they're smart enough\nnot to try except at times when it is safe. 
Therefore, it makes sense\nto me to proceed along those kinds of lines for now, and if that's not\nenough and we need to revisit this idea at some point in the future,\nwe can.\n\nNote that I'm taking no view for the present on whether any change\nthat might end up being agreed here should go into v15 or not. It's in\nthat fuzzy grey area where you could call it a feature, or a bug fix,\nor technically-a-feature-but-let's-slip-it-in-after-freeze-anyway. We\ncan decide that when a completed patch shows up, though it's fair to\npoint out that the longer that takes, the less likely it is to be v15\nmaterial. I am, however, taking the position that holding this\nCommitFest entry open is not in the best interest of the project. The\npatch we'd theoretically be committing doesn't exist yet and doesn't\ndo what the subject line suggests.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Mar 2022 17:12:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Disable bgworkers during servers start in pg_upgrade" } ]
[ { "msg_contents": "Hey, all,\n\nI'm working with native logical replication, and I don't fully understand\nwhy logical replication subscribers need to be superusers, nor do I fully\nunderstand the implication of some of the comments made on this page:\n\nhttps://www.postgresql.org/docs/current/logical-replication-security.html\n\n> A user able to modify the schema of subscriber-side tables can execute\n> arbitrary code as a superuser.\n\nDoes \"execute arbitrary code\" here really mean executing arbitrary code on the\nmachine that is running Postgres, or simply running arbitrary SQL in the\nreplicating database? Would it only be able to modify data in the database\ncontaining the subscription, or could it modify other databases in the same\ncluster? Is there any \"blast-radius\" to the capabilities gained by such a user?\n\nAccording to the commit message that added this text, the callout of this\npoint was added as result of CVE-2020-14349, but the details there don't\nreally help me understand what the concern is here, nor do I have a deep\nunderstanding of various features that might combine to create a vulnerability.\n\nI'm not sure what arbitrary code could be executed, but my rough guess, based\non some of the other text on that page, is that custom triggers, written in\nan arbitrary language (e.g., Python), would run arbitrary code and that is\nthe concern. Is that correct? And, if so, assuming that Python triggers were\nthe only way to execute arbitrary code, then simply building Postgres without\nPython support would prevent users that can modify the schema from executing\ncode as superuser. What is the full set of features that could lead to running\narbitrary code in this scenario? 
Is it just all the different languages that\ncan be used to write triggers?\n\nEssentially, I'm wondering what a loose proof-of-concept privilege escalation\nvulnerability would look like if a non-superuser can modify the schema of\nsubscriber-side tables.\n\nOn a related note, what happens if a superuser creates a subscription, and then\nis demoted to a regular user? My understanding is that the worker that applies\nthe logical changes to the database connects as the user that created the\nsubscription, so would that prevent potential vulnerabilities in any way?\n\n\nThanks,\nPaul\n\n\n", "msg_date": "Thu, 21 Jan 2021 08:20:55 -0800", "msg_from": "Paul Martinez <paulmtz@google.com>", "msg_from_op": true, "msg_subject": "Why does creating logical replication subscriptions require\n superuser?" }, { "msg_contents": "Hi!\n\n> 21 янв. 2021 г., в 21:20, Paul Martinez <paulmtz@google.com> написал(а):\n> \n> Hey, all,\n> \n> I'm working with native logical replication, and I don't fully understand\n> why logical replication subscribers need to be superusers, nor do I fully\n> understand the implication of some of the comments made on this page:\n> \n> https://www.postgresql.org/docs/current/logical-replication-security.html\n> \n>> A user able to modify the schema of subscriber-side tables can execute\n>> arbitrary code as a superuser.\n> \n> Does \"execute arbitrary code\" here really mean executing arbitrary code on the\n> machine that is running Postgres, or simply running arbitrary SQL in the\n> replicating database? Would it only be able to modify data in the database\n> containing the subscription, or could it modify other databases in the same\n> cluster? Is there any \"blast-radius\" to the capabilities gained by such a user?\nI suspect it means what it states. Replication is running under superuser and e.g. one can add system catalog to subscription.\nOr exploit this fact other way. 
Having superuser you can just COPY FROM PROGRAM anything.\n\n> According to the commit message that added this text, the callout of this\n> point was added as result of CVE-2020-14349, but the details there don't\n> really help me understand what the concern is here, nor do I have a deep\n> understanding of various features that might combine to create a vulnerability.\n> \n> I'm not sure what arbitrary code could be executed, but my rough guess, based\n> on some of the other text on that page, is that custom triggers, written in\n> an arbitrary language (e.g., Python), would run arbitrary code and that is\n> the concern. Is that correct? And, if so, assuming that Python triggers were\n> the only way to execute arbitrary code, then simply building Postgres without\n> Python support would prevent users that can modify the schema from executing\n> code as superuser. What is the full set of features that could lead to running\n> arbitrary code in this scenario? Is it just all the different languages that\n> can be used to write triggers?\nWe cannot build PostgreSQL without SQL.\n\n> Essentially, I'm wondering what a loose proof-of-concept privilege escalation\n> vulnerability would look like if a non-superuser can modify the schema of\n> subscriber-side tables.\n\n> On a related note, what happens if a superuser creates a subscription, and then\n> is demoted to a regular user? My understanding is that the worker that applies\n> the logical changes to the database connects as the user that created the\n> subscription, so would that prevent potential vulnerabilities in any way?\n\nSubscription operations must run with privileges of user that created it. 
All other ways are error-prone and leave subscriptions only in superuser's arsenal.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 22 Jan 2021 12:32:03 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Why does creating logical replication subscriptions require\n superuser?" }, { "msg_contents": "Andrey Borodin wrote on 22.01.2021 at 08:32:\n\n> Replication is running under superuser and e.g. one can add system catalog to subscription.\n> Or exploit this fact other way. Having superuser you can just COPY FROM PROGRAM anything.\n\nIt was my understanding that the replication process itself runs with the user specified\nwhen creating the subscription - which is not necessarily a superuser. Only a user that\nis part of the \"replication\" role.\n\nThe replication user also needs to be granted SELECT privileges on all tables of the publication,\nso it's quite easy to control what the replication user has access to.\nPlus the publication also limits what the replication can see.\n\nI second the idea that not requiring a superuser to create a subscription would make things\na lot easier. We worked around that by creating a security definer function that runs\nthe CREATE SUBSCRIPTION command.\n\nThomas\n\n\n", "msg_date": "Fri, 22 Jan 2021 09:16:28 +0100", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Why does creating logical replication subscriptions require\n superuser?" }, { "msg_contents": "[offlist]\n\n> On 22 Jan 2021, at 13:16, Thomas Kellerer <shammat@gmx.net> wrote:\n> \n> Andrey Borodin wrote on 22.01.2021 at 08:32:\n> \n>> Replication is running under superuser and e.g. one can add system catalog to subscription.\n>> Or exploit this fact other way. 
Having superuser you can just COPY FROM PROGRAM anything.\n> \n> It was my understanding that the replication process itself runs with the user specified\n> when creating the subscription - which is not necessarily a superuser. Only a user that\n> is part of the \"replication\" role.\n> \n> The replication user also needs to be granted SELECT privileges on all tables of the publication,\n> so it's quite easy to control what the replication user has access to.\n> Plus the publication also limits what the replication can see.\n> \n> I second the idea that not requiring a superuser to create a subscription would make things\n> a lot easier. We worked around that by creating a security definer function that runs\n> the CREATE SUBSCRIPTION command.\n\nHi! Yes, at Yandex.Cloud we want it too https://www.postgresql.org/message-id/flat/1269681541151271%40myt5-68ad52a76c91.qloud-c.yandex.net\nAnd we run PG with patches that create special role for replication that allows you to create subscriptions for tables you own.\nWe successfully created exploits against Aiven and AWS RDS services gaining superuser with their ways of subscription creation (and reported vulnerabilities, of course). Probably, you have this (not so scary) vulnerability too.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 22 Jan 2021 13:25:31 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Why does creating logical replication subscriptions require\n superuser?" }, { "msg_contents": "> We successfully created exploits against Aiven and AWS RDS services gaining\n> superuser with their ways of subscription creation (and reported\n> vulnerabilities, of course). Probably, you have this (not so scary)\n> vulnerability too.\n\nCan you share the rough idea of how these exploits work? 
What parts of the\ncurrent architecture allowed that to happen?\n\nI read the thread regarding creating a special role for creating subscriptions,\nand I think it helped me understand various aspects of the current architecture\nbetter.\n\nPlease correct me if any of these points are incorrect:\n\nSome of the original justifications for requiring superuser to create\nsubscriptions include:\n- Replication inherently adds significant network traffic and extra background\n process, and we wouldn't want unprivileged users to be able to add such\n drastic load to then database.\n- Subjectively, subscription is a \"major\" operation, so it makes sense to not\n allow every user to perform it.\n- Running the apply process as a superuser drastically simplifies the number\n of possible errors that might arise due to not having sufficient permissions\n to write to target tables, and may have simplified the initial\n implementation.\n- Subscriptions store plaintext passwords, which are sensitive, and we\n wouldn't want unprivileged users to see these. Only allowing superusers\n to create subscriptions and view the subconninfo column is a way to resolve\n this.\n\nAre there any other major reasons that we require superuser? Notably one may\nwonder why we didn't check for the REPLICATION attribute, but even replication\nusers could run into table ownership and access issues.\n\nUnless I'm mistaken, the apply worker process runs as the user that created\nthe subscription. Thus, it is the requirement that only superusers can create\nsubscriptions that leads to two warnings in the Security documentation:\n\nhttps://www.postgresql.org/docs/current/logical-replication-security.html\n\n> The subscription apply process will run in the local database with the\n> privileges of a superuser.\n\nThis is a direct consequence of requiring superuser to create subscriptions\nand running the apply process as the creator. 
If the subscription weren't\ncreated by a superuser, then the apply process wouldn't run as superuser\neither, correct?\n\n> A user able to modify the schema of subscriber-side tables can execute\n> arbitrary code as a superuser. Limit ownership and TRIGGER privilege on such\n> tables to roles that superusers trust.\n\nI believe a theoretical exploit here would involve the unprivileged user\ncreating a trigger with a function defined using SECURITY INVOKER and attaching\nit to a table that is a subscription target. Since the apply process is running\nas superuser, this means that the trigger is invoked as superuser, leading to\nthe privilege escalation. More accurately, a user able to modify the schema\nof subscriber-side tables could execute arbitrary code as the _creator of the\nsubscription_, correct?\n\nSo it seems privilege escalation to _superuser_ can be prevented by simply\nmaking the owner of the subscription not a superuser. But this can already\nhappen now by simply altering the user after the subscription has been created.\nI haven't tested this edge case, but I hope that Postgres doesn't crash if it\nsubsequently runs into a permission issue; I assume that it will simply stop\nreplication, which seems appropriate.\n\n\nOne other point:\n\nOne message in the thread mentioned somehow tricking Postgres into replicating\nsystem catalog tables:\n\nhttps://www.postgresql.org/message-id/109201553163096%40myt5-68ad52a76c91.qloud-c.yandex.net\n\nI'm unsure whether this is allowed by default, but it doesn't seem like too\nmuch trouble to run a modified publisher node that does allow it. Then\nsomething like 'UPDATE pg_authid SET rolsuper = TRUE' could result in privilege\nescalation on the subscriber side. 
But, again, if the apply process isn't\nrunning as superuser, then presumably applying the change in the subscriber\nwould fail, stopping replication, and neutralizing the attack.\n\n\nThanks,\nPaul\n\n\n", "msg_date": "Fri, 22 Jan 2021 14:08:02 -0800", "msg_from": "Paul Martinez <paulmtz@google.com>", "msg_from_op": true, "msg_subject": "Re: Why does creating logical replication subscriptions require\n superuser?" }, { "msg_contents": "On Sat, Jan 23, 2021 at 3:46 AM Paul Martinez <paulmtz@google.com> wrote:\n>\n>\n> Unless I'm mistaken, the apply worker process runs as the user that created\n> the subscription. Thus, it is the requirement that only superusers can create\n> subscriptions that leads to two warnings in the Security documentation:\n>\n> https://www.postgresql.org/docs/current/logical-replication-security.html\n>\n> > The subscription apply process will run in the local database with the\n> > privileges of a superuser.\n>\n> This is a direct consequence of requiring superuser to create subscriptions\n> and running the apply process as the creator. If the subscription weren't\n> created by a superuser, then the apply process wouldn't run as superuser\n> either, correct?\n>\n\nYes, this is correct. We use the owner of the subscription in the\napply process to connect to the local database.\n\n> > A user able to modify the schema of subscriber-side tables can execute\n> > arbitrary code as a superuser. Limit ownership and TRIGGER privilege on such\n> > tables to roles that superusers trust.\n>\n> I believe a theoretical exploit here would involve the unprivileged user\n> creating a trigger with a function defined using SECURITY INVOKER and attaching\n> it to a table that is a subscription target. Since the apply process is running\n> as superuser, this means that the trigger is invoked as superuser, leading to\n> the privilege escalation. 
More accurately, a user able to modify the schema\n> of subscriber-side tables could execute arbitrary code as the _creator of the\n> subscription_, correct?\n>\n> So it seems privilege escalation to _superuser_ can be prevented by simply\n> making the owner of the subscription not a superuser. But this can already\n> happen now by simply altering the user after the subscription has been created.\n>\n\nWe can't change the owner of the subscription to a non-superuser. See\nthe below example:\npostgres=# Alter Subscription mysub Owner to test;\nERROR: permission denied to change owner of subscription \"mysub\"\nHINT: The owner of a subscription must be a superuser.\n\nIn the above example, the 'test' is a non-superuser.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 29 Jan 2021 11:18:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why does creating logical replication subscriptions require\n superuser?" }, { "msg_contents": "On Fri, Jan 22, 2021 at 02:08:02PM -0800, Paul Martinez wrote:\n> Some of the original justifications for requiring superuser to create\n> subscriptions include:\n> - Replication inherently adds significant network traffic and extra background\n> process, and we wouldn't want unprivileged users to be able to add such\n> drastic load to then database.\n\nI think you're referring to these messages:\n\n https://postgr.es/m/CA+TgmoahEoM2zZO71yv4883HFarXcBcOs3if6fEdRcRs8Fs=zA@mail.gmail.com\n https://postgr.es/m/CA+TgmobqXGe_7dcX1_Dv8+kaf3NEoe5Sy4NXGB9AyfM5YDjsGQ@mail.gmail.com\n\nA permanent background process bypasses authentication, so mechanisms like\npg_hba.conf and expiration of the auth SSL certificate don't stop access.\nLike that thread discussed, this justifies some privilege enforcement.\n(Autovacuum also bypasses authentication, but those are less predictable.)\n\nSince we already let users drive the database to out-of-memory, I wouldn't\nworry about load. 
In other words, the quantity of network traffic and number\nof background processes don't matter, just the act of allowing them at all.\n\n> - Running the apply process as a superuser drastically simplifies the number\n> of possible errors that might arise due to not having sufficient permissions\n> to write to target tables, and may have simplified the initial\n> implementation.\n\nI think you're referring to this:\n\nhttps://postgr.es/m/CA+TgmoYe1x21zLyCqOVL-KDd9DJSVZ4v8NNmfDscjRwV9Qfgkg@mail.gmail.com wrote:\n> It seems more likely that there is a person whose job it is to set up\n> replication but who doesn't normally interact with the table data\n> itself. In that kind of case, you just want to give the person\n> permission to create subscriptions, without needing to also give them\n> lots of privileges on individual tables (and maybe having whatever\n> they are trying to do fail if you miss a table someplace).\n\nExposure to permission checks is a chief benefit of doing anything as a\nnon-superuser, so I disagree with this. (I've bcc'd the author of that\nmessage, in case he wants to comment.) We could add a pg_write_any_table\nspecial role. DBAs should be more cautious granting pg_write_any_table than\ngranting subscription privilege. (For this use case, grant both.)\n\n> - Subscriptions store plaintext passwords, which are sensitive, and we\n> wouldn't want unprivileged users to see these. 
Only allowing superusers\n> to create subscriptions and view the subconninfo column is a way to resolve\n> this.\n\npg_user_mapping.umoptions has the same security considerations; one should be\nable to protect it and subconninfo roughly the same way.\n\n> Are there any other major reasons that we require superuser?\n\nAs another prerequisite for non-superuser-owned subscriptions, the connection\nto the publisher must enforce the equivalent of dblink_security_check().\n\n> Notably one may\n> wonder why we didn't check for the REPLICATION attribute, but even replication\n> users could run into table ownership and access issues.\n\nREPLICATION represents the authority to read all bytes of the data directory.\nCompared to the implications of starting a subscriber, REPLICATION carries a\nlot of power. I would not reuse REPLICATION here.\n\n> One message in the thread mentioned somehow tricking Postgres into replicating\n> system catalog tables:\n> \n> https://www.postgresql.org/message-id/109201553163096%40myt5-68ad52a76c91.qloud-c.yandex.net\n> \n> I'm unsure whether this is allowed by default, but it doesn't seem like too\n> much trouble to run a modified publisher node that does allow it. Then\n> something like 'UPDATE pg_authid SET rolsuper = TRUE' could result in privilege\n> escalation on the subscriber side. But, again, if the apply process isn't\n> running as superuser, then presumably applying the change in the subscriber\n> would fail, stopping replication, and neutralizing the attack.\n\nThis is a special case of the need for ordinary ACL checks in the subscriber.\nTreating system catalogs differently would be insufficient and unnecessary.\n\nThanks,\nnm\n\n\n", "msg_date": "Sun, 31 Jan 2021 14:22:35 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Why does creating logical replication subscriptions require\n superuser?" } ]
[ { "msg_contents": "Hi,\n\nEvery nbtree index build currently does an smgrimmedsync at the end:\n\n/*\n * Read tuples in correct sort order from tuplesort, and load them into\n * btree leaves.\n */\nstatic void\n_bt_load(BTWriteState *wstate, BTSpool *btspool, BTSpool *btspool2)\n...\n\t/*\n\t * When we WAL-logged index pages, we must nonetheless fsync index files.\n\t * Since we're building outside shared buffers, a CHECKPOINT occurring\n\t * during the build has no way to flush the previously written data to\n\t * disk (indeed it won't know the index even exists). A crash later on\n\t * would replay WAL from the checkpoint, therefore it wouldn't replay our\n\t * earlier WAL entries. If we do not fsync those pages here, they might\n\t * still not be on disk when the crash occurs.\n\t */\n\tif (wstate->btws_use_wal)\n\t{\n\t\tRelationOpenSmgr(wstate->index);\n\t\tsmgrimmedsync(wstate->index->rd_smgr, MAIN_FORKNUM);\n\t}\n\nIn cases we create lots of small indexes, e.g. because of an initial\nschema load, partition creation or something like that, that turns out\nto be a major limiting factor (unless one turns fsync off).\n\n\nOne way to address that would be to put newly built indexes into s_b\n(using a strategy, to avoid blowing out the whole cache), instead of\nusing smgrwrite() etc directly. But that's a discussion with a bit more\ncomplex tradeoffs.\n\n\nWhat I wonder is why the issue addressed in the comment I copied above\ncan't more efficiently be addressed using sync requests, like we do for\nother writes? It's possibly bit more complicated than just passing\nskipFsync=false to smgrwrite/smgrextend, but it should be quite doable?\n\n\nA quick hack (probably not quite correct!) to evaluate the benefit shows\nthat the attached script takes 2m17.223s with the smgrimmedsync and\n0m22.870s passing skipFsync=false to write/extend. 
Entirely IO bound in\nthe former case, CPU bound in the latter.\n\nCreating lots of tables with indexes (directly or indirectly through\nrelations having a toast table) is pretty common, particularly after the\nintroduction of partitioning.\n\n\nThinking through the correctness of replacing smgrimmedsync() with sync\nrequests, the potential problems that I can see are:\n\n1) redo point falls between the log_newpage() and the\n write()/register_dirty_segment() in smgrextend/smgrwrite.\n2) redo point falls between write() and register_dirty_segment()\n\nBut both of these are fine in the context of initially filling a newly\ncreated relfilenode, as far as I can tell? Otherwise the current\nsmgrimmedsync() approach wouldn't be safe either, as far as I can tell?\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 21 Jan 2021 12:36:56 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "On 21/01/2021 22:36, Andres Freund wrote:\n> Hi,\n> \n> Every nbtree index build currently does an smgrimmedsync at the end:\n> \n> /*\n> * Read tuples in correct sort order from tuplesort, and load them into\n> * btree leaves.\n> */\n> static void\n> _bt_load(BTWriteState *wstate, BTSpool *btspool, BTSpool *btspool2)\n> ...\n> \t/*\n> \t * When we WAL-logged index pages, we must nonetheless fsync index files.\n> \t * Since we're building outside shared buffers, a CHECKPOINT occurring\n> \t * during the build has no way to flush the previously written data to\n> \t * disk (indeed it won't know the index even exists). A crash later on\n> \t * would replay WAL from the checkpoint, therefore it wouldn't replay our\n> \t * earlier WAL entries. 
If we do not fsync those pages here, they might\n> \t * still not be on disk when the crash occurs.\n> \t */\n> \tif (wstate->btws_use_wal)\n> \t{\n> \t\tRelationOpenSmgr(wstate->index);\n> \t\tsmgrimmedsync(wstate->index->rd_smgr, MAIN_FORKNUM);\n> \t}\n> \n> In cases we create lots of small indexes, e.g. because of an initial\n> schema load, partition creation or something like that, that turns out\n> to be a major limiting factor (unless one turns fsync off).\n> \n> \n> One way to address that would be to put newly built indexes into s_b\n> (using a strategy, to avoid blowing out the whole cache), instead of\n> using smgrwrite() etc directly. But that's a discussion with a bit more\n> complex tradeoffs.\n> \n> \n> What I wonder is why the issue addressed in the comment I copied above\n> can't more efficiently be addressed using sync requests, like we do for\n> other writes? It's possibly bit more complicated than just passing\n> skipFsync=false to smgrwrite/smgrextend, but it should be quite doable?\n\nMakes sense.\n\n> A quick hack (probably not quite correct!) to evaluate the benefit shows\n> that the attached script takes 2m17.223s with the smgrimmedsync and\n> 0m22.870s passing skipFsync=false to write/extend. Entirely IO bound in\n> the former case, CPU bound in the latter.\n> \n> Creating lots of tables with indexes (directly or indirectly through\n> relations having a toast table) is pretty common, particularly after the\n> introduction of partitioning.\n> \n> \n> Thinking through the correctness of replacing smgrimmedsync() with sync\n> requests, the potential problems that I can see are:\n> \n> 1) redo point falls between the log_newpage() and the\n> write()/register_dirty_segment() in smgrextend/smgrwrite.\n> 2) redo point falls between write() and register_dirty_segment()\n> \n> But both of these are fine in the context of initially filling a newly\n> created relfilenode, as far as I can tell? 
Otherwise the current\n> smgrimmedsync() approach wouldn't be safe either, as far as I can tell?\n\nHmm. If the redo point falls between write() and the \nregister_dirty_segment(), and the checkpointer finishes the whole \ncheckpoint before register_dirty_segment(), you are not safe. That can't \nhappen with write from the buffer manager, because the checkpointer \nwould block waiting for the flush of the buffer to finish.\n\n- Heikki\n\n\n", "msg_date": "Thu, 21 Jan 2021 23:54:04 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "Hi,\n\nOn 2021-01-21 23:54:04 +0200, Heikki Linnakangas wrote:\n> On 21/01/2021 22:36, Andres Freund wrote:\n> > A quick hack (probably not quite correct!) to evaluate the benefit shows\n> > that the attached script takes 2m17.223s with the smgrimmedsync and\n> > 0m22.870s passing skipFsync=false to write/extend. Entirely IO bound in\n> > the former case, CPU bound in the latter.\n> >\n> > Creating lots of tables with indexes (directly or indirectly through\n> > relations having a toast table) is pretty common, particularly after the\n> > introduction of partitioning.\n> >\n> >\n> > Thinking through the correctness of replacing smgrimmedsync() with sync\n> > requests, the potential problems that I can see are:\n> >\n> > 1) redo point falls between the log_newpage() and the\n> > write()/register_dirty_segment() in smgrextend/smgrwrite.\n> > 2) redo point falls between write() and register_dirty_segment()\n> >\n> > But both of these are fine in the context of initially filling a newly\n> > created relfilenode, as far as I can tell? Otherwise the current\n> > smgrimmedsync() approach wouldn't be safe either, as far as I can tell?\n>\n> Hmm. 
If the redo point falls between write() and the\n> register_dirty_segment(), and the checkpointer finishes the whole checkpoint\n> before register_dirty_segment(), you are not safe. That can't happen with\n> write from the buffer manager, because the checkpointer would block waiting\n> for the flush of the buffer to finish.\n\nHm, right.\n\nThe easiest way to address that race would be to just record the redo\npointer in _bt_leafbuild() and continue to do the smgrimmedsync if a\ncheckpoint started since the start of the index build.\n\nAnother approach would be to utilize PGPROC.delayChkpt, but I would\nrather not unnecessarily expand the use of that.\n\nIt's kind of interesting - in my aio branch I moved the\nregister_dirty_segment() to before the actual asynchronous write (due to\navailability of the necessary data), which ought to be safe because of\nthe buffer interlocking. But that doesn't work here, or for other places\ndoing writes without going through s_b. It'd be great if we could come\nup with a general solution, but I don't immediately see anything great.\n\nThe best I can come up with is adding helper functions to wrap some of\nthe complexity for \"unbuffered\" writes of doing an immedsync iff the\nredo pointer changed. 
Something very roughly like\n\ntypedef struct UnbufferedWriteState { XLogRecPtr redo; uint64 numwrites;} UnbufferedWriteState;\nvoid unbuffered_prep(UnbufferedWriteState* state);\nvoid unbuffered_write(UnbufferedWriteState* state, ...);\nvoid unbuffered_extend(UnbufferedWriteState* state, ...);\nvoid unbuffered_finish(UnbufferedWriteState* state);\n\nwhich wouldn't just do the dance to avoid the immedsync() if possible,\nbut also took care of PageSetChecksumInplace() (and PageEncryptInplace()\nif we get that [1]).\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20210112193431.2edcz776qjen7kao%40alap3.anarazel.de\n\n\n", "msg_date": "Thu, 21 Jan 2021 14:51:23 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "So, I've written a patch which avoids doing the immediate fsync for\nindex builds either by using shared buffers or by queueing sync requests\nfor the checkpointer. If a checkpoint starts during the index build and\nthe backend is not using shared buffers for the index build, it will\nneed to do the fsync.\n\nThe reviewer will notice that _bt_load() extends the index relation for\nthe metapage before beginning the actual load of leaf pages but does not\nactually write the metapage until the end of the index build. When using\nshared buffers, it was difficult to create block 0 of the index after\ncreating all of the other blocks, as the block number is assigned inside\nof ReadBuffer_common(), and it doesn't really work with the current\nbufmgr API to extend a relation with a caller-specified block number.\n\nI am not entirely sure of the correctness of doing an smgrextend() (when\nnot using shared buffers) without writing any WAL. 
However, the metapage\ncontents are not written until after WAL logging them later in\n_bt_blwritepage(), so, perhaps it is okay?\n\nI am also not fond of the change to the signature of _bt_uppershutdown()\nthat this implementation forces. Now, I must pass the shared buffer\n(when using shared buffers) that I've reserved (pinned and locked) for\nthe metapage and, if not using shared buffers, the page I've allocated\nfor the metapage, before doing the index build to _bt_uppershutdown()\nafter doing the rest of the index build. I don't know that it seems\nincorrect -- more that it feels a bit messy (and inefficient) to hold\nonto that shared buffer or memory for the duration of the index build,\nduring which I have no intention of doing anything with that buffer or\nmemory. However, the alternative I devised was to change\nReadBuffer_common() or to add a new ReadBufferExtended() mode which\nindicated that the caller would specify the block number and whether or\nnot it was an extend, which also didn't seem right.\n\nFor the extensions of the index done during index build, I use\nReadBufferExtended() directly instead of _bt_getbuf() for a few reasons.\nI thought (am not sure) that I don't need to do\nLockRelationForExtension() during index build. Also, I decided to use\nRBM_ZERO_AND_LOCK mode so that I had an exclusive lock on the buffer\ncontent instead of doing _bt_lockbuf() (which is what _bt_getbuf()\ndoes). And, most of the places I added the call to ReadBufferExtended(),\nthe non-shared buffer code path is already initializing the page, so it\nmade more sense to just share that codepath.\n\nI considered whether or not it made sense to add a new btree utility\nfunction which calls ReadBufferExtended() in this way, however, I wasn't\nsure how much that would buy me. 
The other place it might be able to be\nused is btvacuumpage(), but that case is different enough that I'm not\neven sure what the function would be called -- basically it would just\nbe an alternative to _bt_getbuf() for a couple of somewhat unrelated edge\ncases.\n\nOn Thu, Jan 21, 2021 at 5:51 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-01-21 23:54:04 +0200, Heikki Linnakangas wrote:\n> > On 21/01/2021 22:36, Andres Freund wrote:\n> > > A quick hack (probably not quite correct!) to evaluate the benefit shows\n> > > that the attached script takes 2m17.223s with the smgrimmedsync and\n> > > 0m22.870s passing skipFsync=false to write/extend. Entirely IO bound in\n> > > the former case, CPU bound in the latter.\n> > >\n> > > Creating lots of tables with indexes (directly or indirectly through\n> > > relations having a toast table) is pretty common, particularly after the\n> > > introduction of partitioning.\n> > >\n\nMoving index builds of indexes which would fit in shared buffers back\ninto shared buffers has the benefit of eliminating the need to write\nthem out and fsync them if they will be subsequently used and thus read\nright back into shared buffers. This avoids some of the unnecessary\nfsyncs Andres is talking about here as well as avoiding some of the\nextra IO required to write them and then read them into shared buffers.\n\nI have dummy criteria for whether or not to use shared buffers (if the\nnumber of tuples to be indexed is > 1000). 
I am considering using a\nthreshold of some percentage of the size of shared buffers as the\nactual criteria for determining where to do the index build.\n\n> > >\n> > > Thinking through the correctness of replacing smgrimmedsync() with sync\n> > > requests, the potential problems that I can see are:\n> > >\n> > > 1) redo point falls between the log_newpage() and the\n> > > write()/register_dirty_segment() in smgrextend/smgrwrite.\n> > > 2) redo point falls between write() and register_dirty_segment()\n> > >\n> > > But both of these are fine in the context of initially filling a newly\n> > > created relfilenode, as far as I can tell? Otherwise the current\n> > > smgrimmedsync() approach wouldn't be safe either, as far as I can tell?\n> >\n> > Hmm. If the redo point falls between write() and the\n> > register_dirty_segment(), and the checkpointer finishes the whole checkpoint\n> > before register_dirty_segment(), you are not safe. That can't happen with\n> > write from the buffer manager, because the checkpointer would block waiting\n> > for the flush of the buffer to finish.\n>\n> Hm, right.\n>\n> The easiest way to address that race would be to just record the redo\n> pointer in _bt_leafbuild() and continue to do the smgrimmedsync if a\n> checkpoint started since the start of the index build.\n>\n> Another approach would be to utilize PGPROC.delayChkpt, but I would\n> rather not unnecessarily expand the use of that.\n>\n> It's kind of interesting - in my aio branch I moved the\n> register_dirty_segment() to before the actual asynchronous write (due to\n> availability of the necessary data), which ought to be safe because of\n> the buffer interlocking. But that doesn't work here, or for other places\n> doing writes without going through s_b. 
It'd be great if we could come\n> up with a general solution, but I don't immediately see anything great.\n>\n> The best I can come up with is adding helper functions to wrap some of\n> the complexity for \"unbuffered\" writes of doing an immedsync iff the\n> redo pointer changed. Something very roughly like\n>\n> typedef struct UnbufferedWriteState { XLogRecPtr redo; uint64 numwrites;} UnbufferedWriteState;\n> void unbuffered_prep(UnbufferedWriteState* state);\n> void unbuffered_write(UnbufferedWriteState* state, ...);\n> void unbuffered_extend(UnbufferedWriteState* state, ...);\n> void unbuffered_finish(UnbufferedWriteState* state);\n>\n> which wouldn't just do the dance to avoid the immedsync() if possible,\n> but also took care of PageSetChecksumInplace() (and PageEncryptInplace()\n> if we get that [1]).\n>\n\nRegarding the implementation, I think having an API to do these\n\"unbuffered\" or \"direct\" writes outside of shared buffers is a good\nidea. In this specific case, the proposed API would not change the code\nmuch. I would just wrap the small diffs I added to the beginning and end\nof _bt_load() in the API calls for unbuffered_prep() and\nunbuffered_finish() and then tuck away the second half of\n_bt_blwritepage() in unbuffered_write()/unbuffered_extend(). 
I figured I\nwould do so after ensuring the correctness of the logic in this patch.\nThen I will work on a patch which implements the unbuffered_write() API\nand demonstrates its utility with at least a few of the most compelling\nuse cases in the code.\n\n- Melanie", "msg_date": "Mon, 3 May 2021 17:24:50 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "On Mon, May 3, 2021 at 5:24 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Thu, Jan 21, 2021 at 5:51 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2021-01-21 23:54:04 +0200, Heikki Linnakangas wrote:\n> > > On 21/01/2021 22:36, Andres Freund wrote:\n> > > >\n> > > > Thinking through the correctness of replacing smgrimmedsync() with sync\n> > > > requests, the potential problems that I can see are:\n> > > >\n> > > > 1) redo point falls between the log_newpage() and the\n> > > > write()/register_dirty_segment() in smgrextend/smgrwrite.\n> > > > 2) redo point falls between write() and register_dirty_segment()\n> > > >\n> > > > But both of these are fine in the context of initially filling a newly\n> > > > created relfilenode, as far as I can tell? Otherwise the current\n> > > > smgrimmedsync() approach wouldn't be safe either, as far as I can tell?\n> > >\n> > > Hmm. If the redo point falls between write() and the\n> > > register_dirty_segment(), and the checkpointer finishes the whole checkpoint\n> > > before register_dirty_segment(), you are not safe. 
That can't happen with\n> > > write from the buffer manager, because the checkpointer would block waiting\n> > > for the flush of the buffer to finish.\n> >\n> > Hm, right.\n> >\n> > The easiest way to address that race would be to just record the redo\n> > pointer in _bt_leafbuild() and continue to do the smgrimmedsync if a\n> > checkpoint started since the start of the index build.\n> >\n> > Another approach would be to utilize PGPROC.delayChkpt, but I would\n> > rather not unnecessarily expand the use of that.\n> >\n> > It's kind of interesting - in my aio branch I moved the\n> > register_dirty_segment() to before the actual asynchronous write (due to\n> > availability of the necessary data), which ought to be safe because of\n> > the buffer interlocking. But that doesn't work here, or for other places\n> > doing writes without going through s_b. It'd be great if we could come\n> > up with a general solution, but I don't immediately see anything great.\n> >\n> > The best I can come up with is adding helper functions to wrap some of\n> > the complexity for \"unbuffered\" writes of doing an immedsync iff the\n> > redo pointer changed. Something very roughly like\n> >\n> > typedef struct UnbufferedWriteState { XLogRecPtr redo; uint64 numwrites;} UnbufferedWriteState;\n> > void unbuffered_prep(UnbufferedWriteState* state);\n> > void unbuffered_write(UnbufferedWriteState* state, ...);\n> > void unbuffered_extend(UnbufferedWriteState* state, ...);\n> > void unbuffered_finish(UnbufferedWriteState* state);\n> >\n> > which wouldn't just do the dance to avoid the immedsync() if possible,\n> > but also took care of PageSetChecksumInplace() (and PageEncryptInplace()\n> > if we get that [1]).\n> >\n>\n> Regarding the implementation, I think having an API to do these\n> \"unbuffered\" or \"direct\" writes outside of shared buffers is a good\n> idea. In this specific case, the proposed API would not change the code\n> much. 
I would just wrap the small diffs I added to the beginning and end\n> of _bt_load() in the API calls for unbuffered_prep() and\n> unbuffered_finish() and then tuck away the second half of\n> _bt_blwritepage() in unbuffered_write()/unbuffered_extend(). I figured I\n> would do so after ensuring the correctness of the logic in this patch.\n> Then I will work on a patch which implements the unbuffered_write() API\n> and demonstrates its utility with at least a few of the most compelling\n> most compelling use cases in the code.\n>\n\nI've taken a pass at writing the API for \"direct\" or \"unbuffered\" writes\nand extends. It introduces the suggested functions: unbuffered_prep(),\nunbuffered_finish(), unbuffered_write(), and unbuffered_extend().\n\nThis is a rough cut -- corrections welcome and encouraged!\n\nunbuffered_prep() saves the xlog redo pointer at the time it is called.\nThen, if the redo pointer hasn't changed by the time unbuffered_finish()\nis called, the backend can avoid calling smgrimmedsync(). Note that this\nonly works if intervening calls to smgrwrite() and smgrextend() pass\nskipFsync=False.\n\nunbuffered_write() and unbuffered_extend() might be able to be used even\nif unbuffered_prep() and unbuffered_finish() are not used -- for example\nhash indexes do something I don't entirely understand in which they call\nsmgrextend() directly when allocating buckets but then initialize the\nnew bucket pages using the bufmgr machinery.\n\nI also intend to move accounting of pages written and extended into the\nunbuffered_write() and unbuffered_extend() functions using the functions\nI propose in [1] to populate a new view, pg_stat_buffers. Then this\n\"unbuffered\" IO would be counted in stats.\n\nI picked the name \"direct\" for the directory in /src/backend/storage\nbecause I thought that these functions are analogous to direct IO in\nLinux -- in that they are doing IO without going through Postgres bufmgr\n-- unPGbuffered, basically. 
Other suggestions were \"raw\" and \"relIO\".\nRaw seemed confusing since raw device IO is pretty far from what is\nhappening here. RelIO didn't seem like it belonged next to bufmgr (to\nme). However, direct and unbuffered will both soon become fraught\nterminology with the introduction of AIO and direct IO to Postgres...\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/flat/20200124195226.lth52iydq2n2uilq%40alap3.anarazel.de", "msg_date": "Wed, 29 Sep 2021 14:35:47 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "On Mon, May 3, 2021 at 5:24 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> So, I've written a patch which avoids doing the immediate fsync for\n> index builds either by using shared buffers or by queueing sync requests\n> for the checkpointer. If a checkpoint starts during the index build and\n> the backend is not using shared buffers for the index build, it will\n> need to do the fsync.\n\nI've attached a rebased version of the patch (old patch doesn't apply).\n\nWith the patch applied (compiled at O2), creating twenty empty tables in\na transaction with a text column and an index on another column (like in\nthe attached SQL [make a test_idx schema first]) results in a fairly\nconsistent 15-30% speedup on my laptop (timings still in tens of ms -\navg 50 ms to avg 65 ms so run variation affects the % a lot).\nReducing the number of fsync calls from 40 to 1 was what likely causes\nthis difference.\n\n- Melanie", "msg_date": "Fri, 19 Nov 2021 15:11:57 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "On Fri, Nov 19, 2021 at 3:11 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Mon, May 3, 2021 at 5:24 PM Melanie Plageman\n> <melanieplageman@gmail.com> 
wrote:\n> >\n> > So, I've written a patch which avoids doing the immediate fsync for\n> > index builds either by using shared buffers or by queueing sync requests\n> > for the checkpointer. If a checkpoint starts during the index build and\n> > the backend is not using shared buffers for the index build, it will\n> > need to do the fsync.\n>\n> I've attached a rebased version of the patch (old patch doesn't apply).\n>\n> With the patch applied (compiled at O2), creating twenty empty tables in\n> a transaction with a text column and an index on another column (like in\n> the attached SQL [make a test_idx schema first]) results in a fairly\n> consistent 15-30% speedup on my laptop (timings still in tens of ms -\n> avg 50 ms to avg 65 ms so run variation affects the % a lot).\n> Reducing the number of fsync calls from 40 to 1 was what likely causes\n> this difference.\n\nCorrection for the above: I haven't worked on mac in a while and didn't\nrealize that wal_sync_method=fsync was not enough to ensure that all\nbuffered data would actually be flushed to disk on mac (which was\nrequired for my test).\n\nSetting wal_sync_method to fsync_writethrough with my small test I see\nover a 5-6X improvement in time taken - from 1 second average to 0.2\nseconds average. And running Andres' \"createlots.sql\" test, I see around\na 16x improvement - from around 11 minutes to around 40 seconds. 
I ran\nit on a laptop running macos and other than wal_sync_method, I only\nchanged shared_buffers (to 1GB).\n\n- Melanie\n\n\n", "msg_date": "Tue, 23 Nov 2021 15:51:51 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "Hi,\n\nOn 2021-11-19 15:11:57 -0500, Melanie Plageman wrote:\n> From 2130175c5d794f60c5f15d6eb1b626c81fb7c39b Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Thu, 15 Apr 2021 07:01:01 -0400\n> Subject: [PATCH v2] Index build avoids immed fsync\n> \n> Avoid immediate fsync for just built indexes either by using shared\n> buffers or by leveraging checkpointer's SyncRequest queue. When a\n> checkpoint begins during the index build, if not using shared buffers,\n> the backend will have to do its own fsync.\n> ---\n> src/backend/access/nbtree/nbtree.c | 39 +++---\n> src/backend/access/nbtree/nbtsort.c | 186 +++++++++++++++++++++++-----\n> src/backend/access/transam/xlog.c | 14 +++\n> src/include/access/xlog.h | 1 +\n> 4 files changed, 188 insertions(+), 52 deletions(-)\n> \n> diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c\n> index 40ad0956e0..a2e32f18e6 100644\n> --- a/src/backend/access/nbtree/nbtree.c\n> +++ b/src/backend/access/nbtree/nbtree.c\n> @@ -150,30 +150,29 @@ void\n> btbuildempty(Relation index)\n> {\n> \tPage\t\tmetapage;\n> +\tBuffer metabuf;\n> \n> -\t/* Construct metapage. */\n> -\tmetapage = (Page) palloc(BLCKSZ);\n> -\t_bt_initmetapage(metapage, P_NONE, 0, _bt_allequalimage(index, false));\n> -\n> +\t// TODO: test this.\n\nShouldn't this path have plenty coverage?\n\n\n> \t/*\n> -\t * Write the page and log it. 
It might seem that an immediate sync would\n> -\t * be sufficient to guarantee that the file exists on disk, but recovery\n> -\t * itself might remove it while replaying, for example, an\n> -\t * XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record. Therefore, we need\n> -\t * this even when wal_level=minimal.\n> +\t * Construct metapage.\n> +\t * Because we don't need to lock the relation for extension (since\n> +\t * noone knows about it yet) and we don't need to initialize the\n> +\t * new page, as it is done below by _bt_blnewpage(), _bt_getbuf()\n> +\t * (with P_NEW and BT_WRITE) is overkill.\n\nIsn't the more relevant operation the log_newpage_buffer()?\n\n\n> However, it might be worth\n> +\t * either modifying it or adding a new helper function instead of\n> +\t * calling ReadBufferExtended() directly. We pass mode RBM_ZERO_AND_LOCK\n> +\t * because we want to hold an exclusive lock on the buffer content\n> \t */\n\n\"modifying it\" refers to what?\n\nI don't see a problem using ReadBufferExtended() here. Why would you like to\nreplace it with something else?\n\n\n\n> +\t/*\n> +\t * Based on the number of tuples, either create a buffered or unbuffered\n> +\t * write state. if the number of tuples is small, make a buffered write\n> +\t * if the number of tuples is larger, then we make an unbuffered write state\n> +\t * and must ensure that we check the redo pointer to know whether or not we\n> +\t * need to fsync ourselves\n> +\t */\n> \n> \t/*\n> \t * Finish the build by (1) completing the sort of the spool file, (2)\n> \t * inserting the sorted tuples into btree pages and (3) building the upper\n> \t * levels. 
Finally, it may also be necessary to end use of parallelism.\n> \t */\n> -\t_bt_leafbuild(buildstate.spool, buildstate.spool2);\n> +\tif (reltuples > 1000)\n\nI'm ok with some random magic constant here, but it seems worht putting it in\nsome constant / #define to make it more obvious.\n\n> +\t\t_bt_leafbuild(buildstate.spool, buildstate.spool2, false);\n> +\telse\n> +\t\t_bt_leafbuild(buildstate.spool, buildstate.spool2, true);\n\nWhy duplicate the function call?\n\n\n> /*\n> * allocate workspace for a new, clean btree page, not linked to any siblings.\n> + * If index is not built in shared buffers, buf should be InvalidBuffer\n> */\n> static Page\n> -_bt_blnewpage(uint32 level)\n> +_bt_blnewpage(uint32 level, Buffer buf)\n> {\n> \tPage\t\tpage;\n> \tBTPageOpaque opaque;\n> \n> -\tpage = (Page) palloc(BLCKSZ);\n> +\tif (buf)\n> +\t\tpage = BufferGetPage(buf);\n> +\telse\n> +\t\tpage = (Page) palloc(BLCKSZ);\n> \n> \t/* Zero the page and set up standard page header info */\n> \t_bt_pageinit(page, BLCKSZ);\n\nIck, that seems pretty ugly API-wise and subsequently too likely to lead to\npfree()ing a page that's actually in shared buffers. 
I think it'd make sense\nto separate the allocation from the initialization bits?\n\n\n> @@ -635,8 +657,20 @@ _bt_blnewpage(uint32 level)\n> * emit a completed btree page, and release the working storage.\n> */\n> static void\n> -_bt_blwritepage(BTWriteState *wstate, Page page, BlockNumber blkno)\n> +_bt_blwritepage(BTWriteState *wstate, Page page, BlockNumber blkno, Buffer buf)\n> {\n> +\tif (wstate->use_shared_buffers)\n> +\t{\n> +\t\tAssert(buf);\n> +\t\tSTART_CRIT_SECTION();\n> +\t\tMarkBufferDirty(buf);\n> +\t\tif (wstate->btws_use_wal)\n> +\t\t\tlog_newpage_buffer(buf, true);\n> +\t\tEND_CRIT_SECTION();\n> +\t\t_bt_relbuf(wstate->index, buf);\n> +\t\treturn;\n> +\t}\n> +\n> \t/* XLOG stuff */\n> \tif (wstate->btws_use_wal)\n> \t{\n> @@ -659,7 +693,7 @@ _bt_blwritepage(BTWriteState *wstate, Page page, BlockNumber blkno)\n> \t\tsmgrextend(RelationGetSmgr(wstate->index), MAIN_FORKNUM,\n> \t\t\t\t wstate->btws_pages_written++,\n> \t\t\t\t (char *) wstate->btws_zeropage,\n> -\t\t\t\t true);\n> +\t\t\t\t false);\n> \t}\n\nIs there a good place to document the way we ensure durability for this path?\n\n\n> +\t/*\n> +\t * Extend the index relation upfront to reserve the metapage\n> +\t */\n> +\tif (wstate->use_shared_buffers)\n> +\t{\n> +\t\t/*\n> +\t\t * We should not need to LockRelationForExtension() as no one else knows\n> +\t\t * about this index yet?\n> +\t\t * Extend the index relation by one block for the metapage. _bt_getbuf()\n> +\t\t * is not used here as it does _bt_pageinit() which is one later by\n\n*done\n\n\n> +\t\t * _bt_initmetapage(). We will fill in the metapage and write it out at\n> +\t\t * the end of index build when we have all of the information required\n> +\t\t * for the metapage. 
However, we initially extend the relation for it to\n> +\t\t * occupy block 0 because it is much easier when using shared buffers to\n> +\t\t * extend the relation with a block number that is always increasing by\n> +\t\t * 1.\n\nNot quite following what you're trying to get at here. There generally is no\nway to extend a relation except by increasing block numbers?\n\n\n> @@ -1425,7 +1544,10 @@ _bt_load(BTWriteState *wstate, BTSpool *btspool, BTSpool *btspool2)\n> \t * still not be on disk when the crash occurs.\n> \t */\n> \tif (wstate->btws_use_wal)\n> -\t\tsmgrimmedsync(RelationGetSmgr(wstate->index), MAIN_FORKNUM);\n> +\t{\n> +\t\tif (!wstate->use_shared_buffers && RedoRecPtrChanged(wstate->redo))\n> +\t\t\tsmgrimmedsync(RelationGetSmgr(wstate->index), MAIN_FORKNUM);\n> +\t}\n> }\n> \n> /*\n\nThis needs documentation. The whole comment above isn't accurate anymore afaict?\n\n\n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index 1616448368..63fd212787 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -8704,6 +8704,20 @@ GetRedoRecPtr(void)\n> \treturn RedoRecPtr;\n> }\n> \n> +bool\n> +RedoRecPtrChanged(XLogRecPtr comparator_ptr)\n> +{\n> +\tXLogRecPtr\tptr;\n> +\n> +\tSpinLockAcquire(&XLogCtl->info_lck);\n> +\tptr = XLogCtl->RedoRecPtr;\n> +\tSpinLockRelease(&XLogCtl->info_lck);\n> +\n> +\tif (RedoRecPtr < ptr)\n> +\t\tRedoRecPtr = ptr;\n> +\treturn RedoRecPtr != comparator_ptr;\n> +}\n\nWhat's the deal with the < comparison?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Dec 2021 11:33:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "I have attached a v3 which includes two commits -- one of which\nimplements the directmgr API and uses it and the other which adds\nfunctionality to use either directmgr or bufmgr API during index 
build.\n\nAlso registering for march commitfest.\n\nOn Thu, Dec 9, 2021 at 2:33 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-11-19 15:11:57 -0500, Melanie Plageman wrote:\n> > From 2130175c5d794f60c5f15d6eb1b626c81fb7c39b Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Thu, 15 Apr 2021 07:01:01 -0400\n> > Subject: [PATCH v2] Index build avoids immed fsync\n> >\n> > Avoid immediate fsync for just built indexes either by using shared\n> > buffers or by leveraging checkpointer's SyncRequest queue. When a\n> > checkpoint begins during the index build, if not using shared buffers,\n> > the backend will have to do its own fsync.\n> > ---\n> > src/backend/access/nbtree/nbtree.c | 39 +++---\n> > src/backend/access/nbtree/nbtsort.c | 186 +++++++++++++++++++++++-----\n> > src/backend/access/transam/xlog.c | 14 +++\n> > src/include/access/xlog.h | 1 +\n> > 4 files changed, 188 insertions(+), 52 deletions(-)\n> >\n> > diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c\n> > index 40ad0956e0..a2e32f18e6 100644\n> > --- a/src/backend/access/nbtree/nbtree.c\n> > +++ b/src/backend/access/nbtree/nbtree.c\n> > @@ -150,30 +150,29 @@ void\n> > btbuildempty(Relation index)\n> > {\n> > Page metapage;\n> > + Buffer metabuf;\n> >\n> > - /* Construct metapage. */\n> > - metapage = (Page) palloc(BLCKSZ);\n> > - _bt_initmetapage(metapage, P_NONE, 0, _bt_allequalimage(index, false));\n> > -\n> > + // TODO: test this.\n>\n> Shouldn't this path have plenty coverage?\n\nYep. Sorry.\n\n> > /*\n> > - * Write the page and log it. It might seem that an immediate sync would\n> > - * be sufficient to guarantee that the file exists on disk, but recovery\n> > - * itself might remove it while replaying, for example, an\n> > - * XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record. 
Therefore, we need\n> > - * this even when wal_level=minimal.\n> > + * Construct metapage.\n> > + * Because we don't need to lock the relation for extension (since\n> > + * noone knows about it yet) and we don't need to initialize the\n> > + * new page, as it is done below by _bt_blnewpage(), _bt_getbuf()\n> > + * (with P_NEW and BT_WRITE) is overkill.\n>\n> Isn't the more relevant operation the log_newpage_buffer()?\n\nReturning to this after some time away, many of my comments no longer\nmake sense to me either. I can't actually tell which diff your question\napplies to because this comment was copy-pasted in two different places\nin my code. Also, I've removed this comment and added new ones. So,\ngiven all that, is there still something about log_newpage_buffer() I\nshould be commenting on?\n\n> > However, it might be worth\n> > + * either modifying it or adding a new helper function instead of\n> > + * calling ReadBufferExtended() directly. We pass mode RBM_ZERO_AND_LOCK\n> > + * because we want to hold an exclusive lock on the buffer content\n> > */\n>\n> \"modifying it\" refers to what?\n>\n> I don't see a problem using ReadBufferExtended() here. Why would you like to\n> replace it with something else?\n\nI would just disregard these comments now.\n\n> > + /*\n> > + * Based on the number of tuples, either create a buffered or unbuffered\n> > + * write state. if the number of tuples is small, make a buffered write\n> > + * if the number of tuples is larger, then we make an unbuffered write state\n> > + * and must ensure that we check the redo pointer to know whether or not we\n> > + * need to fsync ourselves\n> > + */\n> >\n> > /*\n> > * Finish the build by (1) completing the sort of the spool file, (2)\n> > * inserting the sorted tuples into btree pages and (3) building the upper\n> > * levels. 
Finally, it may also be necessary to end use of parallelism.\n> > */\n> > - _bt_leafbuild(buildstate.spool, buildstate.spool2);\n> > + if (reltuples > 1000)\n>\n> I'm ok with some random magic constant here, but it seems worht putting it in\n> some constant / #define to make it more obvious.\n\nDone.\n\n> > + _bt_leafbuild(buildstate.spool, buildstate.spool2, false);\n> > + else\n> > + _bt_leafbuild(buildstate.spool, buildstate.spool2, true);\n>\n> Why duplicate the function call?\n\nFixed.\n\n> > /*\n> > * allocate workspace for a new, clean btree page, not linked to any siblings.\n> > + * If index is not built in shared buffers, buf should be InvalidBuffer\n> > */\n> > static Page\n> > -_bt_blnewpage(uint32 level)\n> > +_bt_blnewpage(uint32 level, Buffer buf)\n> > {\n> > Page page;\n> > BTPageOpaque opaque;\n> >\n> > - page = (Page) palloc(BLCKSZ);\n> > + if (buf)\n> > + page = BufferGetPage(buf);\n> > + else\n> > + page = (Page) palloc(BLCKSZ);\n> >\n> > /* Zero the page and set up standard page header info */\n> > _bt_pageinit(page, BLCKSZ);\n>\n> Ick, that seems pretty ugly API-wise and subsequently too likely to lead to\n> pfree()ing a page that's actually in shared buffers. 
I think it'd make sense\n> to separate the allocation from the initialization bits?\n\nFixed.\n\n> > @@ -635,8 +657,20 @@ _bt_blnewpage(uint32 level)\n> > * emit a completed btree page, and release the working storage.\n> > */\n> > static void\n> > -_bt_blwritepage(BTWriteState *wstate, Page page, BlockNumber blkno)\n> > +_bt_blwritepage(BTWriteState *wstate, Page page, BlockNumber blkno, Buffer buf)\n> > {\n> > + if (wstate->use_shared_buffers)\n> > + {\n> > + Assert(buf);\n> > + START_CRIT_SECTION();\n> > + MarkBufferDirty(buf);\n> > + if (wstate->btws_use_wal)\n> > + log_newpage_buffer(buf, true);\n> > + END_CRIT_SECTION();\n> > + _bt_relbuf(wstate->index, buf);\n> > + return;\n> > + }\n> > +\n> > /* XLOG stuff */\n> > if (wstate->btws_use_wal)\n> > {\n> > @@ -659,7 +693,7 @@ _bt_blwritepage(BTWriteState *wstate, Page page, BlockNumber blkno)\n> > smgrextend(RelationGetSmgr(wstate->index), MAIN_FORKNUM,\n> > wstate->btws_pages_written++,\n> > (char *) wstate->btws_zeropage,\n> > - true);\n> > + false);\n> > }\n>\n> Is there a good place to document the way we ensure durability for this path?\n\nI added some new comments. Let me know if you think that I am still\nmissing this documentation.\n\n> > + /*\n> > + * Extend the index relation upfront to reserve the metapage\n> > + */\n> > + if (wstate->use_shared_buffers)\n> > + {\n> > + /*\n> > + * We should not need to LockRelationForExtension() as no one else knows\n> > + * about this index yet?\n> > + * Extend the index relation by one block for the metapage. _bt_getbuf()\n> > + * is not used here as it does _bt_pageinit() which is one later by\n>\n> *done\n>\n>\n> > + * _bt_initmetapage(). We will fill in the metapage and write it out at\n> > + * the end of index build when we have all of the information required\n> > + * for the metapage. 
However, we initially extend the relation for it to\n> > + * occupy block 0 because it is much easier when using shared buffers to\n> > + * extend the relation with a block number that is always increasing by\n> > + * 1.\n>\n> Not quite following what you're trying to get at here. There generally is no\n> way to extend a relation except by increasing block numbers?\n\nI've updated this comment too. It should make more sense now.\n\n> > @@ -1425,7 +1544,10 @@ _bt_load(BTWriteState *wstate, BTSpool *btspool, BTSpool *btspool2)\n> > * still not be on disk when the crash occurs.\n> > */\n> > if (wstate->btws_use_wal)\n> > - smgrimmedsync(RelationGetSmgr(wstate->index), MAIN_FORKNUM);\n> > + {\n> > + if (!wstate->use_shared_buffers && RedoRecPtrChanged(wstate->redo))\n> > + smgrimmedsync(RelationGetSmgr(wstate->index), MAIN_FORKNUM);\n> > + }\n> > }\n> >\n> > /*\n>\n> This needs documentation. The whole comment above isn't accurate anymore afaict?\n\nShould be correct now.\n\n> > diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> > index 1616448368..63fd212787 100644\n> > --- a/src/backend/access/transam/xlog.c\n> > +++ b/src/backend/access/transam/xlog.c\n> > @@ -8704,6 +8704,20 @@ GetRedoRecPtr(void)\n> > return RedoRecPtr;\n> > }\n> >\n> > +bool\n> > +RedoRecPtrChanged(XLogRecPtr comparator_ptr)\n> > +{\n> > + XLogRecPtr ptr;\n> > +\n> > + SpinLockAcquire(&XLogCtl->info_lck);\n> > + ptr = XLogCtl->RedoRecPtr;\n> > + SpinLockRelease(&XLogCtl->info_lck);\n> > +\n> > + if (RedoRecPtr < ptr)\n> > + RedoRecPtr = ptr;\n> > + return RedoRecPtr != comparator_ptr;\n> > +}\n>\n> What's the deal with the < comparison?\n\nI saw that GetRedoRecPtr() does this and thought maybe I should do the\nsame in this function. I'm not quite sure where I should be getting the\nredo pointer.\n\nMaybe I should just call GetRedoRecPtr() and compare it to the one I\nsaved? 
I suppose I also thought that maybe someone else in the future\nwould like to have a function like RedoRecPtrChanged().\n\n- Melanie", "msg_date": "Mon, 10 Jan 2022 17:50:40 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "On Mon, Jan 10, 2022 at 5:50 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> I have attached a v3 which includes two commits -- one of which\n> implements the directmgr API and uses it and the other which adds\n> functionality to use either directmgr or bufmgr API during index build.\n>\n> Also registering for march commitfest.\n\nForgot directmgr.h. Attached v4\n\n- Melanie", "msg_date": "Tue, 11 Jan 2022 12:10:54 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "On Tue, Jan 11, 2022 at 12:10:54PM -0500, Melanie Plageman wrote:\n> On Mon, Jan 10, 2022 at 5:50 PM Melanie Plageman <melanieplageman@gmail.com> wrote:\n> >\n> > I have attached a v3 which includes two commits -- one of which\n> > implements the directmgr API and uses it and the other which adds\n> > functionality to use either directmgr or bufmgr API during index build.\n> >\n> > Also registering for march commitfest.\n> \n> Forgot directmgr.h. 
Attached v4\n\nThanks - I had reconstructed it first ;)\n\nI think the ifndef should be outside the includes:\n\n> +++ b/src/include/storage/directmgr.h\n..\n> +#include \"access/xlogdefs.h\"\n..\n> +#ifndef DIRECTMGR_H\n> +#define DIRECTMGR_H\n\nThis is failing on windows CI when I use initdb --data-checksums, as attached.\n\nhttps://cirrus-ci.com/task/5612464120266752\nhttps://api.cirrus-ci.com/v1/artifact/task/5612464120266752/regress_diffs/src/test/regress/regression.diffs\n\n+++ c:/cirrus/src/test/regress/results/bitmapops.out\t2022-01-13 00:47:46.704621200 +0000\n..\n+ERROR: could not read block 0 in file \"base/16384/30310\": read only 0 of 8192 bytes\n\n-- \nJustin", "msg_date": "Thu, 13 Jan 2022 09:52:55 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "On Wed, Sep 29, 2021 at 2:36 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> unbuffered_write() and unbuffered_extend() might be able to be used even\n> if unbuffered_prep() and unbuffered_finish() are not used -- for example\n> hash indexes do something I don't entirely understand in which they call\n> smgrextend() directly when allocating buckets but then initialize the\n> new bucket pages using the bufmgr machinery.\n\nMy first thought was that someone might do this to make sure that we\ndon't run out of disk space after initializing some but not all of the\nbuckets. Someone might have some reason for wanting to avoid that\ncorner case. However, in _hash_init() that explanation doesn't make\nany sense, because an abort would destroy the entire relation. And in\n_hash_alloc_buckets() the variable \"zerobuf\" points to a buffer that\nis not, in fact, all zeroes. 
So my guess is this is just old, crufty\ncode - either whatever reasons somebody had for doing it that way are\nno longer valid, or there wasn't any good reason even at the time.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jan 2022 12:18:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "On Thu, Jan 13, 2022 at 09:52:55AM -0600, Justin Pryzby wrote:\n> This is failing on windows CI when I use initdb --data-checksums, as attached.\n> \n> https://cirrus-ci.com/task/5612464120266752\n> https://api.cirrus-ci.com/v1/artifact/task/5612464120266752/regress_diffs/src/test/regress/regression.diffs\n> \n> +++ c:/cirrus/src/test/regress/results/bitmapops.out\t2022-01-13 00:47:46.704621200 +0000\n> ..\n> +ERROR: could not read block 0 in file \"base/16384/30310\": read only 0 of 8192 bytes\n\nThe failure isn't consistent, so I double checked my report. I have some more\ndetails:\n\nThe problem occurs maybe only ~25% of the time.\n\nThe issue is in the 0001 patch.\n\ndata-checksums isn't necessary to hit the issue.\n\nerrlocation says: LOCATION: mdread, md.c:686 (the only place the error\nexists)\n\nWith Andres' windows crash patch, I obtained a backtrace - attached.\nhttps://cirrus-ci.com/task/5978171861368832\nhttps://api.cirrus-ci.com/v1/artifact/task/5978171861368832/crashlog/crashlog-postgres.exe_0fa8_2022-01-16_02-54-35-291.txt\n\nMaybe it's a race condition or synchronization problem that nowhere else tends\nto hit.\n\nSeparate from this issue, I wonder if it'd be useful to write a DEBUG log\nshowing when btree uses shared_buffers vs fsync. And a regression test which\nfirst SETs client_min_messages=debug to capture the debug log to demonstrate\nwhen/that new code path is being hit.
I'm not sure if that would be good to\nmerge, but it may be useful for now.\n\n-- \nJustin", "msg_date": "Sun, 16 Jan 2022 14:25:59 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "On Sun, Jan 16, 2022 at 02:25:59PM -0600, Justin Pryzby wrote:\n> On Thu, Jan 13, 2022 at 09:52:55AM -0600, Justin Pryzby wrote:\n> > This is failing on windows CI when I use initdb --data-checksums, as attached.\n> > \n> > https://cirrus-ci.com/task/5612464120266752\n> > https://api.cirrus-ci.com/v1/artifact/task/5612464120266752/regress_diffs/src/test/regress/regression.diffs\n> > \n> > +++ c:/cirrus/src/test/regress/results/bitmapops.out\t2022-01-13 00:47:46.704621200 +0000\n> > ..\n> > +ERROR: could not read block 0 in file \"base/16384/30310\": read only 0 of 8192 bytes\n> \n> The failure isn't consistent, so I double checked my report. I have some more\n> details:\n> \n> The problem occurs maybe only ~25% of the time.\n> \n> The issue is in the 0001 patch.\n> \n> data-checksums isn't necessary to hit the issue.\n> \n> errlocation says: LOCATION: mdread, md.c:686 (the only place the error\n> exists)\n> \n> With Andres' windows crash patch, I obtained a backtrace - attached.\n> https://cirrus-ci.com/task/5978171861368832\n> https://api.cirrus-ci.com/v1/artifact/task/5978171861368832/crashlog/crashlog-postgres.exe_0fa8_2022-01-16_02-54-35-291.txt\n> \n> Maybe its a race condition or synchronization problem that nowhere else tends\n> to hit.\n\nI meant to say that I had not seen this issue anywhere but windows.\n\nBut now, by chance, I still had the 0001 patch in my tree, and hit the same\nissue on linux:\n\nhttps://cirrus-ci.com/task/4550618281934848\n+++ /tmp/cirrus-ci-build/src/bin/pg_upgrade/tmp_check/regress/results/tuplesort.out\t2022-01-17 16:06:35.759108172 +0000\n EXPLAIN (COSTS OFF)\n SELECT id, noabort_increasing, noabort_decreasing FROM 
abbrev_abort_uuids ORDER BY noabort_increasing LIMIT 5;\n+ERROR: could not read block 0 in file \"base/16387/t3_36794\": read only 0 of 8192 bytes\n\n\n", "msg_date": "Mon, 17 Jan 2022 11:22:07 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "Hi,\n\nOn 2022-01-11 12:10:54 -0500, Melanie Plageman wrote:\n> On Mon, Jan 10, 2022 at 5:50 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > I have attached a v3 which includes two commits -- one of which\n> > implements the directmgr API and uses it and the other which adds\n> > functionality to use either directmgr or bufmgr API during index build.\n> >\n> > Also registering for march commitfest.\n> \n> Forgot directmgr.h. Attached v4\n\nAre you looking at the failures Justin pointed out? Something isn't quite\nright yet. See https://postgr.es/m/20220116202559.GW14051%40telsasoft.com and\nthe subsequent mail about it also triggering once on linux.\n\n\n> Thus, the backend must ensure that\n> either the Redo pointer has not moved or that the data is fsync'd before\n> freeing the page.\n\n\"freeing\"?\n\n\n> This is not a problem with pages written in shared buffers because the\n> checkpointer will block until all buffers that were dirtied before it\n> began finish before it moves the Redo pointer past their associated WAL\n> entries.\n\n> This commit makes two main changes:\n> \n> 1) It wraps smgrextend() and smgrwrite() in functions from a new API\n> for writing data outside of shared buffers, directmgr.\n> \n> 2) It saves the XLOG Redo pointer location before doing the write or\n> extend. It also adds an fsync request for the page to the\n> checkpointer's pending-ops table. Then, after doing the write or\n> extend, if the Redo pointer has moved (meaning a checkpoint has\n> started since it saved it last), then the backend fsync's the page\n> itself.
Otherwise, it lets the checkpointer take care of fsync'ing\n> the page the next time it processes the pending-ops table.\n\nWhy combine those two into one commit?\n\n\n> @@ -654,9 +657,8 @@ vm_extend(Relation rel, BlockNumber vm_nblocks)\n> \t/* Now extend the file */\n> \twhile (vm_nblocks_now < vm_nblocks)\n> \t{\n> -\t\tPageSetChecksumInplace((Page) pg.data, vm_nblocks_now);\n> -\n> -\t\tsmgrextend(reln, VISIBILITYMAP_FORKNUM, vm_nblocks_now, pg.data, false);\n> +\t\t// TODO: aren't these pages empty? why checksum them\n> +\t\tunbuffered_extend(&ub_wstate, VISIBILITYMAP_FORKNUM, vm_nblocks_now, (Page) pg.data, false);\n\nYea, it's a bit odd. PageSetChecksumInplace() will just return immediately:\n\n\t/* If we don't need a checksum, just return */\n\tif (PageIsNew(page) || !DataChecksumsEnabled())\n\t\treturn;\n\nOTOH, it seems easier to have it there than to later forget it, when\ne.g. adding some actual initial content to the pages during the smgrextend().\n\n\n\n> @@ -560,6 +562,8 @@ _bt_leafbuild(BTSpool *btspool, BTSpool *btspool2)\n> \n> \twstate.heap = btspool->heap;\n> \twstate.index = btspool->index;\n> +\twstate.ub_wstate.smgr_rel = RelationGetSmgr(btspool->index);\n> +\twstate.ub_wstate.redo = InvalidXLogRecPtr;\n> \twstate.inskey = _bt_mkscankey(wstate.index, NULL);\n> \t/* _bt_mkscankey() won't set allequalimage without metapage */\n> \twstate.inskey->allequalimage = _bt_allequalimage(wstate.index, true);\n> @@ -656,31 +660,19 @@ _bt_blwritepage(BTWriteState *wstate, Page page, BlockNumber blkno)\n> \t\tif (!wstate->btws_zeropage)\n> \t\t\twstate->btws_zeropage = (Page) palloc0(BLCKSZ);\n> \t\t/* don't set checksum for all-zero page */\n> -\t\tsmgrextend(RelationGetSmgr(wstate->index), MAIN_FORKNUM,\n> -\t\t\t\t wstate->btws_pages_written++,\n> -\t\t\t\t (char *) wstate->btws_zeropage,\n> -\t\t\t\t true);\n> +\t\tunbuffered_extend(&wstate->ub_wstate, MAIN_FORKNUM, wstate->btws_pages_written++, wstate->btws_zeropage, true);\n> \t}\n\nThere's a bunch 
of long lines in here...\n\n\n> -\t/*\n> -\t * When we WAL-logged index pages, we must nonetheless fsync index files.\n> -\t * Since we're building outside shared buffers, a CHECKPOINT occurring\n> -\t * during the build has no way to flush the previously written data to\n> -\t * disk (indeed it won't know the index even exists). A crash later on\n> -\t * would replay WAL from the checkpoint, therefore it wouldn't replay our\n> -\t * earlier WAL entries. If we do not fsync those pages here, they might\n> -\t * still not be on disk when the crash occurs.\n> -\t */\n> \tif (wstate->btws_use_wal)\n> -\t\tsmgrimmedsync(RelationGetSmgr(wstate->index), MAIN_FORKNUM);\n> +\t\tunbuffered_finish(&wstate->ub_wstate, MAIN_FORKNUM);\n> }\n\nThe API of unbuffered_finish() only sometimes getting called, but\nunbuffered_prep() being unconditional, strikes me as prone to bugs. Perhaps\nit'd make sense to pass in whether the relation needs to be synced or not instead?\n\n\n\n> spgbuildempty(Relation index)\n> {\n> \tPage\t\tpage;\n> +\tUnBufferedWriteState wstate;\n> +\n> +\twstate.smgr_rel = RelationGetSmgr(index);\n> +\tunbuffered_prep(&wstate);\n\nI don't think that's actually safe, and one of the instances could be the\ncause of the bug CI is seeing:\n\n * Note: since a relcache flush can cause the file handle to be closed again,\n * it's unwise to hold onto the pointer returned by this function for any\n * long period. Recommended practice is to just re-execute RelationGetSmgr\n * each time you need to access the SMgrRelation.
It's quite cheap in\n * comparison to whatever an smgr function is going to do.\n */\nstatic inline SMgrRelation\nRelationGetSmgr(Relation rel)\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 23 Jan 2022 13:55:41 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "Hi,\nv5 attached and all email feedback addressed below\n\nOn Thu, Jan 13, 2022 at 12:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Sep 29, 2021 at 2:36 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > unbuffered_write() and unbuffered_extend() might be able to be used even\n> > if unbuffered_prep() and unbuffered_finish() are not used -- for example\n> > hash indexes do something I don't entirely understand in which they call\n> > smgrextend() directly when allocating buckets but then initialize the\n> > new bucket pages using the bufmgr machinery.\n>\n> My first thought was that someone might do this to make sure that we\n> don't run out of disk space after initializing some but not all of the\n> buckets. Someone might have some reason for wanting to avoid that\n> corner case. However, in _hash_init() that explanation doesn't make\n> any sense, because an abort would destroy the entire relation. And in\n> _hash_alloc_buckets() the variable \"zerobuf\" points to a buffer that\n> is not, in fact, all zeroes. So my guess is this is just old, crufty\n> code - either whatever reasons somebody had for doing it that way are\n> no longer valid, or there wasn't any good reason even at the time.\n\nI notice in the comment before _hash_alloc_buckets() is called, it says\n\n/*\n * We treat allocation of buckets as a separate WAL-logged action.\n * Even if we fail after this operation, won't leak bucket pages;\n * rather, the next split will consume this space. 
In any case, even\n * without failure we don't use all the space in one split operation.\n */\n\nDoes this mean that it is okay that these pages are written outside of\nshared buffers and, though skipFsync is passed as false, a checkpoint\nstarting and finishing between writing the WAL and\nregister_dirty_segment() followed by a crash could result in lost data?\n\nOn Thu, Jan 13, 2022 at 10:52 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I think the ifndef should be outside the includes:\n\nThanks, fixed!\n\nOn Sun, Jan 16, 2022 at 3:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Separate from this issue, I wonder if it'd be useful to write a DEBUG log\n> showing when btree uses shared_buffers vs fsync. And a regression test which\n> first SETs client_min_messages=debug to capture the debug log to demonstrate\n> when/that new code path is being hit. I'm not sure if that would be good to\n> merge, but it may be useful for now.\n\nI will definitely think about doing this.\n\nOn Mon, Jan 17, 2022 at 12:22 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sun, Jan 16, 2022 at 02:25:59PM -0600, Justin Pryzby wrote:\n> > On Thu, Jan 13, 2022 at 09:52:55AM -0600, Justin Pryzby wrote:\n> > > This is failing on windows CI when I use initdb --data-checksums, as attached.\n> > >\n> > > https://cirrus-ci.com/task/5612464120266752\n> > > https://api.cirrus-ci.com/v1/artifact/task/5612464120266752/regress_diffs/src/test/regress/regression.diffs\n> > >\n> > > +++ c:/cirrus/src/test/regress/results/bitmapops.out 2022-01-13 00:47:46.704621200 +0000\n> > > ..\n> > > +ERROR: could not read block 0 in file \"base/16384/30310\": read only 0 of 8192 bytes\n> >\n> > The failure isn't consistent, so I double checked my report. 
I have some more\n> > details:\n> >\n> > The problem occurs maybe only ~25% of the time.\n> >\n> > The issue is in the 0001 patch.\n> >\n> > data-checksums isn't necessary to hit the issue.\n> >\n> > errlocation says: LOCATION: mdread, md.c:686 (the only place the error\n> > exists)\n> >\n> > With Andres' windows crash patch, I obtained a backtrace - attached.\n> > https://cirrus-ci.com/task/5978171861368832\n> > https://api.cirrus-ci.com/v1/artifact/task/5978171861368832/crashlog/crashlog-postgres.exe_0fa8_2022-01-16_02-54-35-291.txt\n> >\n> > Maybe its a race condition or synchronization problem that nowhere else tends\n> > to hit.\n>\n> I meant to say that I had not seen this issue anywhere but windows.\n>\n> But now, by chance, I still had the 0001 patch in my tree, and hit the same\n> issue on linux:\n>\n> https://cirrus-ci.com/task/4550618281934848\n> +++ /tmp/cirrus-ci-build/src/bin/pg_upgrade/tmp_check/regress/results/tuplesort.out 2022-01-17 16:06:35.759108172 +0000\n> EXPLAIN (COSTS OFF)\n> SELECT id, noabort_increasing, noabort_decreasing FROM abbrev_abort_uuids ORDER BY noabort_increasing LIMIT 5;\n> +ERROR: could not read block 0 in file \"base/16387/t3_36794\": read only 0 of 8192 bytes\n\nYes, I think this is due to the problem Andres mentioned with my saving\nthe SMgrRelation and then trying to use it after a relcache flush. The\nnew patch version addresses this by always re-executing\nRelationGetSmgr() as recommended in the comments.\n\nOn Sun, Jan 23, 2022 at 4:55 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-01-11 12:10:54 -0500, Melanie Plageman wrote:\n> > On Mon, Jan 10, 2022 at 5:50 PM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > Thus, the backend must ensure that\n> > either the Redo pointer has not moved or that the data is fsync'd before\n> > freeing the page.\n>\n> \"freeing\"?\n\nYes, I agree this wording was confusing/incorrect. 
I meant before it\nmoves on (I said freeing because it usually pfrees() the page in memory\nthat it was writing from). I've changed the commit message.\n\n>\n> > This is not a problem with pages written in shared buffers because the\n> > checkpointer will block until all buffers that were dirtied before it\n> > began finish before it moves the Redo pointer past their associated WAL\n> > entries.\n>\n> > This commit makes two main changes:\n> >\n> > 1) It wraps smgrextend() and smgrwrite() in functions from a new API\n> > for writing data outside of shared buffers, directmgr.\n> >\n> > 2) It saves the XLOG Redo pointer location before doing the write or\n> > extend. It also adds an fsync request for the page to the\n> > checkpointer's pending-ops table. Then, after doing the write or\n> > extend, if the Redo pointer has moved (meaning a checkpoint has\n> > started since it saved it last), then the backend fsync's the page\n> > itself. Otherwise, it lets the checkpointer take care of fsync'ing\n> > the page the next time it processes the pending-ops table.\n>\n> Why combine those two into one commit?\n\nI've separated it into three commits -- the above two + a separate\ncommit that actually has the btree index use the self-fsync\noptimization.\n\n> > @@ -654,9 +657,8 @@ vm_extend(Relation rel, BlockNumber vm_nblocks)\n> > /* Now extend the file */\n> > while (vm_nblocks_now < vm_nblocks)\n> > {\n> > - PageSetChecksumInplace((Page) pg.data, vm_nblocks_now);\n> > -\n> > - smgrextend(reln, VISIBILITYMAP_FORKNUM, vm_nblocks_now, pg.data, false);\n> > + // TODO: aren't these pages empty? why checksum them\n> > + unbuffered_extend(&ub_wstate, VISIBILITYMAP_FORKNUM, vm_nblocks_now, (Page) pg.data, false);\n>\n> Yea, it's a bit odd. 
PageSetChecksumInplace() will just return immediately:\n>\n> /* If we don't need a checksum, just return */\n> if (PageIsNew(page) || !DataChecksumsEnabled())\n> return;\n>\n> OTOH, it seems easier to have it there than to later forget it, when\n> e.g. adding some actual initial content to the pages during the smgrextend().\n\nI've left these as is and removed the comment.\n\n> > @@ -560,6 +562,8 @@ _bt_leafbuild(BTSpool *btspool, BTSpool *btspool2)\n> >\n> > wstate.heap = btspool->heap;\n> > wstate.index = btspool->index;\n> > + wstate.ub_wstate.smgr_rel = RelationGetSmgr(btspool->index);\n> > + wstate.ub_wstate.redo = InvalidXLogRecPtr;\n> > wstate.inskey = _bt_mkscankey(wstate.index, NULL);\n> > /* _bt_mkscankey() won't set allequalimage without metapage */\n> > wstate.inskey->allequalimage = _bt_allequalimage(wstate.index, true);\n> > @@ -656,31 +660,19 @@ _bt_blwritepage(BTWriteState *wstate, Page page, BlockNumber blkno)\n> > if (!wstate->btws_zeropage)\n> > wstate->btws_zeropage = (Page) palloc0(BLCKSZ);\n> > /* don't set checksum for all-zero page */\n> > - smgrextend(RelationGetSmgr(wstate->index), MAIN_FORKNUM,\n> > - wstate->btws_pages_written++,\n> > - (char *) wstate->btws_zeropage,\n> > - true);\n> > + unbuffered_extend(&wstate->ub_wstate, MAIN_FORKNUM, wstate->btws_pages_written++, wstate->btws_zeropage, true);\n> > }\n>\n> There's a bunch of long lines in here...\n\nFixed.\n\n> > - /*\n> > - * When we WAL-logged index pages, we must nonetheless fsync index files.\n> > - * Since we're building outside shared buffers, a CHECKPOINT occurring\n> > - * during the build has no way to flush the previously written data to\n> > - * disk (indeed it won't know the index even exists). A crash later on\n> > - * would replay WAL from the checkpoint, therefore it wouldn't replay our\n> > - * earlier WAL entries. 
If we do not fsync those pages here, they might\n> > - * still not be on disk when the crash occurs.\n> > - */\n> > if (wstate->btws_use_wal)\n> > - smgrimmedsync(RelationGetSmgr(wstate->index), MAIN_FORKNUM);\n> > + unbuffered_finish(&wstate->ub_wstate, MAIN_FORKNUM);\n> > }\n>\n> The API of unbuffered_finish() only sometimes getting called, but\n> unbuffered_prep() being unconditional, strikes me as prone to bugs. Perhaps\n> it'd make sense to pass in whether the relation needs to be synced or not instead?\n\nI've fixed this. Now unbuffered_prep() and unbuffered_finish() will\nalways be called. I've added a few options to unbuffered_prep() to\nindicate whether or not the smgrimmedsync() should be called in the end\nas well as whether or not skipFsync should be passed as true or false to\nsmgrextend() and smgrwrite() and whether or not the avoiding self-fsync\noptimization should be used.\n\nI found it best to do it this way because simply passing whether or not\nto do the sync to unbuffered_finish() did not allow me to distinguish\nbetween the case in which the sync should not be done ever (because the\ncaller did not call smgrimmedsync() or because the relation does not\nrequire WAL) and when smgrimmedsync() should only be done if the redo\npointer has changed (in the case of the optimization).\n\nI thought it actually made for a better API to specify up front (in\nunbuffered_prep()) whether or not the caller should be prepared to do\nthe fsync itself or not and whether or not it wanted to do the\noptimization.
It feels less prone to error and omission.\n\n> > spgbuildempty(Relation index)\n> > {\n> > Page page;\n> > + UnBufferedWriteState wstate;\n> > +\n> > + wstate.smgr_rel = RelationGetSmgr(index);\n> > + unbuffered_prep(&wstate);\n>\n> I don't think that's actually safe, and one of the instances could be the\n> cause cause of the bug CI is seeing:\n>\n> * Note: since a relcache flush can cause the file handle to be closed again,\n> * it's unwise to hold onto the pointer returned by this function for any\n> * long period. Recommended practice is to just re-execute RelationGetSmgr\n> * each time you need to access the SMgrRelation. It's quite cheap in\n> * comparison to whatever an smgr function is going to do.\n> */\n> static inline SMgrRelation\n> RelationGetSmgr(Relation rel)\n\nYes, I've changed this in the attached v5.\n\nOne question I have is whether or not other callers than btree index\ncould benefit from the self-fsync avoidance optimization.\n\nAlso, after taking another look at gist index build, I notice that\nsmgrimmedsync() is not done anywhere and skipFsync is always passed as\ntrue, so what happens if a full checkpoint and a crash happens between\nWAL-logging and whenever the dirty pages make it to permanent storage?\n\n- Melanie", "msg_date": "Wed, 9 Feb 2022 13:49:30 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "> On Wed, Feb 09, 2022 at 01:49:30PM -0500, Melanie Plageman wrote:\n> Hi,\n> v5 attached and all email feedback addressed below\n\nThanks for the patch, it looks quite good.\n\nI don't see it in the discussion, so naturally curious -- why directmgr\nis not used for bloom index, e.g. 
in blbuildempty?\n\n> On Sun, Jan 16, 2022 at 3:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Separate from this issue, I wonder if it'd be useful to write a DEBUG log\n> > showing when btree uses shared_buffers vs fsync. And a regression test which\n> > first SETs client_min_messages=debug to capture the debug log to demonstrate\n> > when/that new code path is being hit. I'm not sure if that would be good to\n> > merge, but it may be useful for now.\n\nI can't find the thread right away, but I vaguely remember a similar\nsituation where such an approach, as a main way to test the patch, had\ncaused some disagreement. Of course, for the development phase it would\nindeed be convenient.\n\n\n", "msg_date": "Sun, 13 Feb 2022 15:33:13 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "Rebased to appease cfbot.\n\nI ran these patches under a branch which shows code coverage in cirrus. It\nlooks good to my eyes.\nhttps://api.cirrus-ci.com/v1/artifact/task/5212346552418304/coverage/coverage/00-index.html\n\nAre these patches being considered for v15 ?", "msg_date": "Wed, 2 Mar 2022 19:09:49 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "On Wed, Mar 2, 2022 at 8:09 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Rebased to appease cfbot.\n>\n> I ran these patches under a branch which shows code coverage in cirrus. It\n> looks good to my eyes.\n> https://api.cirrus-ci.com/v1/artifact/task/5212346552418304/coverage/coverage/00-index.html\n\nthanks for doing that and for the rebase!
since I made updates, the\nattached version 6 is also rebased.\n\nTo Dmitry's question:\n\nOn Sun, Feb 13, 2022 at 9:33 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Wed, Feb 09, 2022 at 01:49:30PM -0500, Melanie Plageman wrote:\n>\n> I don't see it in the discussion, so naturally curious -- why directmgr\n> is not used for bloom index, e.g. in blbuildempty?\n\nthanks for pointing this out. blbuildempty() is also included now. bloom\ndoesn't seem to use smgr* anywhere except blbuildempty(), so that is the\nonly place I made changes in bloom index build.\n\nv6 has the following updates/changes:\n\n- removed erroneous extra calls to unbuffered_prep() and\n unbuffered_finish() in GiST and btree index builds\n\n- removed unnecessary includes\n\n- some comments were updated to accurately reflect use of directmgr\n\n- smgrwrite doesn't WAL-log anymore. This one I'm not sure about. I\n think it makes sense that unbuffered_extend() of non-empty pages of\n WAL-logged relations (or the init fork of unlogged relations) do\n log_newpage(), but I wasn't so sure about smgrwrite().\n\n Currently all callers of smgrwrite() do log_newpage() and anyone using\n mdwrite() will end up writing the whole page. However, it seems\n possible that a caller of the directmgr API might wish to do a write\n to a particular offset and, either because of that, or, for some other\n reason, require different logging than that done in log_newpage().\n\n I didn't want to make a separate wrapper function for WAL-logging in\n directmgr because it felt like one more step to forget.\n\n- heapam_relation_set_new_filenode doesn't use directmgr API anymore for\n unlogged relations. It doesn't extend or write, so it seemed like a\n special case better left alone.\n\n Note that the ambuildempty() functions which also write to the init\n fork of an unlogged relation still use the directmgr API. 
It is a bit\n confusing because they pass do_wal=true to unbuffered_prep() even\n though they are unlogged relations because they must log and fsync.\n\n- interface changes to unbuffered_prep()\n I removed the parameters to unbuffered_prep() which required the user\n to specify if fsync should be added to pendingOps or done with\n smgrimmedsync(). Understanding all of the combinations of these\n parameters and when they were needed was confusing and the interface\n felt like a foot gun. Special cases shouldn't use this interface.\n\n I prefer the idea that users of this API expect that\n 1) empty pages won't be checksummed or WAL logged\n 2) for relations that are WAL-logged, when the build is done, the\n relation will be fsync'd by the backend (unless the fsync optimization\n is being used)\n 3) the only case in which fsync requests are added to the pendingOps\n queue is when the fsync optimization is being used (which saves the\n redo pointer and checks it later to determine if it needs to fsync\n itself)\n\n I also added the parameter \"do_wal\" to unbuffered_prep() and the\n UnBufferedWriteState struct. This is used when extending the file to\n determine whether or not to log_newpage(). unbuffered_extend() and\n unbuffered_write() no longer take do_wal as a parameter.\n\n Note that callers need to pass do_wal=true/false to unbuffered_prep()\n based on whether or not they want log_newpage() called during\n unbuffered_extend()--not simply based on whether or not the relation\n in question is WAL-logged.\n\n do_wal is the only member of the UnBufferedWriteState struct in the\n first patch in the set, but I think it makes sense to keep the struct\n around since I anticipate that the patch containing the other members\n needed for the fsync optimization will be committed at the same time.\n\n One final note on unbuffered_prep() -- I am thinking of renaming\n \"do_optimization\" to \"try_optimization\" or maybe\n \"request_fsync_optimization\".
The interface (of unbuffered_prep())\n would be better if we always tried to do the optimization when\n relevant (when the relation is WAL-logged).\n\n- freespace map, visimap, and hash index don't use directmgr API anymore\n For most use cases writing/extending outside shared buffers, it isn't\n safe to rely on requesting fsync from checkpointer.\n\n visimap, fsm, and hash index all request fsync from checkpointer for\n the pages they write with smgrextend() and don't smgrimmedsync() when\n finished adding pages (even when the hash index is WAL-logged).\n\n Supporting these exceptions made the interface incoherent, so I cut\n them.\n\n- added unbuffered_extend_range()\n This one is a bit unfortunate. Because GiST index build uses\n log_newpages() to log a whole page range but calls smgrextend() for\n each of those pages individually, I couldn't use the\n unbuffered_extend() function easily.\n\n So, I thought it might be useful in other contexts as well to have a\n function which calls smgrextend() for a range of pages and then calls\n log_newpages(). I've added this.\n\n However, there are two parts of GiST index build flush ready pages\n that didn't work with this either.\n\n The first is that it does an error check on the block numbers one at a\n time while looping through them to write the pages. To retain this\n check, I loop through the ready pages an extra time before calling\n unbuffered_extend(), which is probably not acceptable.\n\n Also, GiST needs to use a custom LSN for the pages. To achieve this, I\n added a \"custom_lsn\" parameter to unbuffered_extend_range(). This\n isn't great either. 
If this was a more common case, I could pass the\n custom LSN to unbuffered_prep().\n\n- Melanie", "msg_date": "Fri, 4 Mar 2022 17:03:09 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "Hi,\n\n> From a06407b19c8d168ea966e45c0e483b38d49ddc12 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Fri, 4 Mar 2022 14:48:39 -0500\n> Subject: [PATCH v6 1/4] Add unbuffered IO API\n\nI think this or one of the following patches should update src/backend/access/transam/README\n\n\n> @@ -164,6 +164,16 @@ void\n> blbuildempty(Relation index)\n> {\n> \tPage\t\tmetapage;\n> +\tUnBufferedWriteState wstate;\n> +\n> +\t/*\n> +\t * Though this is an unlogged relation, pass do_wal=true since the init\n> +\t * fork of an unlogged index must be wal-logged and fsync'd. This currently\n> +\t * has no effect, as unbuffered_write() does not do log_newpage()\n> +\t * internally. However, were this to be replaced with unbuffered_extend(),\n> +\t * do_wal must be true to ensure the data is logged and fsync'd.\n> +\t */\n> +\tunbuffered_prep(&wstate, true);\n\nWonder if unbuffered_write should have an assert checking that no writes to\nINIT_FORKNUM are non-durable? Looks like it's pretty easy to forget...\n\nI'd choose unbuffered_begin over _prep().\n\n\n> \t/* Construct metapage. */\n> \tmetapage = (Page) palloc(BLCKSZ);\n> @@ -176,18 +186,13 @@ blbuildempty(Relation index)\n> \t * XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record. 
Therefore, we need\n> \t * this even when wal_level=minimal.\n> \t */\n> -\tPageSetChecksumInplace(metapage, BLOOM_METAPAGE_BLKNO);\n> -\tsmgrwrite(RelationGetSmgr(index), INIT_FORKNUM, BLOOM_METAPAGE_BLKNO,\n> -\t\t\t (char *) metapage, true);\n> +\tunbuffered_write(&wstate, RelationGetSmgr(index), INIT_FORKNUM,\n> +\t\t\tBLOOM_METAPAGE_BLKNO, metapage);\n> +\n> \tlog_newpage(&(RelationGetSmgr(index))->smgr_rnode.node, INIT_FORKNUM,\n> \t\t\t\tBLOOM_METAPAGE_BLKNO, metapage, true);\n> \n> -\t/*\n> -\t * An immediate sync is required even if we xlog'd the page, because the\n> -\t * write did not go through shared_buffers and therefore a concurrent\n> -\t * checkpoint may have moved the redo pointer past our xlog record.\n> -\t */\n> -\tsmgrimmedsync(RelationGetSmgr(index), INIT_FORKNUM);\n> +\tunbuffered_finish(&wstate, RelationGetSmgr(index), INIT_FORKNUM);\n> }\n\nI mildly prefer complete over finish, but ...\n\n\n\n> - * We can't use the normal heap_insert function to insert into the new\n> - * heap, because heap_insert overwrites the visibility information.\n> - * We use a special-purpose raw_heap_insert function instead, which\n> - * is optimized for bulk inserting a lot of tuples, knowing that we have\n> - * exclusive access to the heap. raw_heap_insert builds new pages in\n> - * local storage. When a page is full, or at the end of the process,\n> - * we insert it to WAL as a single record and then write it to disk\n> - * directly through smgr. Note, however, that any data sent to the new\n> - * heap's TOAST table will go through the normal bufmgr.\n> + * We can't use the normal heap_insert function to insert into the new heap,\n> + * because heap_insert overwrites the visibility information. We use a\n> + * special-purpose raw_heap_insert function instead, which is optimized for\n> + * bulk inserting a lot of tuples, knowing that we have exclusive access to the\n> + * heap. raw_heap_insert builds new pages in local storage. 
When a page is\n> + * full, or at the end of the process, we insert it to WAL as a single record\n> + * and then write it to disk directly through directmgr. Note, however, that\n> + * any data sent to the new heap's TOAST table will go through the normal\n> + * bufmgr.\n\nPart of this just reflows existing lines that seem otherwise unchanged, making\nit harder to see the actual change.\n\n\n\n> @@ -643,13 +644,6 @@ _bt_blnewpage(uint32 level)\n> static void\n> _bt_blwritepage(BTWriteState *wstate, Page page, BlockNumber blkno)\n> {\n> -\t/* XLOG stuff */\n> -\tif (wstate->btws_use_wal)\n> -\t{\n> -\t\t/* We use the XLOG_FPI record type for this */\n> -\t\tlog_newpage(&wstate->index->rd_node, MAIN_FORKNUM, blkno, page, true);\n> -\t}\n> -\n> \t/*\n> \t * If we have to write pages nonsequentially, fill in the space with\n> \t * zeroes until we come back and overwrite. This is not logically\n> @@ -661,33 +655,33 @@ _bt_blwritepage(BTWriteState *wstate, Page page, BlockNumber blkno)\n> \t{\n> \t\tif (!wstate->btws_zeropage)\n> \t\t\twstate->btws_zeropage = (Page) palloc0(BLCKSZ);\n> -\t\t/* don't set checksum for all-zero page */\n> -\t\tsmgrextend(RelationGetSmgr(wstate->index), MAIN_FORKNUM,\n> -\t\t\t\t wstate->btws_pages_written++,\n> -\t\t\t\t (char *) wstate->btws_zeropage,\n> -\t\t\t\t true);\n> +\n> +\t\tunbuffered_extend(&wstate->ub_wstate, RelationGetSmgr(wstate->index),\n> +\t\t\t\tMAIN_FORKNUM, wstate->btws_pages_written++,\n> +\t\t\t\twstate->btws_zeropage, true);\n> \t}\n\nFor a bit I thought the true argument to unbuffered_extend was about\ndurability or registering fsyncs. 
Perhaps worth making it flags argument with\nan enum for flag arguments?\n\n\n> diff --git a/src/backend/storage/direct/directmgr.c b/src/backend/storage/direct/directmgr.c\n> new file mode 100644\n> index 0000000000..42c37daa7a\n> --- /dev/null\n> +++ b/src/backend/storage/direct/directmgr.c\n> @@ -0,0 +1,98 @@\n\nNow that the API is called unbuffered, the filename / directory seem\nconfusing.\n\n\n> +void\n> +unbuffered_prep(UnBufferedWriteState *wstate, bool do_wal)\n> +{\n> +\twstate->do_wal = do_wal;\n> +}\n> +\n> +void\n> +unbuffered_extend(UnBufferedWriteState *wstate, SMgrRelation\n> +\t\tsmgrrel, ForkNumber forknum, BlockNumber blocknum, Page page, bool\n> +\t\tempty)\n> +{\n> +\t/*\n> +\t * Don't checksum empty pages\n> +\t */\n> +\tif (!empty)\n> +\t\tPageSetChecksumInplace(page, blocknum);\n> +\n> +\tsmgrextend(smgrrel, forknum, blocknum, (char *) page, true);\n> +\n> +\t/*\n> +\t * Don't WAL-log empty pages\n> +\t */\n> +\tif (!empty && wstate->do_wal)\n> +\t\tlog_newpage(&(smgrrel)->smgr_rnode.node, forknum,\n> +\t\t\t\t\tblocknum, page, true);\n> +}\n> +\n> +void\n> +unbuffered_extend_range(UnBufferedWriteState *wstate, SMgrRelation smgrrel,\n> +\t\tForkNumber forknum, int num_pages, BlockNumber *blocknums, Page *pages,\n> +\t\tbool empty, XLogRecPtr custom_lsn)\n> +{\n> +\tfor (int i = 0; i < num_pages; i++)\n> +\t{\n> +\t\tPage\t\tpage = pages[i];\n> +\t\tBlockNumber blkno = blocknums[i];\n> +\n> +\t\tif (!XLogRecPtrIsInvalid(custom_lsn))\n> +\t\t\tPageSetLSN(page, custom_lsn);\n> +\n> +\t\tif (!empty)\n> +\t\t\tPageSetChecksumInplace(page, blkno);\n> +\n> +\t\tsmgrextend(smgrrel, forknum, blkno, (char *) page, true);\n> +\t}\n> +\n> +\tif (!empty && wstate->do_wal)\n> +\t\tlog_newpages(&(smgrrel)->smgr_rnode.node, forknum, num_pages,\n> +\t\t\t\tblocknums, pages, true);\n> +}\n> +\n> +void\n> +unbuffered_write(UnBufferedWriteState *wstate, SMgrRelation smgrrel, ForkNumber\n> +\t\tforknum, BlockNumber blocknum, Page page)\n> +{\n> 
+\tPageSetChecksumInplace(page, blocknum);\n> +\n> +\tsmgrwrite(smgrrel, forknum, blocknum, (char *) page, true);\n> +}\n\nSeem several of these should have some minimal documentation?\n\n\n> +/*\n> + * When writing data outside shared buffers, a concurrent CHECKPOINT can move\n> + * the redo pointer past our WAL entries and won't flush our data to disk. If\n> + * the database crashes before the data makes it to disk, our WAL won't be\n> + * replayed and the data will be lost.\n> + * Thus, if a CHECKPOINT begins between unbuffered_prep() and\n> + * unbuffered_finish(), the backend must fsync the data itself.\n> + */\n\nHm. The last sentence sounds like this happens conditionally, but it doesn't\nat this point.\n\n\n\n> +typedef struct UnBufferedWriteState\n> +{\n> +\t/*\n> +\t * When writing WAL-logged relation data outside of shared buffers, there\n> +\t * is a risk of a concurrent CHECKPOINT moving the redo pointer past the\n> +\t * data's associated WAL entries. To avoid this, callers in this situation\n> +\t * must fsync the pages they have written themselves. This is necessary\n> +\t * only if the relation is WAL-logged or in special cases such as the init\n> +\t * fork of an unlogged index.\n> +\t */\n> +\tbool do_wal;\n> +} UnBufferedWriteState;\n> +/*\n> + * prototypes for functions in directmgr.c\n> + */\n\nNewline in between end of struct and comment.\n\n> +extern void\n> +unbuffered_prep(UnBufferedWriteState *wstate, bool do_wal);\n\nIn headers we don't put the return type on a separate line :/\n\n\n\n\n\n> --- a/contrib/bloom/blinsert.c\n> +++ b/contrib/bloom/blinsert.c\n> @@ -173,7 +173,7 @@ blbuildempty(Relation index)\n> \t * internally. 
However, were this to be replaced with unbuffered_extend(),\n> \t * do_wal must be true to ensure the data is logged and fsync'd.\n> \t */\n> -\tunbuffered_prep(&wstate, true);\n> +\tunbuffered_prep(&wstate, true, false);\n\nThis makes me think this really should be a flag argument...\n\nI also don't like the current name of the parameter \"do_optimization\" doesn't\nexplain much.\n\n\n> +bool RedoRecPtrChanged(XLogRecPtr comparator_ptr)\n> +{\n\nnewline after return type.\n\n> void\n> -unbuffered_prep(UnBufferedWriteState *wstate, bool do_wal)\n> +unbuffered_prep(UnBufferedWriteState *wstate, bool do_wal, bool\n> +\t\tdo_optimization)\n\nSee earlier comments about documentation and parameter naming...\n\n\n> +\t * These callers can optionally use the following optimization: attempt to\n> +\t * use the sync request queue and fall back to fsync'ing the pages\n> +\t * themselves if the Redo pointer moves between the start and finish of\n> +\t * their write. In order to do this, they must set do_optimization to true\n> +\t * so that the redo pointer is saved before the write begins.\n> \t */\n\nWhen do we not want this?\n\n\n\n> From 17fb22142ade65fdbe8c90889e49d0be60ba45e4 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Fri, 4 Mar 2022 15:53:05 -0500\n> Subject: [PATCH v6 3/4] BTree index use unbuffered IO optimization\n> \n> While building a btree index, the backend can avoid fsync'ing all of the\n> pages if it uses the optimization introduced in a prior commit.\n> \n> This can substantially improve performance when many indexes are being\n> built during DDL operations.\n> ---\n> src/backend/access/nbtree/nbtree.c | 2 +-\n> src/backend/access/nbtree/nbtsort.c | 6 +++++-\n> 2 files changed, 6 insertions(+), 2 deletions(-)\n> \n> diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c\n> index 6b78acefbe..fc5cce4603 100644\n> --- a/src/backend/access/nbtree/nbtree.c\n> +++ 
b/src/backend/access/nbtree/nbtree.c\n> @@ -161,7 +161,7 @@ btbuildempty(Relation index)\n> \t * internally. However, were this to be replaced with unbuffered_extend(),\n> \t * do_wal must be true to ensure the data is logged and fsync'd.\n> \t */\n> -\tunbuffered_prep(&wstate, true, false);\n> +\tunbuffered_prep(&wstate, true, true);\n> \n> \t/* Construct metapage. */\n> \tmetapage = (Page) palloc(BLCKSZ);\n> diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c\n> index d6d0d4b361..f1b9e2e24e 100644\n> --- a/src/backend/access/nbtree/nbtsort.c\n> +++ b/src/backend/access/nbtree/nbtsort.c\n> @@ -1189,7 +1189,11 @@ _bt_load(BTWriteState *wstate, BTSpool *btspool, BTSpool *btspool2)\n> \tint64\t\ttuples_done = 0;\n> \tbool\t\tdeduplicate;\n> \n> -\tunbuffered_prep(&wstate->ub_wstate, wstate->btws_use_wal, false);\n> +\t/*\n> +\t * The fsync optimization done by directmgr is only relevant if\n> +\t * WAL-logging, so pass btws_use_wal for this parameter.\n> +\t */\n> +\tunbuffered_prep(&wstate->ub_wstate, wstate->btws_use_wal, wstate->btws_use_wal);\n> \n> \tdeduplicate = wstate->inskey->allequalimage && !btspool->isunique &&\n> \t\tBTGetDeduplicateItems(wstate->index);\n\nWhy just here?\n\n\n\n> From 377c195bccf2dd2529e64d0d453104485f7662b7 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Fri, 4 Mar 2022 15:52:45 -0500\n> Subject: [PATCH v6 4/4] Use shared buffers when possible for index build\n> \n> When there are not too many tuples, building the index in shared buffers\n> makes sense. 
It allows the buffer manager to handle how best to do the\n> IO.\n> ---\n\nPerhaps it'd be worth making this an independent patch that could be applied\nseparately?\n\n\n> src/backend/access/nbtree/nbtree.c | 32 ++--\n> src/backend/access/nbtree/nbtsort.c | 273 +++++++++++++++++++++-------\n> 2 files changed, 223 insertions(+), 82 deletions(-)\n> \n> diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c\n> index fc5cce4603..d3982b9388 100644\n> --- a/src/backend/access/nbtree/nbtree.c\n> +++ b/src/backend/access/nbtree/nbtree.c\n> @@ -152,34 +152,24 @@ void\n> btbuildempty(Relation index)\n> {\n> \tPage\t\tmetapage;\n> -\tUnBufferedWriteState wstate;\n> +\tBuffer metabuf;\n> \n> \t/*\n> -\t * Though this is an unlogged relation, pass do_wal=true since the init\n> -\t * fork of an unlogged index must be wal-logged and fsync'd. This currently\n> -\t * has no effect, as unbuffered_write() does not do log_newpage()\n> -\t * internally. However, were this to be replaced with unbuffered_extend(),\n> -\t * do_wal must be true to ensure the data is logged and fsync'd.\n> +\t * Allocate a buffer for metapage and initialize metapage.\n> \t */\n> -\tunbuffered_prep(&wstate, true, true);\n> -\n> -\t/* Construct metapage. */\n> -\tmetapage = (Page) palloc(BLCKSZ);\n> +\tmetabuf = ReadBufferExtended(index, INIT_FORKNUM, P_NEW, RBM_ZERO_AND_LOCK,\n> +\t\t\tNULL);\n> +\tmetapage = BufferGetPage(metabuf);\n> \t_bt_initmetapage(metapage, P_NONE, 0, _bt_allequalimage(index, false));\n> \n> \t/*\n> -\t * Write the page and log it. It might seem that an immediate sync would\n> -\t * be sufficient to guarantee that the file exists on disk, but recovery\n> -\t * itself might remove it while replaying, for example, an\n> -\t * XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record. 
Therefore, we need\n> -\t * this even when wal_level=minimal.\n> +\t * Mark metapage buffer as dirty and XLOG it\n> \t */\n> -\tunbuffered_write(&wstate, RelationGetSmgr(index), INIT_FORKNUM,\n> -\t\t\tBTREE_METAPAGE, metapage);\n> -\tlog_newpage(&RelationGetSmgr(index)->smgr_rnode.node, INIT_FORKNUM,\n> -\t\t\t\tBTREE_METAPAGE, metapage, true);\n> -\n> -\tunbuffered_finish(&wstate, RelationGetSmgr(index), INIT_FORKNUM);\n> +\tSTART_CRIT_SECTION();\n> +\tMarkBufferDirty(metabuf);\n> +\tlog_newpage_buffer(metabuf, true);\n> +\tEND_CRIT_SECTION();\n> +\t_bt_relbuf(index, metabuf);\n> }\n\nI don't understand why this patch changes btbuildempty()? That data is never\naccessed again, so it doesn't really seem beneficial to shuffle it through\nshared buffers, even if we benefit from using s_b for the main fork...\n\n\n\n> +\t/*\n> +\t * If not using shared buffers, for a WAL-logged relation, save the redo\n> +\t * pointer location in case a checkpoint begins during the index build.\n> +\t */\n> +\tif (wstate->_bt_bl_unbuffered_prep)\n> +\t\twstate->_bt_bl_unbuffered_prep(&wstate->ub_wstate,\n> +\t\t\t\twstate->btws_use_wal, wstate->btws_use_wal);\n\nCode would probably look cleaner if an empty callback were used when no\n_bt_bl_unbuffered_prep callback is needed.\n\n\n\n> /*\n> - * allocate workspace for a new, clean btree page, not linked to any siblings.\n> + * Set up workspace for a new, clean btree page, not linked to any siblings.\n> + * Caller must allocate the passed in page.\n\nMore interesting bit seems to be whether the passed in page needs to be\ninitialized?\n\n\n> @@ -1154,20 +1285,24 @@ _bt_uppershutdown(BTWriteState *wstate, BTPageState *state)\n> \t\t * back one slot. 
Then we can dump out the page.\n> \t\t */\n> \t\t_bt_slideleft(s->btps_page);\n> -\t\t_bt_blwritepage(wstate, s->btps_page, s->btps_blkno);\n> +\t\twstate->_bt_blwritepage(wstate, s->btps_page, s->btps_blkno, s->btps_buf);\n> +\t\ts->btps_buf = InvalidBuffer;\n> \t\ts->btps_page = NULL;\t/* writepage freed the workspace */\n> \t}\n\nDo we really have to add underscores even to struct members? That just looks\nwrong.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Mar 2022 10:32:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "It looks like this patch received a review from Andres and hasn't been\nupdated since. I'm not sure but the review looks to me like it's not\nready to commit and needs some cleanup or at least to check on a few\nthings.\n\nI guess it's not going to get bumped in the next few days so I'll move\nit to the next CF.\n\n\n", "msg_date": "Wed, 30 Mar 2022 14:54:23 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "On 05/03/2022 00:03, Melanie Plageman wrote:\n> On Wed, Mar 2, 2022 at 8:09 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>\n>> Rebased to appease cfbot.\n>>\n>> I ran these paches under a branch which shows code coverage in cirrus. It\n>> looks good to my eyes.\n>> https://api.cirrus-ci.com/v1/artifact/task/5212346552418304/coverage/coverage/00-index.html\n> \n> thanks for doing that and for the rebase! since I made updates, the\n> attached version 6 is also rebased.\n\nI'm surprised by all the changes in nbtsort.c to choose between using \nthe buffer manager or the new API. I would've expected the new API to \nabstract that away. 
Otherwise we need to copy that logic to all the \nindex AMs.\n\nI'd suggest an API along the lines of:\n\n/*\n * Start building a relation in bulk.\n *\n * If the relation is going to be small, we will use the buffer manager,\n * but if it's going to be large, this will skip the buffer manager and\n * write the pages directly to disk.\n */\nbulk_init(SmgrRelation smgr, BlockNumber estimated_size)\n\n/*\n * Extend relation by one page\n */\nbulk_extend(SmgrRelation, BlockNumber, Page)\n\n/*\n * Finish building the relation\n *\n * This will fsync() the data to disk, if required.\n */\nbulk_finish()\n\n\nBehind this interface, you can encapsulate the logic to choose whether \nto use the buffer manager or not. And I think bulk_extend() could do the \nWAL-logging too.\n\nOr you could make the interface look more like the buffer manager:\n\n/* as above */\nbulk_init(SmgrRelation smgr, BlockNumber estimated_size)\nbulk_finish()\n\n/*\n * Get a buffer for writing out a page.\n *\n * The contents of the buffer are uninitialized. The caller\n * must fill it in before releasing it.\n */\nBulkBuffer\nbulk_getbuf(SmgrRelation smgr, BlockNumber blkno)\n\n/*\n * Release buffer. 
It will be WAL-logged and written out to disk.\n * Not necessarily immediately, but at bulk_finish() at latest.\n *\n * NOTE: There is no way to read back the page after you release\n * it, until you finish the build with bulk_finish().\n */\nvoid\nbulk_releasebuf(SmgrRelation smgr, BulkBuffer buf)\n\n\nWith this interface, you could also batch multiple pages and WAL-log \nthem all in one WAL record with log_newpage_range(), for example.\n\n- Heikki\n\n\n", "msg_date": "Sat, 23 Jul 2022 12:34:54 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" }, { "msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3508/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 2 Aug 2022 11:54:21 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding smgrimmedsync() during nbtree index builds" } ]
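[Editor's aside between threads] The bulk-build interface Heikki proposes in the last message can be caricatured in a small toy model. To be clear, this is not PostgreSQL code: the class name, the page threshold, and the bookkeeping below are invented purely to show the intended shape — the init step decides once, from the estimated relation size, whether pages go through the buffer manager, and only the direct path needs the explicit fsync at finish.

```python
# Toy model of the proposed bulk_init()/bulk_extend()/bulk_finish() API.
# All names and the threshold value are invented for illustration.

BUFFERED_THRESHOLD = 64  # pages; an arbitrary stand-in, not from the thread


class BulkBuilder:
    def __init__(self, estimated_size):
        # Decide once: small builds go through the (simulated) buffer
        # manager, large builds bypass it and must fsync themselves.
        self.use_buffer_manager = estimated_size < BUFFERED_THRESHOLD
        self.pages = {}
        self.fsynced = False

    def extend(self, blkno, page):
        # The real proposal would also WAL-log the page here, possibly
        # batching several pages into one record via log_newpage_range().
        self.pages[blkno] = page

    def finish(self):
        # Only the unbuffered path needs the explicit fsync; buffered
        # pages are made durable by checkpoints.
        if not self.use_buffer_manager:
            self.fsynced = True
        return self.fsynced
```

The point of hiding the choice behind one interface is exactly what Heikki notes: every index AM would get the buffered-vs-direct decision for free instead of reimplementing it the way the nbtsort.c patch does.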
[ { "msg_contents": "I happened to notice that PQreset is documented thus:\n\n This function will close the connection to the server and attempt to\n reestablish a new connection to the same server, using all the same\n parameters previously used.\n\nSince we invented multi-host connection parameters, a reasonable person\nwould assume that \"to the same server\" means we promise to reconnect to\nthe same host we selected the first time. There is no such guarantee\nthough; the new connection attempt is done just like the first one,\nso it will select the first suitable server in the list.\n\nI think we should just drop that phrase. Alternatively we could decide\nthat the code's behavior is buggy, but I don't think it is. If, say,\nthe reason you need to reset is that your existing host died, you don't\nreally want libpq to refuse to select an alternative server.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 Jan 2021 17:32:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Very misleading documentation for PQreset()" }, { "msg_contents": "On Thu, Jan 21, 2021 at 05:32:56PM -0500, Tom Lane wrote:\n> I happened to notice that PQreset is documented thus:\n> \n> This function will close the connection to the server and attempt to\n> reestablish a new connection to the same server, using all the same\n> parameters previously used.\n> \n> Since we invented multi-host connection parameters, a reasonable person\n> would assume that \"to the same server\" means we promise to reconnect to\n> the same host we selected the first time. 
There is no such guarantee\n> though; the new connection attempt is done just like the first one,\n> so it will select the first suitable server in the list.\n> \n> I think we should just drop that phrase.\n\nI agree that dropping the phrase strictly improves that documentation.\n\n\n", "msg_date": "Thu, 21 Jan 2021 22:32:03 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Very misleading documentation for PQreset()" } ]
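[Editor's aside between threads] The behavior Tom describes — a reset attempt is "done just like the first one" and simply takes the first suitable server in the list — can be modeled in a few lines of plain Python. This is an illustrative sketch, not libpq code; `connect` and `reset` are invented names.

```python
# Sketch of multi-host connection fallback: every attempt, including the
# fresh attempt a reset performs, scans the host list from the start and
# takes the first host that works. Nothing remembers which host served
# the previous connection.

def connect(hosts, is_up):
    """Return the first host for which is_up(host) is true, else None."""
    for host in hosts:
        if is_up(host):
            return host
    return None


def reset(hosts, is_up):
    # A reset is just a new connection attempt with the same parameters.
    return connect(hosts, is_up)
```

So if the original host died, a reset usefully fails over to the next one — which is why the behavior is arguably not a bug and only the "to the same server" wording was misleading.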
[ { "msg_contents": "hi,\n\nnow noticed after the assignment with union ( https://www.postgresql.org/message-id/flat/20210105201257.f0d76aff%40mail.verfriemelt.org ), that the assignment with distinct is broken as well.\n\n\n\n DO $$\n DECLARE \n _test bool;\n BEGIN\n\n _test := DISTINCT a FROM ( VALUES ( (true), ( true ) ) )t(a);\n \n END $$;\n\ni would argue, that that's a way more common usecase than the union, which was merely bad code.\n\ntested with version 14~~devel~20210111.0540-1~299.gitce6a71f.pgdg110+1 from the apt repo\n\nwith kind regards,\nrichard", "msg_date": "Fri, 22 Jan 2021 09:21:19 +0100", "msg_from": "easteregg@verfriemelt.org", "msg_from_op": true, "msg_subject": "plpgsql variable assignment not supporting distinct anymore" }, { "msg_contents": "On Fri, Jan 22, 2021 at 9:21 AM <easteregg@verfriemelt.org> wrote:\n\n> hi,\n>\n> now noticed after the assignment with union (\n> https://www.postgresql.org/message-id/flat/20210105201257.f0d76aff%40mail.verfriemelt.org\n> ), that the assignment with distinct is broken as well.\n>\n>\n>\n> DO $$\n> DECLARE\n> _test bool;\n> BEGIN\n>\n> _test := DISTINCT a FROM ( VALUES ( (true), ( true ) ) )t(a);\n>\n> END $$;\n>\n> i would argue, that that's a way more common usecase than the union, which\n> was merely bad code.\n>\n\nWhat is the sense of this code?\n\nThis is strange with not well defined behavior (in dependency on data type\nthe result can depend on collate).\n\nMore - because this breaks simple expression optimization (10x), then the\ncode will be significantly slower than if you use IF or CASE.\n\nRegards\n\nPavel\n\n\n> tested with version 14~~devel~20210111.0540-1~299.gitce6a71f.pgdg110+1\n> from the apt repo\n>\n> with kind regards,\n> richard\n>", "msg_date": "Fri, 22 Jan 2021 09:43:19 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plpgsql variable assignment not supporting distinct anymore" } ]
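[Editor's note between threads, for readers hitting this change in v14] The same value can be computed with an explicit scalar subquery, which is ordinary documented syntax rather than the removed "bare SELECT tail" assignment form. A sketch, not verified here against a running server:

```sql
DO $$
DECLARE
    _test bool;
BEGIN
    -- explicit sub-SELECT instead of "_test := DISTINCT a FROM ...";
    -- as with any scalar subquery, this errors if more than one row results
    _test := (SELECT DISTINCT a FROM ( VALUES (true), (true) ) t(a));
END $$;
```

In this example the two identical rows collapse to a single distinct value, so the scalar subquery is well-defined.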
[ { "msg_contents": "Hi!\n\nin src/tools/make_diff/ there is a reference:\n\n\"If I use mkid (from ftp.postgreSQL.org), I can do:\"\n\nThere is no such thing on our download site, and I can't find what it\neven was at one point.\n\nWas this part of some other package, since removed?\n\nAnd maybe even more interestnig -- is there a point to this whole\nmake_diff directory at all in these days of git? Or should we just\nremove it rather than try to fix it?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 22 Jan 2021 12:56:10 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "mkid reference" }, { "msg_contents": "> On 22 Jan 2021, at 12:56, Magnus Hagander <magnus@hagander.net> wrote:\n\n> And maybe even more interestnig -- is there a point to this whole\n> make_diff directory at all in these days of git? Or should we just\n> remove it rather than try to fix it?\n\nThere's also src/tools/make_mkid which use this mkid tool. +1 for removing.\nIf anything, it seems better replaced by extended documentation on the existing\nwiki article [0] on how to use \"git format-patch\".\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://wiki.postgresql.org/wiki/Working_with_Git", "msg_date": "Fri, 22 Jan 2021 13:32:59 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: mkid reference" }, { "msg_contents": "On Fri, Jan 22, 2021 at 20:33, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 22 Jan 2021, at 12:56, Magnus Hagander <magnus@hagander.net> wrote:\n>\n> > And maybe even more interestnig -- is there a point to this whole\n> > make_diff directory at all in these days of git? Or should we just\n> > remove it rather than try to fix it?\n>\n> There's also src/tools/make_mkid which use this mkid tool. +1 for\n> removing.\n> If anything, it seems better replaced by extended documentation on the\n> existing\n> wiki article [0] on how to use \"git format-patch\".\n>\n\ndefinitely +1\n\n>", "msg_date": "Fri, 22 Jan 2021 22:06:01 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: mkid reference" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 22 Jan 2021, at 12:56, Magnus Hagander <magnus@hagander.net> wrote:\n>> And maybe even more interestnig -- is there a point to this whole\n>> make_diff directory at all in these days of git? Or should we just\n>> remove it rather than try to fix it?\n\n> There's also src/tools/make_mkid which use this mkid tool. +1 for removing.\n> If anything, it seems better replaced by extended documentation on the existing\n> wiki article [0] on how to use \"git format-patch\".\n\nI found man pages for mkid online --- it's apparently a ctags-like\ncode indexing tool, not something for patches. So maybe Bruce still\nuses it, or maybe not. But as long as we've also got make_ctags and\nmake_etags in there, I don't have a problem with leaving make_mkid.\n\nmake_diff, on the other hand, certainly looks like technology whose\ntime has passed.
I wonder about pgtest, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Jan 2021 13:07:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: mkid reference" }, { "msg_contents": "On Fri, Jan 22, 2021 at 7:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> >> On 22 Jan 2021, at 12:56, Magnus Hagander <magnus@hagander.net> wrote:\n> >> And maybe even more interestnig -- is there a point to this whole\n> >> make_diff directory at all in these days of git? Or should we just\n> >> remove it rather than try to fix it?\n>\n> > There's also src/tools/make_mkid which use this mkid tool. +1 for removing.\n> > If anything, it seems better replaced by extended documentation on the existing\n> > wiki article [0] on how to use \"git format-patch\".\n>\n> I found man pages for mkid online --- it's apparently a ctags-like\n> code indexing tool, not something for patches. So maybe Bruce still\n> uses it, or maybe not. But as long as we've also got make_ctags and\n> make_etags in there, I don't have a problem with leaving make_mkid.\n>\n> make_diff, on the other hand, certainly looks like technology whose\n> time has passed. I wonder about pgtest, too.\n\nI'll go kill make_diff then -- quicker than fixing the docs of it.\n\nAs for pgtest, that one looks a bit interesting as well -- but it's\nbeen patched on as late as 9.5 and in 2018, so it seems at least Bruce\nuses it :)\n\nWhile at it, what point is \"codelines\" adding?\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sun, 24 Jan 2021 14:20:58 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: mkid reference" }, { "msg_contents": "On Fri, Jan 22, 2021 at 01:07:36PM -0500, Tom Lane wrote:\n> > There's also src/tools/make_mkid which use this mkid tool. 
+1 for removing.\n> > If anything, it seems better replaced by extended documentation on the existing\n> > wiki article [0] on how to use \"git format-patch\".\n> \n> I found man pages for mkid online --- it's apparently a ctags-like\n> code indexing tool, not something for patches. So maybe Bruce still\n> uses it, or maybe not. But as long as we've also got make_ctags and\n\nYes, I do still use it, so I thought having a script to generate its\nindex files might be helpful to someone.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 25 Jan 2021 10:38:40 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: mkid reference" }, { "msg_contents": "On Sun, Jan 24, 2021 at 02:20:58PM +0100, Magnus Hagander wrote:\n> > I found man pages for mkid online --- it's apparently a ctags-like\n> > code indexing tool, not something for patches. So maybe Bruce still\n> > uses it, or maybe not. But as long as we've also got make_ctags and\n> > make_etags in there, I don't have a problem with leaving make_mkid.\n> >\n> > make_diff, on the other hand, certainly looks like technology whose\n> > time has passed. I wonder about pgtest, too.\n> \n> I'll go kill make_diff then -- quicker than fixing the docs of it.\n> \n> As for pgtest, that one looks a bit interesting as well -- but it's\n> been patched on as late as 9.5 and in 2018, so it seems at least Bruce\n> uses it :)\n\nYes, that is how I noticed the ecpg/preproc.y warning this past weekend.\n\n> While at it, what point is \"codelines\" adding?\n\nThat is the script I use to generate code line counts when comparing\nreleases. 
I thought it should be in the tree so others can reproduce my\nnumbers.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 25 Jan 2021 10:40:43 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: mkid reference" }, { "msg_contents": "On Mon, Jan 25, 2021 at 4:38 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Fri, Jan 22, 2021 at 01:07:36PM -0500, Tom Lane wrote:\n> > > There's also src/tools/make_mkid which use this mkid tool. +1 for removing.\n> > > If anything, it seems better replaced by extended documentation on the existing\n> > > wiki article [0] on how to use \"git format-patch\".\n> >\n> > I found man pages for mkid online --- it's apparently a ctags-like\n> > code indexing tool, not something for patches. So maybe Bruce still\n> > uses it, or maybe not. But as long as we've also got make_ctags and\n>\n> Yes, I do still use it, so I thought having a script to generate its\n> index files might be helpful to someone.\n\nWhere do you actually get it? The old docs (now removed) suggested\ngetting it off ftp.postgresql.org...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 26 Jan 2021 17:03:30 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: mkid reference" }, { "msg_contents": "On Tue, Jan 26, 2021 at 05:03:30PM +0100, Magnus Hagander wrote:\n> On Mon, Jan 25, 2021 at 4:38 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Fri, Jan 22, 2021 at 01:07:36PM -0500, Tom Lane wrote:\n> > > > There's also src/tools/make_mkid which use this mkid tool. 
+1 for removing.\n> > > > If anything, it seems better replaced by extended documentation on the existing\n> > > > wiki article [0] on how to use \"git format-patch\".\n> > >\n> > > I found man pages for mkid online --- it's apparently a ctags-like\n> > > code indexing tool, not something for patches. So maybe Bruce still\n> > > uses it, or maybe not. But as long as we've also got make_ctags and\n> >\n> > Yes, I do still use it, so I thought having a script to generate its\n> > index files might be helpful to someone.\n> \n> Where do you actually get it? The old docs (now removed) suggested\n> getting it off ftp.postgresql.org...\n\nNot sure why it was on our ftp site, since it is a GNU download:\n\n\thttps://www.gnu.org/software/idutils/\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 26 Jan 2021 12:58:27 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: mkid reference" }, { "msg_contents": "On Tue, Jan 26, 2021 at 6:58 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, Jan 26, 2021 at 05:03:30PM +0100, Magnus Hagander wrote:\n> > On Mon, Jan 25, 2021 at 4:38 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > On Fri, Jan 22, 2021 at 01:07:36PM -0500, Tom Lane wrote:\n> > > > > There's also src/tools/make_mkid which use this mkid tool. +1 for removing.\n> > > > > If anything, it seems better replaced by extended documentation on the existing\n> > > > > wiki article [0] on how to use \"git format-patch\".\n> > > >\n> > > > I found man pages for mkid online --- it's apparently a ctags-like\n> > > > code indexing tool, not something for patches. So maybe Bruce still\n> > > > uses it, or maybe not. 
But as long as we've also got make_ctags and\n> > >\n> > > Yes, I do still use it, so I thought having a script to generate its\n> > > index files might be helpful to someone.\n> >\n> > Where do you actually get it? The old docs (now removed) suggested\n> > getting it off ftp.postgresql.org...\n>\n> Not sure why it was on our ftp site, since it is a GNU download:\n>\n> https://www.gnu.org/software/idutils/\n\nAh, good. Then at least we now have it in the list archives for\nreference if somebody else searches for it :)\n\nAnd no, it wasn't actually on our ftp server. But it might have been\nat some point far far in the past...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 26 Jan 2021 22:05:34 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: mkid reference" }, { "msg_contents": "On Mon, Jan 25, 2021 at 4:40 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sun, Jan 24, 2021 at 02:20:58PM +0100, Magnus Hagander wrote:\n> > While at it, what point is \"codelines\" adding?\n>\n> That is the script I use to generate code line counts when comparing\n> releases. 
I thought it should be in the tree so others can reproduce my\n> numbers.\n\nNot that it particularly matters to keep it, but wouldn't something\nlike cloc give a much better number?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 26 Jan 2021 22:19:44 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: mkid reference" }, { "msg_contents": "On Tue, Jan 26, 2021 at 10:19:44PM +0100, Magnus Hagander wrote:\n> On Mon, Jan 25, 2021 at 4:40 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Sun, Jan 24, 2021 at 02:20:58PM +0100, Magnus Hagander wrote:\n> > > While at it, what point is \"codelines\" adding?\n> >\n> > That is the script I use to generate code line counts when comparing\n> > releases. I thought it should be in the tree so others can reproduce my\n> > numbers.\n> \n> Not that it particularly matters to keep it, but wouldn't something\n> like cloc give a much better number?\n\nYes, we could, but we didn't really have any criteria on exactly what to\ncount, so I just counted physical lines.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 26 Jan 2021 17:55:14 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: mkid reference" } ]
[ { "msg_contents": "I was recently surprised by the following inconsistencies in returned\ncommand tags for CTAS:\n\n\npostgres=# create table a as select 123;\nSELECT 1\n\npostgres=# create table b as select 123 with data;\nSELECT 1\n\npostgres=# create table c as select 123 with no data;\nCREATE TABLE AS\n\n\nShouldn't we make the first two tags (which are likely the same code\npath; I haven't looked) the same as the third? I can look into writing\nthe patch if desired.\n-- \nVik Fearing\n\n\n", "msg_date": "Fri, 22 Jan 2021 14:14:29 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "CTAS command tags" }, { "msg_contents": "Having row count right away is very useful in CTAS in analytical and GIS\nusage scenarios.\n\nпт, 22 сту 2021, 16:14 карыстальнік Vik Fearing <vik@postgresfriends.org>\nнапісаў:\n\n> I was recently surprised by the following inconsistencies in returned\n> command tags for CTAS:\n>\n>\n> postgres=# create table a as select 123;\n> SELECT 1\n>\n> postgres=# create table b as select 123 with data;\n> SELECT 1\n>\n> postgres=# create table c as select 123 with no data;\n> CREATE TABLE AS\n>\n>\n> Shouldn't we make the first two tags (which are likely the same code\n> path; I haven't looked) the same as the third? I can look into writing\n> the patch if desired.\n> --\n> Vik Fearing\n>\n>\n>\n", "msg_date": "Fri, 22 Jan 2021 16:19:06 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": false, "msg_subject": "Re: CTAS command tags" }, { "msg_contents": "On 1/22/21 2:19 PM, Darafei \"Komяpa\" Praliaskouski wrote:\n> Having row count right away is very useful in CTAS in analytical and GIS \n> usage scenarios.\n\nI can see that, but would it not work if it was:\n\nCREATE TABLE AS 1\n\nDisclaimer: I have not looked at the code so maybe there is some good \nreason that would not work.\n\nAndreas\n\n\n", "msg_date": "Fri, 22 Jan 2021 16:40:49 +0100", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: CTAS command tags" }, { "msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> On 1/22/21 2:19 PM, Darafei \"Komяpa\" Praliaskouski wrote:\n>> Having row count right away is very useful in CTAS in analytical and GIS \n>> usage scenarios.\n\n> I can see that, but would it not work if it was:\n> CREATE TABLE AS 1\n\nChanging the set of command tags that have counts attached would amount\nto a wire-protocol break, because clients such as libpq know which ones\ndo. So to standardize this as Vik wants, we'd have to make the WITH\nNO DATA case return \"SELECT 0\" (not 1, surely). That seems a little\nweird.\n\nI have a vague recollection that this has been discussed before,\nthough I lack the energy to go digging in the archives right now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Jan 2021 10:52:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CTAS command tags" } ]
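Tom Lane's wire-protocol point comes down to clients hard-coding which command tags carry a trailing row count. A minimal sketch of that client-side convention — the tag list below is an illustrative assumption, not the authoritative set a real client such as libpq ships with:

```python
# Tags assumed (for illustration only) to carry a trailing row count;
# real clients bake their own hard-coded knowledge into the parser.
TAGS_WITH_ROWCOUNT = {"SELECT", "INSERT", "UPDATE", "DELETE", "COPY", "MOVE", "FETCH"}

def rows_from_command_tag(tag):
    # Mimic client-side parsing in the spirit of libpq's PQcmdTuples:
    # return the trailing count for tags known to carry one, else None.
    # Because the "known" set is baked into clients, attaching a count
    # to a tag that never had one (e.g. "CREATE TABLE AS 1") is a
    # protocol-level change, not just a cosmetic one.
    parts = tag.split()
    if parts and parts[0] in TAGS_WITH_ROWCOUNT and parts[-1].isdigit():
        return int(parts[-1])
    return None
```

Under this convention "SELECT 1" yields a count while "CREATE TABLE AS" yields none — exactly the inconsistency the thread describes, and why the proposed standardization points toward "SELECT 0" rather than a new counted tag.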
[ { "msg_contents": "the code provided is just a little poc to get the error ( which i have not included with my first mail sorry. )\n\n ERROR: syntax error at or near \"DISTINCT\"\n LINE 8: _test := DISTINCT a FROM ( VALUES ( (true), ( true ) ) )...\n\n\nthe code in production looked like this:\n\n\n _resource_id := \n DISTINCT ti_resource_id\n FROM tabk.resource_timeline \n WHERE ti_a2_id = _ab2_id\n AND ti_type = 'task'\n ;\n\nthis is backed up by a trigger function, that will ensure to every instance with the same ti_a2_id exists only one ti_resource_id, hence the query can never fail due to more than one row beeing returned. but this syntax is not supported anymore, which will break BC. up until PG 13, the assignment statement was just an implizit SELECT <expression> Query.\nSince Tom Lane didn't mentioned this change in the other thread, i figured the devteam might not be aware of this chance.\n\ni can refactor this line into\n\n _resource_id := \n ti_resource_id\n FROM tabk.resource_timeline \n WHERE ti_a2_id = _ab2_id\n AND ti_type = 'task'\n GROUP BY ti_resource_id\n ;\n\nbut concerns about BC was already raised, although with UNION there might be far less people affected.\nwith kind regards, richard\n\n\n", "msg_date": "Fri, 22 Jan 2021 14:41:06 +0100", "msg_from": "easteregg@verfriemelt.org", "msg_from_op": true, "msg_subject": "Re: plpgsql variable assignment not supporting distinct anymore" }, { "msg_contents": "pá 22. 1. 2021 v 14:41 odesílatel <easteregg@verfriemelt.org> napsal:\n\n> the code provided is just a little poc to get the error ( which i have not\n> included with my first mail sorry. 
)\n>\n> ERROR: syntax error at or near \"DISTINCT\"\n> LINE 8: _test := DISTINCT a FROM ( VALUES ( (true), ( true ) ) )...\n>\n>\n> the code in production looked like this:\n>\n>\n> _resource_id :=\n> DISTINCT ti_resource_id\n> FROM tabk.resource_timeline\n> WHERE ti_a2_id = _ab2_id\n> AND ti_type = 'task'\n> ;\n>\n> this is backed up by a trigger function, that will ensure to every\n> instance with the same ti_a2_id exists only one ti_resource_id, hence the\n> query can never fail due to more than one row beeing returned. but this\n> syntax is not supported anymore, which will break BC. up until PG 13, the\n> assignment statement was just an implizit SELECT <expression> Query.\n> Since Tom Lane didn't mentioned this change in the other thread, i figured\n> the devteam might not be aware of this chance.\n>\n> i can refactor this line into\n>\n> _resource_id :=\n> ti_resource_id\n> FROM tabk.resource_timeline\n> WHERE ti_a2_id = _ab2_id\n> AND ti_type = 'task'\n> GROUP BY ti_resource_id\n> ;\n>\n> but concerns about BC was already raised, although with UNION there might\n> be far less people affected.\n> with kind regards, richard\n>\n\nProbably the fix is not hard, but it is almost the same situation as the\nUNION case. The result of your code is not deterministic\n\nIf there are more different ti_resource_id then some values can be randomly\nignored - when hash agg is used.\n\nThe safe fix should be\n\n_resource_id := (SELECT ti_resource_id\n FROM tabk.resource_timeline\n WHERE ti_a2_id = _ab2_id\n AND ti_type = 'task');\n\nand you get an exception if some values are ignored. Or if you want to\nignore some values, then you can write\n\n_resource_id := (SELECT MIN(ti_resource_id) -- or MAX\n FROM tabk.resource_timeline\n WHERE ti_a2_id = _ab2_id\n AND ti_type = 'task');\n\nUsing DISTINCT is not a good solution.\n\npá 22. 1. 
2021 v 14:41 odesílatel <easteregg@verfriemelt.org> napsal:the code provided is just a little poc to get the error ( which i have not included with my first mail sorry. )\n\n   ERROR:  syntax error at or near \"DISTINCT\"\n   LINE 8:     _test := DISTINCT a FROM ( VALUES ( (true), ( true ) ) )...\n\n\nthe code in production looked like this:\n\n\n    _resource_id := \n        DISTINCT ti_resource_id\n           FROM tabk.resource_timeline \n          WHERE ti_a2_id = _ab2_id\n            AND ti_type = 'task'\n    ;\n\nthis is backed up by a trigger function, that will ensure to every instance with the same ti_a2_id exists only one ti_resource_id, hence the query can never fail due to more than one row beeing returned. but this syntax is not supported anymore, which will break BC. up until PG 13, the assignment statement was just an implizit SELECT <expression> Query.\nSince Tom Lane didn't mentioned this change in the other thread, i figured the devteam might not be aware of this chance.\n\ni can refactor this line into\n\n    _resource_id := \n        ti_resource_id\n       FROM tabk.resource_timeline \n      WHERE ti_a2_id = _ab2_id\n        AND ti_type = 'task'\n      GROUP BY ti_resource_id\n    ;\n\nbut concerns about BC was already raised, although with UNION there might be far less people affected.\nwith kind regards, richardProbably the fix is not hard, but it is almost the same situation as the UNION case. The result of your code is not deterministicIf there are more different ti_resource_id then some values can be randomly ignored - when hash agg is used.The safe fix should be_resource_id := (SELECT ti_resource_id\n       FROM tabk.resource_timeline \n      WHERE ti_a2_id = _ab2_id\n        AND ti_type = 'task');and you get an exception if some values are ignored. 
Or if you want to ignore some values, then you can write_resource_id := (SELECT MIN(ti_resource_id) -- or MAX\n       FROM tabk.resource_timeline \n      WHERE ti_a2_id = _ab2_id\n        AND ti_type = 'task');Using DISTINCT is not a good solution.", "msg_date": "Fri, 22 Jan 2021 14:58:50 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plpgsql variable assignment not supporting distinct anymore" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> pá 22. 1. 2021 v 14:41 odesílatel <easteregg@verfriemelt.org> napsal:\n>> ERROR: syntax error at or near \"DISTINCT\"\n>> LINE 8: _test := DISTINCT a FROM ( VALUES ( (true), ( true ) ) )...\n\n> Using DISTINCT is not a good solution.\n\nYeah. It wouldn't be as painful to support this in the grammar\nas it would be for UNION et al, so maybe we should just do it.\nBut I still find this to be mighty ugly plpgsql code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Jan 2021 13:59:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plpgsql variable assignment not supporting distinct anymore" } ]
[ { "msg_contents": "> Probably the fix is not hard, but it is almost the same situation as the\n> UNION case. The result of your code is not deterministic\n> \n> If there are more different ti_resource_id then some values can be randomly\n> ignored - when hash agg is used.\n> \n> The safe fix should be\n> \n> _resource_id := (SELECT ti_resource_id\n> FROM tabk.resource_timeline\n> WHERE ti_a2_id = _ab2_id\n> AND ti_type = 'task');\n> \n> and you get an exception if some values are ignored. Or if you want to\n> ignore some values, then you can write\n> \n> _resource_id := (SELECT MIN(ti_resource_id) -- or MAX\n> FROM tabk.resource_timeline\n> WHERE ti_a2_id = _ab2_id\n> AND ti_type = 'task');\n> \n> Using DISTINCT is not a good solution.\n> \n\nin my usecase it was perfectly fine, because there is a constraint ensuring that here can never be more than on ti_resource_id at any given time for a given _ab2_id.\nalso, whenever there would be more data ( for example if the constraint trigger would have a bug ) you will get an error like this:\n\n\n create table a ( t int );\n insert into a values (1),(2);\n\n do $$\n declare _t int;\n begin\n _t := distinct t from a;\n end $$;\n\n Query failed: ERROR: query \"SELECT distinct t from a\" returned more than one row\n CONTEXT: PL/pgSQL function inline_code_block line 4 at assignment\n\nno doubt, that this piece of code might not look optimal at first glance, but i like my code to fail fast. because with the min() approach, you will not notice, that the constraint trigger is not doing its job, until you get other strange sideeffects down the road.\n\nrichard\n\n\n", "msg_date": "Fri, 22 Jan 2021 15:10:23 +0100", "msg_from": "easteregg@verfriemelt.org", "msg_from_op": true, "msg_subject": "Re: plpgsql variable assignment not supporting distinct anymore" }, { "msg_contents": "pá 22. 1. 
2021 v 15:10 odesílatel <easteregg@verfriemelt.org> napsal:\n\n> > Probably the fix is not hard, but it is almost the same situation as the\n> > UNION case. The result of your code is not deterministic\n> >\n> > If there are more different ti_resource_id then some values can be\n> randomly\n> > ignored - when hash agg is used.\n> >\n> > The safe fix should be\n> >\n> > _resource_id := (SELECT ti_resource_id\n> >        FROM tabk.resource_timeline\n> >       WHERE ti_a2_id = _ab2_id\n> >         AND ti_type = 'task');\n> >\n> > and you get an exception if some values are ignored. Or if you want to\n> > ignore some values, then you can write\n> >\n> > _resource_id := (SELECT MIN(ti_resource_id) -- or MAX\n> >        FROM tabk.resource_timeline\n> >       WHERE ti_a2_id = _ab2_id\n> >         AND ti_type = 'task');\n> >\n> > Using DISTINCT is not a good solution.\n> >\n>\n> in my usecase it was perfectly fine, because there is a constraint\n> ensuring that here can never be more than on ti_resource_id at any given\n> time for a given _ab2_id.\n> also, whenever there would be more data ( for example if the constraint\n> trigger would have a bug ) you will get an error like this:\n>\n>\n> create table a ( t int );\n> insert into a values (1),(2);\n>\n> do $$\n> declare _t int;\n> begin\n> _t := distinct t from a;\n> end $$;\n>\n> Query failed: ERROR: query \"SELECT distinct t from a\" returned more\n> than one row\n> CONTEXT: PL/pgSQL function inline_code_block line 4 at assignment\n>\n> no doubt, that this piece of code might not look optimal at first glance,\n> but i like my code to fail fast. because with the min() approach, you will\n> not notice, that the constraint trigger is not doing its job, until you get\n> other strange sideeffects down the road.\n>\n\nok\n\nthen you don't need to use group by or DISTINCT\n\njust use\n\n_t := (SELECT ...);\n\nThe performance will be same and less obfuscate and you will not use\nundocumented feature\n\nRegards\n\nPavel\n\n\n\n\n> richard\n>\n", "msg_date": "Fri, 22 Jan 2021 15:27:00 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plpgsql variable assignment not supporting distinct anymore" } ]
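The fail-fast behavior richard relies on — a plpgsql assignment raising an error when its query returns more than one row — can be stated as a rule on its own; a hypothetical sketch of just that semantics (not PostgreSQL code, only the check it enforces):

```python
def assign_single(rows):
    # Mimic the plpgsql assignment semantics discussed above: a query
    # feeding an assignment may yield zero rows (the variable becomes
    # NULL/None) or exactly one row, but more than one is an error --
    # unlike MIN()/MAX() or a de-duplicating DISTINCT, which would
    # silently pick a value and hide the underlying data problem.
    if len(rows) > 1:
        raise ValueError('query returned more than one row')
    return rows[0] if rows else None
```

Under this rule, a buggy constraint trigger that lets a second ti_resource_id slip in surfaces immediately as an error, instead of as a silently chosen value further down the road.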
[ { "msg_contents": "Fixes:\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2 -I../../../../src/include -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk -c -o fd.o fd.c\nfd.c:3661:10: warning: 'pwritev' is only available on macOS 11.0 or newer [-Wunguarded-availability-new]\n part = pg_pwritev(fd, iov, iovcnt, offset);\n ^~~~~~~~~~\n../../../../src/include/port/pg_iovec.h:49:20: note: expanded from macro 'pg_pwritev'\n ^~~~~~~\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk/usr/include/sys/uio.h:104:9: note: 'pwritev' has been marked as being introduced in macOS 11.0\n here, but the deployment target is macOS 10.15.0\nssize_t pwritev(int, const struct iovec *, int, off_t) __DARWIN_NOCANCEL(pwritev) __API_AVAILABLE(macos(11.0), ios(14.0), watchos(7.0), tvos(14.0));\n ^\nfd.c:3661:10: note: enclose 'pwritev' in a __builtin_available check to silence this warning\n part = pg_pwritev(fd, iov, iovcnt, offset);\n ^~~~~~~~~~\n../../../../src/include/port/pg_iovec.h:49:20: note: expanded from macro 'pg_pwritev'\n ^~~~~~~\n1 warning generated.\n\nThis results in a runtime error:\nrunning bootstrap script ... 
dyld: lazy symbol binding failed: Symbol not found: _pwritev\n Referenced from: /usr/local/pgsql/bin/postgres\n Expected in: /usr/lib/libSystem.B.dylib\n\ndyld: Symbol not found: _pwritev\n Referenced from: /usr/local/pgsql/bin/postgres\n Expected in: /usr/lib/libSystem.B.dylib\n\nchild process was terminated by signal 6: Abort trap: 6\n\nTo fix this we set -Werror=unguarded-availability-new so that a declaration\ncheck for preadv/pwritev will fail if the symbol is unavailable on the requested\nSDK version.\n---\nChanges v2 -> v3:\n - Replace compile check with AC_CHECK_DECLS\n - Fix preadv detection as well\nChanges v1 -> v2:\n - Add AC_LIBOBJ(pwritev) when pwritev not available\n - set -Werror=unguarded-availability-new for CXX flags as well\n---\n configure | 164 ++++++++++++++++++++++++++++++------\n configure.ac | 9 +-\n src/include/pg_config.h.in | 14 +--\n src/include/port/pg_iovec.h | 4 +-\n src/tools/msvc/Solution.pm | 4 +-\n 5 files changed, 157 insertions(+), 38 deletions(-)\n\ndiff --git a/configure b/configure\nindex 8af4b99021..07a9b08d80 100755\n--- a/configure\n+++ b/configure\n@@ -5373,6 +5373,98 @@ if test x\"$pgac_cv_prog_CC_cflags__Werror_vla\" = x\"yes\"; then\n fi\n \n \n+ # Prevent usage of symbols marked as newer than our target.\n+\n+{ $as_echo \"$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports -Werror=unguarded-availability-new, for CFLAGS\" >&5\n+$as_echo_n \"checking whether ${CC} supports -Werror=unguarded-availability-new, for CFLAGS... \" >&6; }\n+if ${pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new+:} false; then :\n+ $as_echo_n \"(cached) \" >&6\n+else\n+ pgac_save_CFLAGS=$CFLAGS\n+pgac_save_CC=$CC\n+CC=${CC}\n+CFLAGS=\"${CFLAGS} -Werror=unguarded-availability-new\"\n+ac_save_c_werror_flag=$ac_c_werror_flag\n+ac_c_werror_flag=yes\n+cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n+/* end confdefs.h. 
*/\n+\n+int\n+main ()\n+{\n+\n+ ;\n+ return 0;\n+}\n+_ACEOF\n+if ac_fn_c_try_compile \"$LINENO\"; then :\n+ pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new=yes\n+else\n+ pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new=no\n+fi\n+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext\n+ac_c_werror_flag=$ac_save_c_werror_flag\n+CFLAGS=\"$pgac_save_CFLAGS\"\n+CC=\"$pgac_save_CC\"\n+fi\n+{ $as_echo \"$as_me:${as_lineno-$LINENO}: result: $pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new\" >&5\n+$as_echo \"$pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new\" >&6; }\n+if test x\"$pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new\" = x\"yes\"; then\n+ CFLAGS=\"${CFLAGS} -Werror=unguarded-availability-new\"\n+fi\n+\n+\n+ { $as_echo \"$as_me:${as_lineno-$LINENO}: checking whether ${CXX} supports -Werror=unguarded-availability-new, for CXXFLAGS\" >&5\n+$as_echo_n \"checking whether ${CXX} supports -Werror=unguarded-availability-new, for CXXFLAGS... \" >&6; }\n+if ${pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new+:} false; then :\n+ $as_echo_n \"(cached) \" >&6\n+else\n+ pgac_save_CXXFLAGS=$CXXFLAGS\n+pgac_save_CXX=$CXX\n+CXX=${CXX}\n+CXXFLAGS=\"${CXXFLAGS} -Werror=unguarded-availability-new\"\n+ac_save_cxx_werror_flag=$ac_cxx_werror_flag\n+ac_cxx_werror_flag=yes\n+ac_ext=cpp\n+ac_cpp='$CXXCPP $CPPFLAGS'\n+ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\n+ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\n+ac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n+\n+cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n+/* end confdefs.h. 
*/\n+\n+int\n+main ()\n+{\n+\n+ ;\n+ return 0;\n+}\n+_ACEOF\n+if ac_fn_cxx_try_compile \"$LINENO\"; then :\n+ pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new=yes\n+else\n+ pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new=no\n+fi\n+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext\n+ac_ext=c\n+ac_cpp='$CPP $CPPFLAGS'\n+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\n+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\n+ac_compiler_gnu=$ac_cv_c_compiler_gnu\n+\n+ac_cxx_werror_flag=$ac_save_cxx_werror_flag\n+CXXFLAGS=\"$pgac_save_CXXFLAGS\"\n+CXX=\"$pgac_save_CXX\"\n+fi\n+{ $as_echo \"$as_me:${as_lineno-$LINENO}: result: $pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new\" >&5\n+$as_echo \"$pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new\" >&6; }\n+if test x\"$pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new\" = x\"yes\"; then\n+ CXXFLAGS=\"${CXXFLAGS} -Werror=unguarded-availability-new\"\n+fi\n+\n+\n # -Wvla is not applicable for C++\n \n { $as_echo \"$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports -Wendif-labels, for CFLAGS\" >&5\n@@ -15646,6 +15738,52 @@ cat >>confdefs.h <<_ACEOF\n _ACEOF\n \n \n+# AC_REPLACE_FUNCS does not respect the deployment target on macOS\n+ac_fn_c_check_decl \"$LINENO\" \"preadv\" \"ac_cv_have_decl_preadv\" \"#include <sys/uio.h>\n+\"\n+if test \"x$ac_cv_have_decl_preadv\" = xyes; then :\n+ ac_have_decl=1\n+else\n+ ac_have_decl=0\n+fi\n+\n+cat >>confdefs.h <<_ACEOF\n+#define HAVE_DECL_PREADV $ac_have_decl\n+_ACEOF\n+if test $ac_have_decl = 1; then :\n+\n+else\n+ case \" $LIBOBJS \" in\n+ *\" preadv.$ac_objext \"* ) ;;\n+ *) LIBOBJS=\"$LIBOBJS preadv.$ac_objext\"\n+ ;;\n+esac\n+\n+fi\n+\n+ac_fn_c_check_decl \"$LINENO\" \"pwritev\" \"ac_cv_have_decl_pwritev\" \"#include <sys/uio.h>\n+\"\n+if test \"x$ac_cv_have_decl_pwritev\" = xyes; then :\n+ ac_have_decl=1\n+else\n+ ac_have_decl=0\n+fi\n+\n+cat >>confdefs.h 
<<_ACEOF\n+#define HAVE_DECL_PWRITEV $ac_have_decl\n+_ACEOF\n+if test $ac_have_decl = 1; then :\n+\n+else\n+ case \" $LIBOBJS \" in\n+ *\" pwritev.$ac_objext \"* ) ;;\n+ *) LIBOBJS=\"$LIBOBJS pwritev.$ac_objext\"\n+ ;;\n+esac\n+\n+fi\n+\n+\n ac_fn_c_check_decl \"$LINENO\" \"RTLD_GLOBAL\" \"ac_cv_have_decl_RTLD_GLOBAL\" \"#include <dlfcn.h>\n \"\n if test \"x$ac_cv_have_decl_RTLD_GLOBAL\" = xyes; then :\n@@ -15845,19 +15983,6 @@ esac\n \n fi\n \n-ac_fn_c_check_func \"$LINENO\" \"preadv\" \"ac_cv_func_preadv\"\n-if test \"x$ac_cv_func_preadv\" = xyes; then :\n- $as_echo \"#define HAVE_PREADV 1\" >>confdefs.h\n-\n-else\n- case \" $LIBOBJS \" in\n- *\" preadv.$ac_objext \"* ) ;;\n- *) LIBOBJS=\"$LIBOBJS preadv.$ac_objext\"\n- ;;\n-esac\n-\n-fi\n-\n ac_fn_c_check_func \"$LINENO\" \"pwrite\" \"ac_cv_func_pwrite\"\n if test \"x$ac_cv_func_pwrite\" = xyes; then :\n $as_echo \"#define HAVE_PWRITE 1\" >>confdefs.h\n@@ -15871,19 +15996,6 @@ esac\n \n fi\n \n-ac_fn_c_check_func \"$LINENO\" \"pwritev\" \"ac_cv_func_pwritev\"\n-if test \"x$ac_cv_func_pwritev\" = xyes; then :\n- $as_echo \"#define HAVE_PWRITEV 1\" >>confdefs.h\n-\n-else\n- case \" $LIBOBJS \" in\n- *\" pwritev.$ac_objext \"* ) ;;\n- *) LIBOBJS=\"$LIBOBJS pwritev.$ac_objext\"\n- ;;\n-esac\n-\n-fi\n-\n ac_fn_c_check_func \"$LINENO\" \"random\" \"ac_cv_func_random\"\n if test \"x$ac_cv_func_random\" = xyes; then :\n $as_echo \"#define HAVE_RANDOM 1\" >>confdefs.h\ndiff --git a/configure.ac b/configure.ac\nindex 868a94c9ba..0cd1ee8909 100644\n--- a/configure.ac\n+++ b/configure.ac\n@@ -494,6 +494,9 @@ if test \"$GCC\" = yes -a \"$ICC\" = no; then\n AC_SUBST(PERMIT_DECLARATION_AFTER_STATEMENT)\n # Really don't want VLAs to be used in our dialect of C\n PGAC_PROG_CC_CFLAGS_OPT([-Werror=vla])\n+ # Prevent usage of symbols marked as newer than our target.\n+ PGAC_PROG_CC_CFLAGS_OPT([-Werror=unguarded-availability-new])\n+ PGAC_PROG_CXX_CFLAGS_OPT([-Werror=unguarded-availability-new])\n # -Wvla is not applicable for C++\n 
PGAC_PROG_CC_CFLAGS_OPT([-Wendif-labels])\n PGAC_PROG_CXX_CFLAGS_OPT([-Wendif-labels])\n@@ -1705,6 +1708,10 @@ AC_CHECK_DECLS([strlcat, strlcpy, strnlen])\n # This is probably only present on macOS, but may as well check always\n AC_CHECK_DECLS(F_FULLFSYNC, [], [], [#include <fcntl.h>])\n \n+# AC_REPLACE_FUNCS does not respect the deployment target on macOS\n+AC_CHECK_DECLS([preadv], [], [AC_LIBOBJ(preadv)], [#include <sys/uio.h>])\n+AC_CHECK_DECLS([pwritev], [], [AC_LIBOBJ(pwritev)], [#include <sys/uio.h>])\n+\n AC_CHECK_DECLS([RTLD_GLOBAL, RTLD_NOW], [], [], [#include <dlfcn.h>])\n \n AC_CHECK_TYPE([struct sockaddr_in6],\n@@ -1737,9 +1744,7 @@ AC_REPLACE_FUNCS(m4_normalize([\n \tlink\n \tmkdtemp\n \tpread\n-\tpreadv\n \tpwrite\n-\tpwritev\n \trandom\n \tsrandom\n \tstrlcat\ndiff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in\nindex f4d9f3b408..9c30d008c9 100644\n--- a/src/include/pg_config.h.in\n+++ b/src/include/pg_config.h.in\n@@ -142,6 +142,14 @@\n don't. */\n #undef HAVE_DECL_POSIX_FADVISE\n \n+/* Define to 1 if you have the declaration of `preadv', and to 0 if you don't.\n+ */\n+#undef HAVE_DECL_PREADV\n+\n+/* Define to 1 if you have the declaration of `pwritev', and to 0 if you\n+ don't. */\n+#undef HAVE_DECL_PWRITEV\n+\n /* Define to 1 if you have the declaration of `RTLD_GLOBAL', and to 0 if you\n don't. */\n #undef HAVE_DECL_RTLD_GLOBAL\n@@ -412,9 +420,6 @@\n /* Define to 1 if you have the `pread' function. */\n #undef HAVE_PREAD\n \n-/* Define to 1 if you have the `preadv' function. */\n-#undef HAVE_PREADV\n-\n /* Define to 1 if you have the `pstat' function. */\n #undef HAVE_PSTAT\n \n@@ -433,9 +438,6 @@\n /* Define to 1 if you have the `pwrite' function. */\n #undef HAVE_PWRITE\n \n-/* Define to 1 if you have the `pwritev' function. */\n-#undef HAVE_PWRITEV\n-\n /* Define to 1 if you have the `random' function. 
*/\n #undef HAVE_RANDOM\n \ndiff --git a/src/include/port/pg_iovec.h b/src/include/port/pg_iovec.h\nindex 365d605a9b..760ec4980c 100644\n--- a/src/include/port/pg_iovec.h\n+++ b/src/include/port/pg_iovec.h\n@@ -39,13 +39,13 @@ struct iovec\n /* Define a reasonable maximum that is safe to use on the stack. */\n #define PG_IOV_MAX Min(IOV_MAX, 32)\n \n-#ifdef HAVE_PREADV\n+#if HAVE_DECL_PREADV\n #define pg_preadv preadv\n #else\n extern ssize_t pg_preadv(int fd, const struct iovec *iov, int iovcnt, off_t offset);\n #endif\n \n-#ifdef HAVE_PWRITEV\n+#if HAVE_DECL_PWRITEV\n #define pg_pwritev pwritev\n #else\n extern ssize_t pg_pwritev(int fd, const struct iovec *iov, int iovcnt, off_t offset);\ndiff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm\nindex 2f28de0355..7c235f227d 100644\n--- a/src/tools/msvc/Solution.pm\n+++ b/src/tools/msvc/Solution.pm\n@@ -329,14 +329,14 @@ sub GenerateFiles\n \t\tHAVE_PPC_LWARX_MUTEX_HINT => undef,\n \t\tHAVE_PPOLL => undef,\n \t\tHAVE_PREAD => undef,\n-\t\tHAVE_PREADV => undef,\n+\t\tHAVE_DECL_PREADV => 0,\n \t\tHAVE_PSTAT => undef,\n \t\tHAVE_PS_STRINGS => undef,\n \t\tHAVE_PTHREAD => undef,\n \t\tHAVE_PTHREAD_IS_THREADED_NP => undef,\n \t\tHAVE_PTHREAD_PRIO_INHERIT => undef,\n \t\tHAVE_PWRITE => undef,\n-\t\tHAVE_PWRITEV => undef,\n+\t\tHAVE_DECL_PWRITEV => 0,\n \t\tHAVE_RANDOM => undef,\n \t\tHAVE_READLINE_H => undef,\n \t\tHAVE_READLINE_HISTORY_H => undef,\n-- \n2.30.0\n\n\n\n", "msg_date": "Fri, 22 Jan 2021 12:32:30 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." 
}, { "msg_contents": "On Fri, Jan 22, 2021 at 12:32 PM James Hilliard\n<james.hilliard1@gmail.com> wrote:\n>\n> Fixes:\n> gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2 -I../../../../src/include -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk -c -o fd.o fd.c\n> fd.c:3661:10: warning: 'pwritev' is only available on macOS 11.0 or newer [-Wunguarded-availability-new]\n> part = pg_pwritev(fd, iov, iovcnt, offset);\n> ^~~~~~~~~~\n> ../../../../src/include/port/pg_iovec.h:49:20: note: expanded from macro 'pg_pwritev'\n> ^~~~~~~\n> /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk/usr/include/sys/uio.h:104:9: note: 'pwritev' has been marked as being introduced in macOS 11.0\n> here, but the deployment target is macOS 10.15.0\n> ssize_t pwritev(int, const struct iovec *, int, off_t) __DARWIN_NOCANCEL(pwritev) __API_AVAILABLE(macos(11.0), ios(14.0), watchos(7.0), tvos(14.0));\n> ^\n> fd.c:3661:10: note: enclose 'pwritev' in a __builtin_available check to silence this warning\n> part = pg_pwritev(fd, iov, iovcnt, offset);\n> ^~~~~~~~~~\n> ../../../../src/include/port/pg_iovec.h:49:20: note: expanded from macro 'pg_pwritev'\n> ^~~~~~~\n> 1 warning generated.\n>\n> This results in a runtime error:\n> running bootstrap script ... 
dyld: lazy symbol binding failed: Symbol not found: _pwritev\n> Referenced from: /usr/local/pgsql/bin/postgres\n> Expected in: /usr/lib/libSystem.B.dylib\n>\n> dyld: Symbol not found: _pwritev\n> Referenced from: /usr/local/pgsql/bin/postgres\n> Expected in: /usr/lib/libSystem.B.dylib\n>\n> child process was terminated by signal 6: Abort trap: 6\n>\n> To fix this we set -Werror=unguarded-availability-new so that a declaration\n> check for preadv/pwritev will fail if the symbol is unavailable on the requested\n> SDK version.\n> ---\n> Changes v2 -> v3:\n> - Replace compile check with AC_CHECK_DECLS\n> - Fix preadv detection as well\n> Changes v1 -> v2:\n> - Add AC_LIBOBJ(pwritev) when pwritev not available\n> - set -Werror=unguarded-availability-new for CXX flags as well\n> ---\n> configure | 164 ++++++++++++++++++++++++++++++------\n> configure.ac | 9 +-\n> src/include/pg_config.h.in | 14 +--\n> src/include/port/pg_iovec.h | 4 +-\n> src/tools/msvc/Solution.pm | 4 +-\n> 5 files changed, 157 insertions(+), 38 deletions(-)\n>\n> diff --git a/configure b/configure\n> index 8af4b99021..07a9b08d80 100755\n> --- a/configure\n> +++ b/configure\n> @@ -5373,6 +5373,98 @@ if test x\"$pgac_cv_prog_CC_cflags__Werror_vla\" = x\"yes\"; then\n> fi\n>\n>\n> + # Prevent usage of symbols marked as newer than our target.\n> +\n> +{ $as_echo \"$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports -Werror=unguarded-availability-new, for CFLAGS\" >&5\n> +$as_echo_n \"checking whether ${CC} supports -Werror=unguarded-availability-new, for CFLAGS... \" >&6; }\n> +if ${pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new+:} false; then :\n> + $as_echo_n \"(cached) \" >&6\n> +else\n> + pgac_save_CFLAGS=$CFLAGS\n> +pgac_save_CC=$CC\n> +CC=${CC}\n> +CFLAGS=\"${CFLAGS} -Werror=unguarded-availability-new\"\n> +ac_save_c_werror_flag=$ac_c_werror_flag\n> +ac_c_werror_flag=yes\n> +cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n> +/* end confdefs.h. 
*/\n> +\n> +int\n> +main ()\n> +{\n> +\n> + ;\n> + return 0;\n> +}\n> +_ACEOF\n> +if ac_fn_c_try_compile \"$LINENO\"; then :\n> + pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new=yes\n> +else\n> + pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new=no\n> +fi\n> +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext\n> +ac_c_werror_flag=$ac_save_c_werror_flag\n> +CFLAGS=\"$pgac_save_CFLAGS\"\n> +CC=\"$pgac_save_CC\"\n> +fi\n> +{ $as_echo \"$as_me:${as_lineno-$LINENO}: result: $pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new\" >&5\n> +$as_echo \"$pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new\" >&6; }\n> +if test x\"$pgac_cv_prog_CC_cflags__Werror_unguarded_availability_new\" = x\"yes\"; then\n> + CFLAGS=\"${CFLAGS} -Werror=unguarded-availability-new\"\n> +fi\n> +\n> +\n> + { $as_echo \"$as_me:${as_lineno-$LINENO}: checking whether ${CXX} supports -Werror=unguarded-availability-new, for CXXFLAGS\" >&5\n> +$as_echo_n \"checking whether ${CXX} supports -Werror=unguarded-availability-new, for CXXFLAGS... \" >&6; }\n> +if ${pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new+:} false; then :\n> + $as_echo_n \"(cached) \" >&6\n> +else\n> + pgac_save_CXXFLAGS=$CXXFLAGS\n> +pgac_save_CXX=$CXX\n> +CXX=${CXX}\n> +CXXFLAGS=\"${CXXFLAGS} -Werror=unguarded-availability-new\"\n> +ac_save_cxx_werror_flag=$ac_cxx_werror_flag\n> +ac_cxx_werror_flag=yes\n> +ac_ext=cpp\n> +ac_cpp='$CXXCPP $CPPFLAGS'\n> +ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\n> +ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\n> +ac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n> +\n> +cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n> +/* end confdefs.h. 
*/\n> +\n> +int\n> +main ()\n> +{\n> +\n> + ;\n> + return 0;\n> +}\n> +_ACEOF\n> +if ac_fn_cxx_try_compile \"$LINENO\"; then :\n> + pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new=yes\n> +else\n> + pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new=no\n> +fi\n> +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext\n> +ac_ext=c\n> +ac_cpp='$CPP $CPPFLAGS'\n> +ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\n> +ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\n> +ac_compiler_gnu=$ac_cv_c_compiler_gnu\n> +\n> +ac_cxx_werror_flag=$ac_save_cxx_werror_flag\n> +CXXFLAGS=\"$pgac_save_CXXFLAGS\"\n> +CXX=\"$pgac_save_CXX\"\n> +fi\n> +{ $as_echo \"$as_me:${as_lineno-$LINENO}: result: $pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new\" >&5\n> +$as_echo \"$pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new\" >&6; }\n> +if test x\"$pgac_cv_prog_CXX_cxxflags__Werror_unguarded_availability_new\" = x\"yes\"; then\n> + CXXFLAGS=\"${CXXFLAGS} -Werror=unguarded-availability-new\"\n> +fi\n> +\n> +\n> # -Wvla is not applicable for C++\n>\n> { $as_echo \"$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports -Wendif-labels, for CFLAGS\" >&5\n> @@ -15646,6 +15738,52 @@ cat >>confdefs.h <<_ACEOF\n> _ACEOF\n>\n>\n> +# AC_REPLACE_FUNCS does not respect the deployment target on macOS\n> +ac_fn_c_check_decl \"$LINENO\" \"preadv\" \"ac_cv_have_decl_preadv\" \"#include <sys/uio.h>\n> +\"\n> +if test \"x$ac_cv_have_decl_preadv\" = xyes; then :\n> + ac_have_decl=1\n> +else\n> + ac_have_decl=0\n> +fi\n> +\n> +cat >>confdefs.h <<_ACEOF\n> +#define HAVE_DECL_PREADV $ac_have_decl\n> +_ACEOF\n> +if test $ac_have_decl = 1; then :\n> +\n> +else\n> + case \" $LIBOBJS \" in\n> + *\" preadv.$ac_objext \"* ) ;;\n> + *) LIBOBJS=\"$LIBOBJS preadv.$ac_objext\"\n> + ;;\n> +esac\n> +\n> +fi\n> +\n> +ac_fn_c_check_decl \"$LINENO\" \"pwritev\" \"ac_cv_have_decl_pwritev\" \"#include <sys/uio.h>\n> +\"\n> 
+if test \"x$ac_cv_have_decl_pwritev\" = xyes; then :\n> + ac_have_decl=1\n> +else\n> + ac_have_decl=0\n> +fi\n> +\n> +cat >>confdefs.h <<_ACEOF\n> +#define HAVE_DECL_PWRITEV $ac_have_decl\n> +_ACEOF\n> +if test $ac_have_decl = 1; then :\n> +\n> +else\n> + case \" $LIBOBJS \" in\n> + *\" pwritev.$ac_objext \"* ) ;;\n> + *) LIBOBJS=\"$LIBOBJS pwritev.$ac_objext\"\n> + ;;\n> +esac\n> +\n> +fi\n> +\n> +\n> ac_fn_c_check_decl \"$LINENO\" \"RTLD_GLOBAL\" \"ac_cv_have_decl_RTLD_GLOBAL\" \"#include <dlfcn.h>\n> \"\n> if test \"x$ac_cv_have_decl_RTLD_GLOBAL\" = xyes; then :\n> @@ -15845,19 +15983,6 @@ esac\n>\n> fi\n>\n> -ac_fn_c_check_func \"$LINENO\" \"preadv\" \"ac_cv_func_preadv\"\n> -if test \"x$ac_cv_func_preadv\" = xyes; then :\n> - $as_echo \"#define HAVE_PREADV 1\" >>confdefs.h\n> -\n> -else\n> - case \" $LIBOBJS \" in\n> - *\" preadv.$ac_objext \"* ) ;;\n> - *) LIBOBJS=\"$LIBOBJS preadv.$ac_objext\"\n> - ;;\n> -esac\n> -\n> -fi\n> -\n> ac_fn_c_check_func \"$LINENO\" \"pwrite\" \"ac_cv_func_pwrite\"\n> if test \"x$ac_cv_func_pwrite\" = xyes; then :\n> $as_echo \"#define HAVE_PWRITE 1\" >>confdefs.h\n> @@ -15871,19 +15996,6 @@ esac\n>\n> fi\n>\n> -ac_fn_c_check_func \"$LINENO\" \"pwritev\" \"ac_cv_func_pwritev\"\n> -if test \"x$ac_cv_func_pwritev\" = xyes; then :\n> - $as_echo \"#define HAVE_PWRITEV 1\" >>confdefs.h\n> -\n> -else\n> - case \" $LIBOBJS \" in\n> - *\" pwritev.$ac_objext \"* ) ;;\n> - *) LIBOBJS=\"$LIBOBJS pwritev.$ac_objext\"\n> - ;;\n> -esac\n> -\n> -fi\n> -\n> ac_fn_c_check_func \"$LINENO\" \"random\" \"ac_cv_func_random\"\n> if test \"x$ac_cv_func_random\" = xyes; then :\n> $as_echo \"#define HAVE_RANDOM 1\" >>confdefs.h\n> diff --git a/configure.ac b/configure.ac\n> index 868a94c9ba..0cd1ee8909 100644\n> --- a/configure.ac\n> +++ b/configure.ac\n> @@ -494,6 +494,9 @@ if test \"$GCC\" = yes -a \"$ICC\" = no; then\n> AC_SUBST(PERMIT_DECLARATION_AFTER_STATEMENT)\n> # Really don't want VLAs to be used in our dialect of C\n> 
PGAC_PROG_CC_CFLAGS_OPT([-Werror=vla])\n> + # Prevent usage of symbols marked as newer than our target.\n> + PGAC_PROG_CC_CFLAGS_OPT([-Werror=unguarded-availability-new])\n> + PGAC_PROG_CXX_CFLAGS_OPT([-Werror=unguarded-availability-new])\n> # -Wvla is not applicable for C++\n> PGAC_PROG_CC_CFLAGS_OPT([-Wendif-labels])\n> PGAC_PROG_CXX_CFLAGS_OPT([-Wendif-labels])\n> @@ -1705,6 +1708,10 @@ AC_CHECK_DECLS([strlcat, strlcpy, strnlen])\n> # This is probably only present on macOS, but may as well check always\n> AC_CHECK_DECLS(F_FULLFSYNC, [], [], [#include <fcntl.h>])\n>\n> +# AC_REPLACE_FUNCS does not respect the deployment target on macOS\n> +AC_CHECK_DECLS([preadv], [], [AC_LIBOBJ(preadv)], [#include <sys/uio.h>])\n> +AC_CHECK_DECLS([pwritev], [], [AC_LIBOBJ(pwritev)], [#include <sys/uio.h>])\nDoes this approach using a different standard autoconf probe adequately address\nthe maintainability issue regarding the AC_LANG_PROGRAM autoconf probes in my\nprevious patch brought up in https://postgr.es/m/915981.1611254324@sss.pgh.pa.us\nor should I look for another alternative?\n> +\n> AC_CHECK_DECLS([RTLD_GLOBAL, RTLD_NOW], [], [], [#include <dlfcn.h>])\n>\n> AC_CHECK_TYPE([struct sockaddr_in6],\n> @@ -1737,9 +1744,7 @@ AC_REPLACE_FUNCS(m4_normalize([\n> link\n> mkdtemp\n> pread\n> - preadv\n> pwrite\n> - pwritev\n> random\n> srandom\n> strlcat\n> diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in\n> index f4d9f3b408..9c30d008c9 100644\n> --- a/src/include/pg_config.h.in\n> +++ b/src/include/pg_config.h.in\n> @@ -142,6 +142,14 @@\n> don't. */\n> #undef HAVE_DECL_POSIX_FADVISE\n>\n> +/* Define to 1 if you have the declaration of `preadv', and to 0 if you don't.\n> + */\n> +#undef HAVE_DECL_PREADV\n> +\n> +/* Define to 1 if you have the declaration of `pwritev', and to 0 if you\n> + don't. */\n> +#undef HAVE_DECL_PWRITEV\n> +\n> /* Define to 1 if you have the declaration of `RTLD_GLOBAL', and to 0 if you\n> don't. 
*/\n> #undef HAVE_DECL_RTLD_GLOBAL\n> @@ -412,9 +420,6 @@\n> /* Define to 1 if you have the `pread' function. */\n> #undef HAVE_PREAD\n>\n> -/* Define to 1 if you have the `preadv' function. */\n> -#undef HAVE_PREADV\n> -\n> /* Define to 1 if you have the `pstat' function. */\n> #undef HAVE_PSTAT\n>\n> @@ -433,9 +438,6 @@\n> /* Define to 1 if you have the `pwrite' function. */\n> #undef HAVE_PWRITE\n>\n> -/* Define to 1 if you have the `pwritev' function. */\n> -#undef HAVE_PWRITEV\n> -\n> /* Define to 1 if you have the `random' function. */\n> #undef HAVE_RANDOM\n>\n> diff --git a/src/include/port/pg_iovec.h b/src/include/port/pg_iovec.h\n> index 365d605a9b..760ec4980c 100644\n> --- a/src/include/port/pg_iovec.h\n> +++ b/src/include/port/pg_iovec.h\n> @@ -39,13 +39,13 @@ struct iovec\n> /* Define a reasonable maximum that is safe to use on the stack. */\n> #define PG_IOV_MAX Min(IOV_MAX, 32)\n>\n> -#ifdef HAVE_PREADV\n> +#if HAVE_DECL_PREADV\nI could rework this to use HAVE_PWRITEV like before if that is\npreferable, I just\nwent with this since it is the default for AC_CHECK_DECLS.\n> #define pg_preadv preadv\n> #else\n> extern ssize_t pg_preadv(int fd, const struct iovec *iov, int iovcnt, off_t offset);\n> #endif\n>\n> -#ifdef HAVE_PWRITEV\n> +#if HAVE_DECL_PWRITEV\n> #define pg_pwritev pwritev\n> #else\n> extern ssize_t pg_pwritev(int fd, const struct iovec *iov, int iovcnt, off_t offset);\n> diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm\n> index 2f28de0355..7c235f227d 100644\n> --- a/src/tools/msvc/Solution.pm\n> +++ b/src/tools/msvc/Solution.pm\n> @@ -329,14 +329,14 @@ sub GenerateFiles\n> HAVE_PPC_LWARX_MUTEX_HINT => undef,\n> HAVE_PPOLL => undef,\n> HAVE_PREAD => undef,\n> - HAVE_PREADV => undef,\n> + HAVE_DECL_PREADV => 0,\n> HAVE_PSTAT => undef,\n> HAVE_PS_STRINGS => undef,\n> HAVE_PTHREAD => undef,\n> HAVE_PTHREAD_IS_THREADED_NP => undef,\n> HAVE_PTHREAD_PRIO_INHERIT => undef,\n> HAVE_PWRITE => undef,\n> - HAVE_PWRITEV => undef,\n> 
+ HAVE_DECL_PWRITEV => 0,\n> HAVE_RANDOM => undef,\n> HAVE_READLINE_H => undef,\n> HAVE_READLINE_HISTORY_H => undef,\n> --\n> 2.30.0\n>\n\n\n", "msg_date": "Sat, 30 Jan 2021 23:59:51 -0700", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "Hi James,\n\nOn 1/31/21 1:59 AM, James Hilliard wrote:\n> On Fri, Jan 22, 2021 at 12:32 PM James Hilliard\n> <james.hilliard1@gmail.com> wrote:\n>>\n>> Fixes:\n>> gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2 -I../../../../src/include -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk -c -o fd.o fd.c\n>> fd.c:3661:10: warning: 'pwritev' is only available on macOS 11.0 or newer [-Wunguarded-availability-new]\n>> part = pg_pwritev(fd, iov, iovcnt, offset);\n>> ^~~~~~~~~~\n\nIt would be better to provide this patch as an attachment so the cfbot \n(http://commitfest.cputube.org/) can test it.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 29 Mar 2021 09:52:10 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." 
}, { "msg_contents": "Should it work if I just attach it to the thread like this?\n\nOn Mon, Mar 29, 2021 at 7:52 AM David Steele <david@pgmasters.net> wrote:\n>\n> Hi James,\n>\n> On 1/31/21 1:59 AM, James Hilliard wrote:\n> > On Fri, Jan 22, 2021 at 12:32 PM James Hilliard\n> > <james.hilliard1@gmail.com> wrote:\n> >>\n> >> Fixes:\n> >> gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -O2 -I../../../../src/include -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk -c -o fd.o fd.c\n> >> fd.c:3661:10: warning: 'pwritev' is only available on macOS 11.0 or newer [-Wunguarded-availability-new]\n> >> part = pg_pwritev(fd, iov, iovcnt, offset);\n> >> ^~~~~~~~~~\n>\n> It would be better to provide this patch as an attachment so the cfbot\n> (http://commitfest.cputube.org/) can test it.\n>\n> Regards,\n> --\n> -David\n> david@pgmasters.net", "msg_date": "Mon, 29 Mar 2021 11:37:16 -0600", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "On Tue, Mar 30, 2021 at 6:37 AM James Hilliard\n<james.hilliard1@gmail.com> wrote:\n> Should it work if I just attach it to the thread like this?\n\nYes. 
It automatically tries patches that are attached to threads that\nare registered on commitfest.postgresql.org on 4 OSes, and we can see\nthat it succeeded, and we can inspect the configure output and see\nthat only the two clang-based systems detected and used the new\nunguarded-availability-new flags and used them.\n\nThis should be alphabetised better:\n\n HAVE_PREAD => undef,\n- HAVE_PREADV => undef,\n+ HAVE_DECL_PREADV => 0,\n HAVE_PSTAT => undef,\n\nSo the question here is really: do we want to support Apple cross-SDK\nbuilds, in our configure scripts? It costs very little to switch from\ntraditional \"does-this-symbol-exist?\" tests to testing declarations,\nso no objections here.\n\nI doubt people will remember to do this for other new syscall probes\nthough, so it might be a matter of discussing it case-by-case when a\nproblem shows up. For example, I recently added another new test,\nspecifically targeting macOS: pthread_barrier_wait. One day they\nmight add it to libSystem and we might need to tweak that one\nsimilarly.\n\n\n", "msg_date": "Tue, 30 Mar 2021 11:10:01 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "On Mon, Mar 29, 2021 at 4:10 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Tue, Mar 30, 2021 at 6:37 AM James Hilliard\n> <james.hilliard1@gmail.com> wrote:\n> > Should it work if I just attach it to the thread like this?\n>\n> Yes. 
It automatically tries patches that are attached to threads that\n> are registered on commitfest.postgresql.org on 4 OSes, and we can see\n> that it succeeded, and we can inspect the configure output and see\n> that only the two clang-based systems detected and used the new\n> unguarded-availability-new flags and used them.\nThat sounds right, the flag is clang specific from my understanding.\n>\n> This should be alphabetised better:\n>\n> HAVE_PREAD => undef,\n> - HAVE_PREADV => undef,\n> + HAVE_DECL_PREADV => 0,\n> HAVE_PSTAT => undef,\nShould I resend with that changed or can it just be fixed when applied?\n>\n> So the question here is really: do we want to support Apple cross-SDK\n> builds, in our configure scripts? It costs very little to switch from\n> traditional \"does-this-symbol-exist?\" tests to testing declarations,\n> so no objections here.\nWell this adds support for the target availability setting, which applies\nto effectively all Apple SDK's, so it's really more of a cross target issue\nrather than a cross SDK issue than anything from what I can tell.\n\nEffectively without this change setting MACOSX_DEPLOYMENT_TARGET\nwould not work properly on any Apple SDK's. Currently postgres essentially\nis relying on the command line tools not supporting newer targets than\nthe host system, which is not something that appears to be guaranteed\nat all by Apple from my understanding. 
This is because the current\ndetection technique is unable to detect if a symbol is restricted by\nMACOSX_DEPLOYMENT_TARGET, so it will essentially always\nuse the newest SDK symbols even if they are only available to a\nMACOSX_DEPLOYMENT_TARGET newer than the configured\ndeployment target.\n\nSay for example if I want to build for a 10.14 target from a 10.15 host\nwith the standard 10.15 command line tools, with this change that is\npossible simply by setting MACOSX_DEPLOYMENT_TARGET,\notherwise it will only build for a 10.14 target from a SDK that does not\nhave 10.15 only symbols present at all, even if that SDK has full\nsupport for 10.14 targets.\n>\n> I doubt people will remember to do this for other new syscall probes\n> though, so it might be a matter of discussing it case-by-case when a\n> problem shows up. For example, I recently added another new test,\n> specifically targeting macOS: pthread_barrier_wait. One day they\n> might add it to libSystem and we might need to tweak that one\n> similarly.\nYeah, this technique should allow for trivially supporting new symbol\ntarget availability in the OSX SDK fairly easily, as long as you have\nthe unguarded-availability-new flag set the compile tests respect the\ncompilers target availability settings if the appropriate header is included.\n\n\n", "msg_date": "Mon, 29 Mar 2021 17:31:51 -0600", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "On Tue, Mar 30, 2021 at 12:32 PM James Hilliard\n<james.hilliard1@gmail.com> wrote:\n> Should I resend with that changed or can it just be fixed when applied?\n\nI'll move it when committing. 
I'll let this patch sit for another day\nto see if any other objections show up.\n\n\n", "msg_date": "Tue, 30 Mar 2021 12:54:11 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I'll move it when committing. I'll let this patch sit for another day\n> to see if any other objections show up.\n\nFWIW, I remain fairly strongly against this, precisely because of the\npoint that it requires us to start using a randomly different\nfeature-probing technology anytime Apple decides that they're going to\nimplement some standard API that they didn't before. Even if it works\neverywhere for preadv/pwritev (which we won't know in advance of\nbuildfarm testing, and maybe not then, since detection failures will\nprobably be silent), it seems likely that we'll hit some case in the\nfuture where this interacts badly with some other platform's weirdness.\nWe haven't claimed in the past to support MACOSX_DEPLOYMENT_TARGET,\nand I'm not sure we should start now. How many people actually care\nabout that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Mar 2021 01:58:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "On Mon, Mar 29, 2021 at 11:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I'll move it when committing. 
I'll let this patch sit for another day\n> > to see if any other objections show up.\n>\n> FWIW, I remain fairly strongly against this, precisely because of the\n> point that it requires us to start using a randomly different\n> feature-probing technology anytime Apple decides that they're going to\n> implement some standard API that they didn't before.\n\nThis effectively works automatically as long as the feature check includes\nthe appropriate header. This works with AC_CHECK_DECLS or any\nother autoconf probes that include the appropriate headers.\n\n> Even if it works\n> everywhere for preadv/pwritev (which we won't know in advance of\n> buildfarm testing, and maybe not then, since detection failures will\n> probably be silent), it seems likely that we'll hit some case in the\n> future where this interacts badly with some other platform's weirdness.\n\nWell part of the motivation for setting unguarded-availability-new is so\nthat we get a hard error instead of just a deployment target version\nincompatibility warning, the current situation does actually result in\nweird runtime errors, this change makes those much more obvious\nbuild errors.\n\nTo ensure these errors surface quickly one could have some of the\nbuildfarm tests set different MACOSX_DEPLOYMENT_TARGET's\nwhich should then hard error during the build if incompatible symbols are\naccidentally included.\n\nI don't think this is likely to cause issues for other platforms unless they use\nthe availability attribute incorrectly see: https://reviews.llvm.org/D34264\n\n> We haven't claimed in the past to support MACOSX_DEPLOYMENT_TARGET,\n> and I'm not sure we should start now. 
How many people actually care\n> about that?\n\nSeems kinda important for anyone who wants to build postgres\ncompatible with targets older than the host system.\n\n\n", "msg_date": "Tue, 30 Mar 2021 00:39:42 -0600", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "On Tue, Mar 30, 2021 at 7:39 PM James Hilliard\n<james.hilliard1@gmail.com> wrote:\n> On Mon, Mar 29, 2021 at 11:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > We haven't claimed in the past to support MACOSX_DEPLOYMENT_TARGET,\n> > and I'm not sure we should start now. How many people actually care\n> > about that?\n>\n> Seems kinda important for anyone who wants to build postgres\n> compatible with targets older than the host system.\n\nPersonally I'm mostly concerned about making it easy for new\ncontributors to get a working dev system going on a super common\nplatform without dealing with hard-to-diagnose errors, than people who\nactually want a different target as a deliberate choice. Do I\nunderstand correctly that there a period of time each year when major\nupgrades come out of sync and lots of people finish up running a\ntoolchain and OS with this problem for a while due to the default\ntarget not matching? If so I wonder if other projects are running\ninto this with AC_REPLACE_FUNCS and what they're doing.\n\nI suppose an alternative strategy would be to try to detect the\nmismatch and spit out a friendlier warning, if we decide we're not\ngoing to support such builds.\n\n\n", "msg_date": "Wed, 31 Mar 2021 13:43:15 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." 
}, { "msg_contents": "On Tue, Mar 30, 2021 at 6:43 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Tue, Mar 30, 2021 at 7:39 PM James Hilliard\n> <james.hilliard1@gmail.com> wrote:\n> > On Mon, Mar 29, 2021 at 11:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > We haven't claimed in the past to support MACOSX_DEPLOYMENT_TARGET,\n> > > and I'm not sure we should start now. How many people actually care\n> > > about that?\n> >\n> > Seems kinda important for anyone who wants to build postgres\n> > compatible with targets older than the host system.\n>\n> Personally I'm mostly concerned about making it easy for new\n> contributors to get a working dev system going on a super common\n> platform without dealing with hard-to-diagnose errors, than people who\n> actually want a different target as a deliberate choice.\n\nYeah, that's where I was running into this. Currently we build for the max\ndeployment target available in the SDK, regardless of if that is the deployment\ntarget actually set, the compiler by default automatically sets the deployment\ntarget to the build host but if the SDK supports newer deployment targets\nthat's where things break down.\n\n> Do I\n> understand correctly that there a period of time each year when major\n> upgrades come out of sync and lots of people finish up running a\n> toolchain and OS with this problem for a while due to the default\n> target not matching?\n\nWell you can hit this if you try and build against a toolchain that supports\ntargets newer than the host pretty easily, although I think postgres\ntries to use the cli tools SDK by default which appears somewhat less\nprone to this issue(although I don't think this behavior is guaranteed).\n\n> If so I wonder if other projects are running\n> into this with AC_REPLACE_FUNCS and what they're doing.\n\nWell I did come up with another approach, which uses AC_LANG_PROGRAM\ninstead of AC_CHECK_DECLS that might be 
better\nhttps://lists.gnu.org/archive/html/autoconf-patches/2021-02/msg00007.html\n\nHowever I didn't submit that version here since it uses a custom probe via\nAC_LANG_PROGRAM instead of a standard probe like AC_CHECK_DECLS\nwhich Tom Lane said would be a maintenance issue, at least with this\nAC_CHECK_DECLS method we can avoid using any non-standard probes:\nhttps://postgr.es/m/915981.1611254324%40sss.pgh.pa.us\n\n\n>\n> I suppose an alternative strategy would be to try to detect the\n> mismatch and spit out a friendlier warning, if we decide we're not\n> going to support such builds.\n\n\n", "msg_date": "Tue, 30 Mar 2021 19:15:10 -0600", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Personally I'm mostly concerned about making it easy for new\n> contributors to get a working dev system going on a super common\n> platform without dealing with hard-to-diagnose errors, than people who\n> actually want a different target as a deliberate choice. Do I\n> understand correctly that there a period of time each year when major\n> upgrades come out of sync and lots of people finish up running a\n> toolchain and OS with this problem for a while due to the default\n> target not matching? If so I wonder if other projects are running\n> into this with AC_REPLACE_FUNCS and what they're doing.\n\nYeah, we've seen this happen at least a couple of times, though\nit was only during this past cycle that we (I anyway) entirely\nunderstood what was happening.\n\nThe patches we committed in January (4823621db, 9d23c15a0, 50bebc1ae)\nto improve our PG_SYSROOT selection heuristics should theoretically\nimprove the situation ... though I admit I won't have a lot of\nconfidence in them till we've been through a couple more rounds of\nasynchronous-XCode-and-macOS releases. 
Still, I feel that we\nought to leave that code alone until we see how it does.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Mar 2021 21:51:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "On Tue, Mar 30, 2021 at 7:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Personally I'm mostly concerned about making it easy for new\n> > contributors to get a working dev system going on a super common\n> > platform without dealing with hard-to-diagnose errors, than people who\n> > actually want a different target as a deliberate choice. Do I\n> > understand correctly that there a period of time each year when major\n> > upgrades come out of sync and lots of people finish up running a\n> > toolchain and OS with this problem for a while due to the default\n> > target not matching? If so I wonder if other projects are running\n> > into this with AC_REPLACE_FUNCS and what they're doing.\n>\n> Yeah, we've seen this happen at least a couple of times, though\n> it was only during this past cycle that we (I anyway) entirely\n> understood what was happening.\n>\n> The patches we committed in January (4823621db, 9d23c15a0, 50bebc1ae)\n> to improve our PG_SYSROOT selection heuristics should theoretically\n> improve the situation ... though I admit I won't have a lot of\n> confidence in them till we've been through a couple more rounds of\n> asynchronous-XCode-and-macOS releases. 
Still, I feel that we\n> ought to leave that code alone until we see how it does.\n\nI mean, we know that it will still break under a number of common\ncircumstances so I think we should be fixing the root cause(improper\ndetection of target deployment API availability in probes) in some\nway as this will probably continue to be an issue otherwise, we already\nknow that improving PG_SYSROOT selection can not fix the root issue\nbut rather tries to workaround it in a way that is pretty much guaranteed\nto be brittle.\n\nIs there any approach to fixing the root cause of this issue that you think\nwould be acceptable?\n\n>\n> regards, tom lane\n\n\n", "msg_date": "Tue, 30 Mar 2021 20:19:22 -0600", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "Hi,\n\nI see this issue persist when I compile PG v14 beta1 on macOS Apple M1\nusing macOS 11.1 SDK. Even though the build didn't fail, the execution of\ninitdb on macOS 10.15 failed with the same error. Here is the snippet of\nthe build log:\n\n--\n\n> gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv\n> -Wno-unused-command-line-argument -g -isysroot\n> /Library/Developer/CommandLineTools/SDKs/MacOSX11.1.sdk\n> -mmacosx-version-min=10.14 -arch x86_64 -arch arm64 -O2 -I. 
-I.\n> -I../../../src/include -I/opt/local/Current/include -isysroot\n> /Library/Developer/CommandLineTools/SDKs/MacOSX11.1.sdk\n> -I/opt/local/20210406/include/libxml2\n> -I/opt/local/Current/include/libxml2 -I/opt/local/Current/include\n> -I/opt/local/Current/include/openssl/ -c -o backup_manifest.o\n> backup_manifest.c\n> fd.c:3740:10: warning: 'pwritev' is only available on macOS 11.0 or newer\n> [-Wunguarded-availability-new]\n> part = pg_pwritev(fd, iov, iovcnt, offset);\n> ^~~~~~~~~~\n> ../../../../src/include/port/pg_iovec.h:49:20: note: expanded from macro\n> 'pg_pwritev'\n> #define pg_pwritev pwritev\n> ^~~~~~~\n> /Library/Developer/CommandLineTools/SDKs/MacOSX11.1.sdk/usr/include/sys/uio.h:104:9:\n> note: 'pwritev' has been marked as being introduced in macOS 11.0 here, but\n> the deployment target is macOS 10.14.0\n> ssize_t pwritev(int, const struct iovec *, int, off_t)\n> __DARWIN_NOCANCEL(pwritev) __API_AVAILABLE(macos(11.0), ios(14.0),\n> watchos(7.0), tvos(14.0));\n> ^\n> fd.c:3740:10: note: enclose 'pwritev' in a __builtin_available check to\n> silence this warning\n> part = pg_pwritev(fd, iov, iovcnt, offset);\n> ^~~~~~~~~~\n> ../../../../src/include/port/pg_iovec.h:49:20: note: expanded from macro\n> 'pg_pwritev'\n> #define pg_pwritev pwritev\n\n--\n\ninitdb failure:\n--\n\n> The database cluster will be initialized with locales\n> COLLATE: C\n> CTYPE: UTF-8\n> MESSAGES: C\n> MONETARY: C\n> NUMERIC: C\n> TIME: C\n> The default database encoding has accordingly been set to \"UTF8\".\n> initdb: could not find suitable text search configuration for locale\n> \"UTF-8\"\n> The default text search configuration will be set to \"simple\".\n> Data page checksums are disabled.\n> creating directory /tmp/data ... ok\n> creating subdirectories ... ok\n> selecting dynamic shared memory implementation ... posix\n> selecting default max_connections ... 100\n> selecting default shared_buffers ... 128MB\n> selecting default time zone ... 
Asia/Kolkata\n> creating configuration files ... ok\n> running bootstrap script ... dyld: lazy symbol binding failed: Symbol not\n> found: _pwritev\n> Referenced from: /Library/PostgreSQL/14/bin/postgres\n> Expected in: /usr/lib/libSystem.B.dylib\n> dyld: Symbol not found: _pwritev\n> Referenced from: /Library/PostgreSQL/14/bin/postgres\n> Expected in: /usr/lib/libSystem.B.dylib\n> child process was terminated by signal 6: Abort trap: 6\n> initdb: removing data directory \"/tmp/data\"\n\n--\n\nOn Wed, Mar 31, 2021 at 7:49 AM James Hilliard <james.hilliard1@gmail.com>\nwrote:\n\n> On Tue, Mar 30, 2021 at 7:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > Personally I'm mostly concerned about making it easy for new\n> > > contributors to get a working dev system going on a super common\n> > > platform without dealing with hard-to-diagnose errors, than people who\n> > > actually want a different target as a deliberate choice. Do I\n> > > understand correctly that there a period of time each year when major\n> > > upgrades come out of sync and lots of people finish up running a\n> > > toolchain and OS with this problem for a while due to the default\n> > > target not matching? If so I wonder if other projects are running\n> > > into this with AC_REPLACE_FUNCS and what they're doing.\n> >\n> > Yeah, we've seen this happen at least a couple of times, though\n> > it was only during this past cycle that we (I anyway) entirely\n> > understood what was happening.\n> >\n> > The patches we committed in January (4823621db, 9d23c15a0, 50bebc1ae)\n> > to improve our PG_SYSROOT selection heuristics should theoretically\n> > improve the situation ... though I admit I won't have a lot of\n> > confidence in them till we've been through a couple more rounds of\n> > asynchronous-XCode-and-macOS releases. 
Still, I feel that we\n> > ought to leave that code alone until we see how it does.\n>\n> I mean, we know that it will still break under a number of common\n> circumstances so I think we should be fixing the root cause(improper\n> detection of target deployment API availability in probes) in some\n> way as this will probably continue to be an issue otherwise, we already\n> know that improving PG_SYSROOT selection can not fix the root issue\n> but rather tries to workaround it in a way that is pretty much guaranteed\n> to be brittle.\n>\n> Is there any approach to fixing the root cause of this issue that you think\n> would be acceptable?\n>\n> >\n> > regards, tom lane\n>\n>\n>\n\n-- \nSandeep Thakkar", "msg_date": "Tue, 18 May 2021 13:43:21 +0530", "msg_from": "Sandeep Thakkar <sandeep.thakkar@enterprisedb.com>", "msg_from_op": false,
"msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "On Tue, Mar 30, 2021 at 6:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I'll move it when committing. I'll let this patch sit for another day\n> > to see if any other objections show up.\n>\n> FWIW, I remain fairly strongly against this, precisely because of the\n> point that it requires us to start using a randomly different\n> feature-probing technology anytime Apple decides that they're going to\n> implement some standard API that they didn't before.
Even if it works\n> everywhere for preadv/pwritev (which we won't know in advance of\n> buildfarm testing, and maybe not then, since detection failures will\n> probably be silent), it seems likely that we'll hit some case in the\n> future where this interacts badly with some other platform's weirdness.\n> We haven't claimed in the past to support MACOSX_DEPLOYMENT_TARGET,\n> and I'm not sure we should start now. 
How many people actually care\n> about that?\n>\n\nI missed this earlier - it's come to my attention through a thread on the\n-packagers list. Adding my response on that thread here for this audience:\n\nThe ability to target older releases with a newer SDK is essential for\npackages such as the EDB PostgreSQL installers and the pgAdmin community\ninstallers. It's very difficult (sometimes impossible) to get older OS\nversions on new machines now - Apple make it very hard to download old\nversions of macOS (some can be found, others not), and they won't always\nwork on newer hardware anyway so it's really not feasible to have all the\nbuild machines running the oldest version that needs to be supported.\n\nFYI, the pgAdmin and PG installer buildfarms have\n-mmacosx-version-min=10.12 in CFLAGS etc. to handle this, which is\nsynonymous with MACOSX_DEPLOYMENT_TARGET. We've been successfully building\npackages that way for a decade or more.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 20 May 2021 09:34:20 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false,
"msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "Hi,\n\nDo we see any solution to this issue? or using the older SDK is the way to\ngo?\n\nOn Thu, May 20, 2021 at 2:04 PM Dave Page <dpage@pgadmin.org> wrote:\n\n>\n>\n> On Tue, Mar 30, 2021 at 6:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Thomas Munro <thomas.munro@gmail.com> writes:\n>> > I'll move it when committing. I'll let this patch sit for another day\n>> > to see if any other objections show up.\n>>\n>> FWIW, I remain fairly strongly against this, precisely because of the\n>> point that it requires us to start using a randomly different\n>> feature-probing technology anytime Apple decides that they're going to\n>> implement some standard API that they didn't before. 
Even if it works\n>> everywhere for preadv/pwritev (which we won't know in advance of\n>> buildfarm testing, and maybe not then, since detection failures will\n>> probably be silent), it seems likely that we'll hit some case in the\n>> future where this interacts badly with some other platform's weirdness.\n>> We haven't claimed in the past to support MACOSX_DEPLOYMENT_TARGET,\n>> and I'm not sure we should start now. How many people actually care\n>> about that?\n>>\n>\n> I missed this earlier - it's come to my attention through a thread on the\n> -packagers list. Adding my response on that thread here for this audience:\n>\n> The ability to target older releases with a newer SDK is essential for\n> packages such as the EDB PostgreSQL installers and the pgAdmin community\n> installers. It's very difficult (sometimes impossible) to get older OS\n> versions on new machines now - Apple make it very hard to download old\n> versions of macOS (some can be found, others not), and they won't always\n> work on newer hardware anyway so it's really not feasible to have all the\n> build machines running the oldest version that needs to be supported.\n>\n> FYI, the pgAdmin and PG installer buildfarms have\n> -mmacosx-version-min=10.12 in CFLAGS etc. to handle this, which is\n> synonymous with MACOSX_DEPLOYMENT_TARGET. We've been successfully building\n> packages that way for a decade or more.\n>\n> --\n> Dave Page\n> Blog: https://pgsnake.blogspot.com\n> Twitter: @pgsnake\n>\n> EDB: https://www.enterprisedb.com\n>\n>\n\n-- \nSandeep Thakkar", "msg_date": "Mon, 21 Jun 2021 10:02:44 +0530", "msg_from": "Sandeep Thakkar <sandeep.thakkar@enterprisedb.com>", "msg_from_op": false,
"msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "On Mon, Jun 21, 2021 at 4:32 PM Sandeep Thakkar\n<sandeep.thakkar@enterprisedb.com> wrote:\n> Do we see any solution to this issue? or using the older SDK is the way to go?\n\n> On Thu, May 20, 2021 at 2:04 PM Dave Page <dpage@pgadmin.org> wrote:\n>> The ability to target older releases with a newer SDK is essential for packages such as the EDB PostgreSQL installers and the pgAdmin community installers. It's very difficult (sometimes impossible) to get older OS versions on new machines now - Apple make it very hard to download old versions of macOS (some can be found, others not), and they won't always work on newer hardware anyway so it's really not feasible to have all the build machines running the oldest version that needs to be supported.\n>>\n>> FYI, the pgAdmin and PG installer buildfarms have -mmacosx-version-min=10.12 in CFLAGS etc. to handle this, which is synonymous with MACOSX_DEPLOYMENT_TARGET. We've been successfully building packages that way for a decade or more.\n\nI'm not personally against the proposed change. 
I'll admit there is\nsomething annoying about Apple's environment working in a way that\ndoesn't suit traditional configure macros that have been the basis of\nportable software for a few decades, but when all's said and done,\nconfigure is a Unix wars era way to make things work across all the\nUnixes, and most of them are long gone, configure itself is on the way\nout, and Apple's still here, so...\n\nOn a more practical note, rereading Tom's objection and Dave's\ncounter-objection, I'm left wondering if people would be happy with a\nmanual control for this, something you can pass to configure to stop\nit from using pwritev/preadv even if detected. That would at least\nlocalise the effects, avoiding the sorts of potential unintended\nconsequences Tom mentioned.\n\n\n", "msg_date": "Mon, 21 Jun 2021 17:22:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "On 21.06.21 07:22, Thomas Munro wrote:\n> I'm not personally against the proposed change. I'll admit there is\n> something annoying about Apple's environment working in a way that\n> doesn't suit traditional configure macros that have been the basis of\n> portable software for a few decades, but when all's said and done,\n> configure is a Unix wars era way to make things work across all the\n> Unixes, and most of them are long gone, configure itself is on the way\n> out, and Apple's still here, so...\n\nI think this change is perfectly appropriate (modulo some small cleanups).\n\nThe objection was that you cannot reliably use AC_CHECK_FUNCS (and \ntherefore AC_REPLACE_FUNCS) anymore, but that has always been true, \nsince AC_CHECK_FUNCS doesn't handle macros, compiler built-ins, and \nfunctions that are not declared, and any other situation where looking \nfor a symbol in a library is not the same as checking whether the symbol \nactual works for your purpose. 
This is not too different from the long \ntransition from \"does this header file exists\" to \"can I compile this \nheader file\".\n\nSo in fact the correct way forward would be to get rid of all uses of \nAC_CHECK_FUNCS and related, and then this problem goes away by itself.\n\n\n\n", "msg_date": "Sat, 3 Jul 2021 13:38:53 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I think this change is perfectly appropriate (modulo some small cleanups).\n\nI think there are a couple of issues here.\n\n1. People who are already using MACOSX_DEPLOYMENT_TARGET to control\ntheir builds would like to keep on doing so, but the AC_CHECK_FUNCS\nprobe doesn't work with that. James' latest patch seems like a\nreasonable fix for that (it's a lot less invasive than where we\nstarted from). There is a worry about side-effects on other\nplatforms, but I don't see an answer to that that's better than\n\"throw it on the buildfarm and see if anything breaks\".\n\nHowever ...\n\n2. We'd really like to use preadv/pwritev where available. I\nmaintain that MACOSX_DEPLOYMENT_TARGET is not only not the right\napproach to that, but it's actually counterproductive. It forces\nyou to build for the lowest common denominator, ie the oldest macOS\nrelease you want to support. Even when running on a release that\nhas pwritev, your build will never use it.\n\nAs far as I can tell, the only way to really deal with #2 is to\nperform a runtime dlsym() probe to see whether pwritev exists, and\nthen fall back to our src/port/ implementation if not. 
This does\nnot look particularly hard (especially since the lookup code only\nneeds to work on macOS), though I admit I've not tried to code it.\n\nWhat's unclear to me at the moment is whether #1 and #2 interact,\nie is there still any point in changing configure's probes if\nwe put in a runtime check on Darwin? I think that we might want\nto pay no attention to what the available header files say about\npwritev, as long as we can get the correct 'struct iovec'\ndeclaration from them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Jul 2021 16:34:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "On Tue, Jul 6, 2021 at 2:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > I think this change is perfectly appropriate (modulo some small cleanups).\n>\n> I think there are a couple of issues here.\n>\n> 1. People who are already using MACOSX_DEPLOYMENT_TARGET to control\n> their builds would like to keep on doing so, but the AC_CHECK_FUNCS\n> probe doesn't work with that. James' latest patch seems like a\n> reasonable fix for that (it's a lot less invasive than where we\n> started from). There is a worry about side-effects on other\n> platforms, but I don't see an answer to that that's better than\n> \"throw it on the buildfarm and see if anything breaks\".\n>\n> However ...\n>\n> 2. We'd really like to use preadv/pwritev where available. I\n> maintain that MACOSX_DEPLOYMENT_TARGET is not only not the right\n> approach to that, but it's actually counterproductive. It forces\n> you to build for the lowest common denominator, ie the oldest macOS\n> release you want to support. 
Even when running on a release that\n> has pwritev, your build will never use it.\n>\n> As far as I can tell, the only way to really deal with #2 is to\n> perform a runtime dlsym() probe to see whether pwritev exists, and\n> then fall back to our src/port/ implementation if not. This does\n> not look particularly hard (especially since the lookup code only\n> needs to work on macOS), though I admit I've not tried to code it.\n\nMaybe we just need to do something like this?:\nhttps://lists.gnu.org/archive/html/qemu-devel/2021-01/msg06499.html\n\n>\n> What's unclear to me at the moment is whether #1 and #2 interact,\n> ie is there still any point in changing configure's probes if\n> we put in a runtime check on Darwin? I think that we might want\n> to pay no attention to what the available header files say about\n> pwritev, as long as we can get the correct 'struct iovec'\n> declaration from them.\n>\n> regards, tom lane\n\n\n", "msg_date": "Tue, 6 Jul 2021 14:49:21 -0600", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "James Hilliard <james.hilliard1@gmail.com> writes:\n> On Tue, Jul 6, 2021 at 2:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> As far as I can tell, the only way to really deal with #2 is to\n>> perform a runtime dlsym() probe to see whether pwritev exists, and\n>> then fall back to our src/port/ implementation if not. This does\n>> not look particularly hard (especially since the lookup code only\n>> needs to work on macOS), though I admit I've not tried to code it.\n\n> Maybe we just need to do something like this?:\n> https://lists.gnu.org/archive/html/qemu-devel/2021-01/msg06499.html\n\nHm. I don't trust __builtin_available too much --- does it exist\non really old macOS? dlsym() seems less likely to create problems\nin that respect. 
Of course, if there are scenarios where\n__builtin_available knows that the symbol is available but doesn't\nwork, then we might have to use it.\n\nIn any case, I don't like their superstructure around the probe.\nWhat I'd prefer is a function pointer that initially points to\nthe probe code, and then we change it to point at either pwritev\nor the pg_pwritev emulation. Certainly failing with ENOSYS is\nexactly what *not* to do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Jul 2021 16:58:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "On 06.07.21 22:34, Tom Lane wrote:\n> 2. We'd really like to use preadv/pwritev where available.\n\nA couple of things that I haven't seen made clear in this thread yet:\n\n- Where is the availability boundary for preadv/pwritev on macOS?\n\n- What is the impact of having vs. not having these functions?\n\n> I\n> maintain that MACOSX_DEPLOYMENT_TARGET is not only not the right\n> approach to that, but it's actually counterproductive. It forces\n> you to build for the lowest common denominator, ie the oldest macOS\n> release you want to support. Even when running on a release that\n> has pwritev, your build will never use it.\n\nI think this is just the way that building backward-compatible binaries \non macOS (and Windows) works. You have to pick a target that is old \nenough to capture enough of your audience but not too old to miss out on \ninteresting new OS features. People who build GUI applications for \nmacOS, iOS, etc. face this trade-off all the time; for POSIX-level \nprogramming things just move slower so that the questions present \nthemselves less often. I don't think we need to go out of our way to \nfight this system. This is something users will have opted into after \nall. 
Users who want Linux-style rebuilt-for-every-release binaries have \nthose options available on macOS as well.\n\n\n", "msg_date": "Mon, 12 Jul 2021 21:39:42 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "On Tue, Jul 13, 2021 at 7:39 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 06.07.21 22:34, Tom Lane wrote:\n> > 2. We'd really like to use preadv/pwritev where available.\n>\n> A couple of things that I haven't seen made clear in this thread yet:\n>\n> - Where is the availability boundary for preadv/pwritev on macOS?\n\nBig Sur (11) added these.\n\n> - What is the impact of having vs. not having these functions?\n\nThe impact is very low. PG14 only uses pg_pwritev() to fill in new\nWAL files as a sort of a warm-up exercise, and it'll happy use lseek()\n+ writev() instead. In future proposals they would be used to do\ngeneral scatter/gather I/O for data files as I showed in another\nemail[1], but that's way off and far from certain, and even then it's\njust a matter of avoiding an lseek() call on vectored I/O. As for how\nlong Apple will support 10.15, they don't seem to publish a roadmap,\nbut people seem to say the pattern would have security updates ending\nsome time in 2022 (?). I don't know if EDB targets macOS older than\nApple supports, but given the very low impact and all these time\nframes it seems OK to just not use the new syscalls on macOS for a\ncouple more years at least, whatever mechanism is chosen for that.\n\nClearly there is a more general question though, which is \"should we\nbuy into Apple's ABI management system or not\", and I don't have a\nstrong opinion on that. 
One thing I do know is that\npthread_barrier_XXX changed from option to required in a recentish\nPOSIX update so I expect the question to come up again eventually.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGK-563RQWQQF4NLajbQk%2B65gYHdb1q%3D7p3Ob0Uvrxoa9g%40mail.gmail.com\n\n\n", "msg_date": "Tue, 13 Jul 2021 08:44:53 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Clearly there is a more general question though, which is \"should we\n> buy into Apple's ABI management system or not\", and I don't have a\n> strong opinion on that.\n\nWell, I definitely don't wish to clutter our core code with any\nexplicit dependencies on MACOSX_DEPLOYMENT_TARGET. However,\nif we can work around the issue by switching from AC_REPLACE_FUNCS\nto AC_CHECK_DECLS, maybe we should just do that and quit arguing\nabout it. It seems like there's not a lot of enthusiasm for my idea\nabout installing a run-time probe, so I won't push for that.\n\n> One thing I do know is that\n> pthread_barrier_XXX changed from option to required in a recentish\n> POSIX update so I expect the question to come up again eventually.\n\nYeah, we can expect that the issue will arise again, which is why\nI was so unhappy with the rather-invasive patches we started with.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 12 Jul 2021 18:03:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX." } ]
[ { "msg_contents": "Greetings -hackers,\n\nGoogle Summer of Code is back for 2021! They have changed some of how\nGSoC is going to work for this year, for a variety of reasons, so please\nbe sure to read this email and posts linked for the updates if you're\ninterested!\n\nEveryone interested in suggesting projects or mentoring should review\nthe blog post here regarding the changes:\n\nhttps://opensource.googleblog.com/2020/10/google-summer-of-code-2021-is-bringing.html\n\nNow is the time to work on getting together a set of projects we'd\nlike to have GSoC students work on over the summer. Similar to last\nyear, we need to have a good set of projects for students to choose from\nin advance of the deadline for mentoring organizations.\n\nHOWEVER, as noted in the blog post above, project length expectations\nhave changed. Projects for GSoC 2021 are to be 175-hours and be run\nover a 10-week period. This is a reduction from 30 hours per week to\nonly 18 hours per week, with the coding part being only 10 weeks instead\nof 12. With this, there will also only be two evaluation periods\ninstead of three.\n\nGSoC timeline: https://developers.google.com/open-source/gsoc/timeline\n\nOne other thing to note is that \"bootcamp\" enrolled students will be\neligible in 2021 in addition to university students, broadening the pool\nof potential applicants.\n\nThe deadline for Mentoring organizations to apply is: February 19.\n\nThe list of accepted organization will be published around March 9\n\nUnsurprisingly, we'll need to have an Ideas page again, so I've gone\nahead and created one (copying last year's):\n\nhttps://wiki.postgresql.org/wiki/GSoC_2021\n\nGoogle discusses what makes a good \"Ideas\" list here:\n\nhttps://google.github.io/gsocguides/mentor/defining-a-project-ideas-list.html\n\nAll the entries are marked with '2020' to indicate they were pulled from\nlast year. 
If the project from last year is still relevant, please\nupdate it to be '2021' and make sure to update all of the information\n(in particular, make sure to list yourself as a mentor and remove the\nother mentors, as appropriate). Please also be sure to update the\nproject's scope to be appropriate for the reduced time that's being\nasked of students this year.\n\nNew entries are certainly welcome and encouraged, just be sure to note\nthem as '2021' when you add it.\n\nProjects from last year which were worked on but have significant\nfollow-on work to be completed are absolutely welcome as well- simply\nupdate the description appropriately and mark it as being for '2021'.\n\nWhen we get closer to actually submitting our application, I'll clean\nout the '2020' entries that didn't get any updates. Also- if there are\nany projects that are no longer appropriate (maybe they were completed,\nfor example and no longer need work), please feel free to remove them.\nI took a whack at that myself but it's entirely possible I missed some\nupdates where a GSoC project was completed independently of GSoC (and\nif I removed any that shouldn't have been- feel free to add them back\nby copying from the 2020 page).\n\nAs a reminder, each idea on the page should be in the format that the\nother entries are in and should include:\n\n- Project title/one-line description\n- Brief, 2-5 sentence, description of the project (remember, these are\n 10-week projects with only 18 hours per week this year)\n- Description of programming skills needed and estimation of the\n difficulty level\n- List of potential mentors\n- Expected Outcomes\n\nAs with last year, please consider PostgreSQL to be an \"Umbrella\"\nproject and that anything which would be considered \"PostgreSQL Family\"\nper the News/Announce policy [1] is likely to be acceptable as a\nPostgreSQL GSoC project.\n\nIn other words, if you're a contributor or developer on WAL-G, barman,\npgBackRest, the PostgreSQL website (pgweb), the 
PgEU/PgUS website code\n(pgeu-system), pgAdmin4, pgbouncer, pldebugger, the PG RPMs (pgrpms),\nthe JDBC driver, the ODBC driver, or any of the many other PG Family\nprojects, please feel free to add a project for consideration! If we\nget quite a few, we can organize the page further based on which\nproject or maybe what skills are needed or similar.\n\nLet's have another great year of GSoC with PostgreSQL!\n\nThanks!\n\nStephen\n\n[1]: https://www.postgresql.org/about/policies/news-and-events/", "msg_date": "Fri, 22 Jan 2021 16:40:53 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "GSoC 2021" } ]
[ { "msg_contents": "Hi all\n\nI was surprised to see that there's no way to get `VACUUM VERBOSE`-like\noutput from autovacuum. Is there any interest in enabling this?\n\nAdditionally, is there any interest in exposing more vacuum options to be\nrun by autovac? Right now it runs FREEZE and ANALYZE, which leaves the\nVERBOSE, SKIP_LOCKED, INDEX_CLEANUP, and TRUNCATE unconfigurable. I skipped\nFULL in that list because I'm assuming no one would ever want autovac to\nrun VACUUM FULL.\n\n\nTommy", "msg_date": "Fri, 22 Jan 2021 14:06:11 -0800", "msg_from": "Tommy Li <tommy@coffeemeetsbagel.com>", "msg_from_op": true, "msg_subject": "a verbose option for autovacuum" }, { "msg_contents": "Tommy Li <tommy@coffeemeetsbagel.com> writes:\n> I was surprised to see that there's no way to get `VACUUM VERBOSE`-like\n> output from autovacuum. Is there any interest in enabling this?\n\nSeems like that would very soon feel like log spam. What would be the\nuse case for having this on? If you want one-off results you could\nrun VACUUM manually.\n\n> Additionally, is there any interest in exposing more vacuum options to be\n> run by autovac? Right now it runs FREEZE and ANALYZE, which leaves the\n> VERBOSE, SKIP_LOCKED, INDEX_CLEANUP, and TRUNCATE unconfigurable.\n\nTo the extent that any of these make sense in autovacuum, I'd say they\nought to be managed automatically. I don't see a strong argument for\nusers configuring this. 
(See also nearby thread about allowing index\nAMs to control some of this.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Jan 2021 17:33:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "Hey Tom\n\n> Seems like that would very soon feel like log spam. What would be the\n> use case for having this on? If you want one-off results you could\n> run VACUUM manually.\n\nIn my case I have a fairly large, fairly frequently updated table with a\nlarge number of indexes where autovacuum's runtime can fluctuate between 12\nand 24 hours. If I want to investigate why autovacuum today is running many\nhours longer than it did last week, the only information I have to go off\nis from pg_stat_progress_vacuum, which reports only progress based on the\nnumber of blocks completed across _all_ indexes.\n\nVACUUM VERBOSE's output is nice because it reports runtime per index, which\nwould allow me to see if a specific index has bloated more than usual.\n\nI also have autovacuum throttled much more aggressively than manual\nvacuums, so information from a one-off manual VACUUM isn't comparable.\n\nAs for log spam, I'm not sure it's a problem as long as the verbose option\nis disabled by default.\n\n\nTommy\n\nOn Fri, Jan 22, 2021 at 2:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Tommy Li <tommy@coffeemeetsbagel.com> writes:\n> > I was surprised to see that there's no way to get `VACUUM VERBOSE`-like\n> > output from autovacuum. Is there any interest in enabling this?\n>\n> Seems like that would very soon feel like log spam. What would be the\n> use case for having this on? If you want one-off results you could\n> run VACUUM manually.\n>\n> > Additionally, is there any interest in exposing more vacuum options to be\n> > run by autovac? 
Right now it runs FREEZE and ANALYZE, which leaves the\n> > VERBOSE, SKIP_LOCKED, INDEX_CLEANUP, and TRUNCATE unconfigurable.\n>\n> To the extent that any of these make sense in autovacuum, I'd say they\n> ought to be managed automatically. I don't see a strong argument for\n> users configuring this. (See also nearby thread about allowing index\n> AMs to control some of this.)\n>\n> regards, tom lane\n>
", "msg_date": "Fri, 22 Jan 2021 14:55:10 -0800", "msg_from": "Tommy Li <tommy@coffeemeetsbagel.com>", "msg_from_op": true, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "Greetings,\n\nOn Fri, Jan 22, 2021 at 2:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Tommy Li <tommy@coffeemeetsbagel.com> writes:\n> > Additionally, is there any interest in exposing more vacuum options to be\n> > run by autovac? Right now it runs FREEZE and ANALYZE, which leaves the\n> > VERBOSE, SKIP_LOCKED, INDEX_CLEANUP, and TRUNCATE unconfigurable.\n>\n> To the extent that any of these make sense in autovacuum, I'd say they\n> ought to be managed automatically. I don't see a strong argument for\n> users configuring this. (See also nearby thread about allowing index\n> AMs to control some of this.)\n\nI agree that it'd be nice to figure out some way to have these managed\nautomatically, but it's probably useful to point out to Tommy that you\ncan set vacuum options on a table level which autovacuum should respect,\nsuch as vacuum_index_cleanup and vacuum_truncate. For skip locked,\nautovacuum already will automatically release it's attempt to acquire a\nlock if someone backs up behind it for too long.\n\nUntil we get something automatic though, I could see being able to set\nTRUNCATE, in particular, to be off globally as useful when running a\nsystem with replicas that might end up having queries which block WAL\nreplay. 
If no one is stepping up to build some way to handle that\nautomatically then I'd be in support of making it something that an\nadministrator can configure (to avoid having to always remember to do it\nfor each table created...).\n\n* Tommy Li (tommy@coffeemeetsbagel.com) wrote:\n> > Seems like that would very soon feel like log spam. What would be the\n> > use case for having this on? If you want one-off results you could\n> > run VACUUM manually.\n> \n> In my case I have a fairly large, fairly frequently updated table with a\n> large number of indexes where autovacuum's runtime can fluctuate between 12\n> and 24 hours. If I want to investigate why autovacuum today is running many\n> hours longer than it did last week, the only information I have to go off\n> is from pg_stat_progress_vacuum, which reports only progress based on the\n> number of blocks completed across _all_ indexes.\n> \n> VACUUM VERBOSE's output is nice because it reports runtime per index, which\n> would allow me to see if a specific index has bloated more than usual.\n> \n> I also have autovacuum throttled much more aggressively than manual\n> vacuums, so information from a one-off manual VACUUM isn't comparable.\n\nI tend to agree that this is pretty useful information to have included\nwhen trying to figure out what autovacuum's doing.\n\n> As for log spam, I'm not sure it's a problem as long as the verbose option\n> is disabled by default.\n\nWhile this would be in-line with our existing dismal logging defaults,\nI'd be happier with a whole bunch more logging enabled by default,\nincluding this, so that we don't have to tell everyone who deploys PG to\ngo enable this very sensible logging. Arguments of 'log spam' really\nfall down when you're on the receiving end of practically empty PG logs\nand trying to figure out what's going on. 
Further, log analysis tools\nexist to aggregate this data and bring it up to a higher level for\nadministrators.\n\nStill, I'd much rather have the option, even if disabled by default,\nthan not have it at all.\n\nThanks,\n\nStephen", "msg_date": "Sat, 23 Jan 2021 13:11:17 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "Hi Stephen\n\n> ... can set vacuum options on a table level which autovacuum should\nrespect,\n> such as vacuum_index_cleanup and vacuum_truncate. For skip locked,\n> autovacuum already will automatically release it's attempt to acquire a\n> lock if someone backs up behind it for too long.\n\nThis is good information, I wasn't aware that autovacuum respected those\nsettings.\nIn that case I'd like to focus specifically on the verbose aspect.\n\nMy first thought was a new boolean configuration called\n\"autovacuum_verbose\".\nI'd want it to behave similarly to autovacuum_vacuum_cost_limit in that it\ncan be\nset globally or on a per-table basis.\n\nThoughts?\n\n\nTommy
", "msg_date": "Mon, 25 Jan 2021 09:46:28 -0800", "msg_from": "Tommy Li <tommy@coffeemeetsbagel.com>", "msg_from_op": true, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Tue, Jan 26, 2021 at 2:46 AM Tommy Li <tommy@coffeemeetsbagel.com> wrote:\n>\n> Hi Stephen\n>\n> > ... can set vacuum options on a table level which autovacuum should respect,\n> > such as vacuum_index_cleanup and vacuum_truncate. For skip locked,\n> > autovacuum already will automatically release it's attempt to acquire a\n> > lock if someone backs up behind it for too long.\n>\n> This is good information, I wasn't aware that autovacuum respected those settings.\n> In that case I'd like to focus specifically on the verbose aspect.\n>\n> My first thought was a new boolean configuration called \"autovacuum_verbose\".\n> I'd want it to behave similarly to autovacuum_vacuum_cost_limit in that it can be\n> set globally or on a per-table basis.\n\nI agree to have autovacuum log more information, especially index\nvacuums. Currently, the log related to index vacuum is only the number\nof index scans. I think it would be helpful if the log has more\ndetails about each index vacuum.\n\nBut I'm not sure that neither always logging that nor having set the\nparameter per-table basis is a good idea. In the former case, it could\nbe log spam for example in the case of anti-wraparound vacuums that\nvacuums on all tables (and their indexes) in the database. 
If we set\nit per-table basis, it’s useful when the user already knows which\ntables are likely to take a long time for autovacuum but won’t work\nwhen the users want to check the autovacuum details for tables that\nautovacuum could take a long time for.\n\nGiven that we already have log_autovacuum_min_duration, I think this\nverbose logging should work together with that. I’d prefer to enable\nthe verbose logging by default for the same reason Stephen mentioned.\nOr maybe we can have a parameter to control verbosity, say\nlog_autovaucum_verbosity.\n\nRegarding when to log, we can have autovacuum emit index vacuum log\nafter each lazy_vacuum/cleanup_index() end like VACUUM VERBOSE does,\nbut I’m not sure how it could work together with\nlog_autovacuum_min_duration. So one idea could be to have autovacuum\nemit a log for each index vacuum statistics along with the current\nautovacuum logs at the end of lazy vacuum if the execution time\nexceeds log_autovacuum_min_duration. A downside would be one\nautovacuum log could be very long if the table has many indexes, and\nwe still don’t know how much time taken for each index vacuum. But you\ncan see if a specific index has bloated more than usual. 
And for the\nlatter part, we would be able to add the statistics of execution time\nfor each vacuum phase to the log as a further improvement.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 29 Jan 2021 16:35:23 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "Hi Masahiko\n\n> If we set\n> it per-table basis, it’s useful when the user already knows which\n> tables are likely to take a long time for autovacuum\n\nI would assume that's the default case, most apps I've seen are designed\naround a small\nnumber of large tables that take up most of the maintenance effort\n\n> Regarding when to log, we can have autovacuum emit index vacuum log\n> after each lazy_vacuum/cleanup_index() end like VACUUM VERBOSE does,\n> but I’m not sure how it could work together with\n> log_autovacuum_min_duration.\n\nI do like having this rolled into the existing configuration. This might be\nan absurd idea, but\nwhat if the autovacuum process accumulates the per-index vacuum information\nuntil that\nthreshold is reached, and then spits out the logs all at once? And after\nthe min duration is\npassed, it just logs the rest of the index vacuum information as they\noccur. 
That way the\ninformation is more likely to be available to investigate an abnormally\nlong running vacuum\nwhile it's still happening.\n\n\nTommy", "msg_date": "Mon, 1 Feb 2021 16:59:35 -0800", "msg_from": "Tommy Li <tommy@coffeemeetsbagel.com>", "msg_from_op": true, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Fri, Jan 29, 2021, at 4:35 AM, Masahiko Sawada wrote:\n> I agree to have autovacuum log more information, especially index\n> vacuums. Currently, the log related to index vacuum is only the number\n> of index scans. I think it would be helpful if the log has more\n> details about each index vacuum.\n+1 for this feature. Sometimes this analysis is useful to confirm your theory;\nwithout data, it is just a wild guess.\n\n> But I'm not sure that neither always logging that nor having set the\n> parameter per-table basis is a good idea. 
In the former case, it could\n> be log spam for example in the case of anti-wraparound vacuums that\n> vacuums on all tables (and their indexes) in the database. If we set\n> it per-table basis, it’s useful when the user already knows which\n> tables are likely to take a long time for autovacuum but won’t work\n> when the users want to check the autovacuum details for tables that\n> autovacuum could take a long time for.\nI prefer a per-table parameter since it allows us a fine-grained tuning. It\ncovers the cases you provided above. You can disable it at all and only enable\nit in critical tables or enable it and disable it for known-to-be-spam tables.\n\n> Given that we already have log_autovacuum_min_duration, I think this\n> verbose logging should work together with that. I’d prefer to enable\n> the verbose logging by default for the same reason Stephen mentioned.\n> Or maybe we can have a parameter to control verbosity, say\n> log_autovaucum_verbosity.\nIMO this new parameter is just an option to inject VERBOSE into VACUUM command.\nSince there is already a parameter to avoid spam autovacuum messages, this\nfeature shouldn't hijack log_autovacuum_min_duration behavior. If the\nautovacuum command execution time runs less than l_a_m_d, the output should be\ndiscarded.\n\nI don't have a strong opinion about this parameter name but I think your\nsuggestion (log_autovaccum_verbosity) is easier to guess what this parameter is\nfor.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/
", "msg_date": "Mon, 01 Feb 2021 22:51:33 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Tue, Feb 2, 2021 at 9:59 AM Tommy Li <tommy@coffeemeetsbagel.com> wrote:\n>\n> Hi Masahiko\n>\n> > If we set\n> > it per-table basis, it’s useful when the user already knows which\n> > tables are likely to take a long time for autovacuum\n>\n> I would assume that's the default case, most apps I've seen are designed around a small\n> number of large tables that take up most of the maintenance effort\n>\n> > Regarding when to log, we can have autovacuum emit index vacuum log\n> > after each lazy_vacuum/cleanup_index() end like VACUUM VERBOSE does,\n> > but I’m not sure how it could work together with\n> > log_autovacuum_min_duration.\n>\n> I do like having this rolled into the existing configuration. This might be an absurd idea, but\n> what if the autovacuum process accumulates the per-index vacuum information until that\n> threshold is reached, and then spits out the logs all at once? And after the min duration is\n> passed, it just logs the rest of the index vacuum information as they occur. That way the\n> information is more likely to be available to investigate an abnormally long running vacuum\n> while it's still happening.\n\nSince index vacuum can be executed more than once within an\nautovacuum, we need to keep all of them. It's not impossible.\n\nAs the second idea, I think showing index vacuum statistics (i.g.,\nwhat lazy_cleanup_index shows) together with the current autovacuum\nlogs might be a good start. 
The autovacuum log becomes like follows:\n\n* HEAD\nLOG: automatic vacuum of table \"postgres.public.test\": index scans: 1\npages: 0 removed, 443 remain, 0 skipped due to pins, 0 skipped frozen\ntuples: 1000 removed, 99000 remain, 0 are dead but not yet removable,\noldest xmin: 545\nbuffer usage: 2234 hits, 4 misses, 4 dirtied\navg read rate: 0.504 MB/s, avg write rate: 0.504 MB/s\nsystem usage: CPU: user: 0.03 s, system: 0.00 s, elapsed: 0.06 s\nWAL usage: 2162 records, 4 full page images, 159047 bytes\n\n* Proposed idea\nLOG: automatic vacuum of table \"postgres.public.test\": index scans: 1\npages: 0 removed, 443 remain, 0 skipped due to pins, 0 skipped frozen\ntuples: 1000 removed, 99000 remain, 0 are dead but not yet removable,\noldest xmin: 545\nindexes: \"postgres.public.test_idx1\" 276 pages, 0 newly deleted, 0\ncurrently deleted, 0 reusable.\n\"postgres.public.test_idx2\" 300 pages, 10 newly deleted, 0 currently\ndeleted, 3 reusable.\n\"postgres.public.test_idx2\" 310 pages, 4 newly deleted, 0 currently\ndeleted, 0 reusable.\nbuffer usage: 2234 hits, 4 misses, 4 dirtied\navg read rate: 0.504 MB/s, avg write rate: 0.504 MB/s\nsystem usage: CPU: user: 0.03 s, system: 0.00 s, elapsed: 0.06 s\nWAL usage: 2162 records, 4 full page images, 159047 bytes\n\nIt still lacks some of what VACUUM VERBOSE shows (e.g., each index\nvacuum execution time etc) but it would be enough information to know\nthe index page statistics. 
Probably we can output those by default\nwithout adding a new parameter controlling that.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 8 Mar 2021 14:32:45 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Tue, Feb 2, 2021 at 10:51 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Fri, Jan 29, 2021, at 4:35 AM, Masahiko Sawada wrote:\n>\n> I agree to have autovacuum log more information, especially index\n> vacuums. Currently, the log related to index vacuum is only the number\n> of index scans. I think it would be helpful if the log has more\n> details about each index vacuum.\n>\n> +1 for this feature. Sometimes this analysis is useful to confirm your theory;\n> without data, it is just a wild guess.\n>\n> But I'm not sure that neither always logging that nor having set the\n> parameter per-table basis is a good idea. In the former case, it could\n> be log spam for example in the case of anti-wraparound vacuums that\n> vacuums on all tables (and their indexes) in the database. If we set\n> it per-table basis, it’s useful when the user already knows which\n> tables are likely to take a long time for autovacuum but won’t work\n> when the users want to check the autovacuum details for tables that\n> autovacuum could take a long time for.\n>\n> I prefer a per-table parameter since it allows us a fine-grained tuning. It\n> covers the cases you provided above. You can disable it at all and only enable\n> it in critical tables or enable it and disable it for known-to-be-spam tables.\n>\n> Given that we already have log_autovacuum_min_duration, I think this\n> verbose logging should work together with that. 
I’d prefer to enable\n> the verbose logging by default for the same reason Stephen mentioned.\n> Or maybe we can have a parameter to control verbosity, say\n> log_autovaucum_verbosity.\n>\n> IMO this new parameter is just an option to inject VERBOSE into VACUUM command.\n> Since there is already a parameter to avoid spam autovacuum messages, this\n> feature shouldn't hijack log_autovacuum_min_duration behavior. If the\n> autovacuum command execution time runs less than l_a_m_d, the output should be\n> discarded.\n\nYeah, if autovacuum execution time doesn't exceed\nlog_autovacuum_min_duration, the output should be discarded. My idea\nis to show autovacuum log along with the new information about index\nvacuum etc if autovacuum execution time exceeds the threshold.\n\nRegarding the new parameter being discussed here, I think it depends\non how much the amount of autovacuum logs increases. I think if we\nadded a few information about indexes to the current autovacuum log\nthe new parameter would not required. So I just posted[1] another idea\nto show only index page statistics (i.g., what lazy_cleanup_index()\nshows) along with the current autovacuum logs. 
Please refer to it.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoAy6SxHiTivh5yAPJSUE4S%3DQRPpSZUdafOSz0R%2BfRcM6Q%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 8 Mar 2021 14:47:57 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Mon, Mar 8, 2021, at 2:32 AM, Masahiko Sawada wrote:\n> * Proposed idea\n> LOG: automatic vacuum of table \"postgres.public.test\": index scans: 1\n> pages: 0 removed, 443 remain, 0 skipped due to pins, 0 skipped frozen\n> tuples: 1000 removed, 99000 remain, 0 are dead but not yet removable,\n> oldest xmin: 545\n> indexes: \"postgres.public.test_idx1\" 276 pages, 0 newly deleted, 0\n> currently deleted, 0 reusable.\n> \"postgres.public.test_idx2\" 300 pages, 10 newly deleted, 0 currently\n> deleted, 3 reusable.\n> \"postgres.public.test_idx2\" 310 pages, 4 newly deleted, 0 currently\n> deleted, 0 reusable.\nInstead of using \"indexes:\" and add a list of indexes (one on each line), it \nwould be more parse-friendly if it prints one index per line using 'index \n\"postgres.public.idxname\" 123 pages, 45 newly deleted, 67 currently deleted, 8 \nreusable.'.\n\n> It still lacks some of what VACUUM VERBOSE shows (e.g., each index\n> vacuum execution time etc) but it would be enough information to know\n> the index page statistics. Probably we can output those by default\n> without adding a new parameter controlling that.\nPerfect is the enemy of the good. 
Let start with this piece of information.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/
", "msg_date": "Mon, 08 Mar 2021 12:57:36 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Wed, Mar 10, 2021, at 12:46 AM, Masahiko Sawada wrote:\n> Attached a patch. I've slightly modified the format for consistency\n> with heap statistics.\nSince commit 5f8727f5a6, this patch doesn't apply anymore. Fortunately, it is\njust a small hunk. I reviewed this patch and it looks good to me. 
There is just\na small issue (double space after 'if') that I fixed in the attached version.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Wed, 17 Mar 2021 08:50:26 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Wed, Mar 17, 2021 at 08:50:26AM -0300, Euler Taveira wrote:\n> Since commit 5f8727f5a6, this patch doesn't apply anymore. Fortunately, it is\n> just a small hunk. I reviewed this patch and it looks good to me. There is just\n> a small issue (double space after 'if') that I fixed in the attached version.\n\nNo major objections to what you are proposing here.\n\n> \t/* and log the action if appropriate */\n> -\tif (IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0)\n> +\tif (IsAutoVacuumWorkerProcess())\n> \t{\n> -\t\tTimestampTz endtime = GetCurrentTimestamp();\n> +\t\tTimestampTz endtime = 0;\n> +\t\tint i;\n> \n> -\t\tif (params->log_min_duration == 0 ||\n> -\t\t\tTimestampDifferenceExceeds(starttime, endtime,\n> -\t\t\t\t\t\t\t\t\t params->log_min_duration))\n> +\t\tif (params->log_min_duration >= 0)\n> +\t\t\tendtime = GetCurrentTimestamp();\n> +\n> +\t\tif (endtime > 0 &&\n> +\t\t\t(params->log_min_duration == 0 ||\n> +\t\t\t TimestampDifferenceExceeds(starttime, endtime,\n\nWhy is there any need to actually change this part? If I am following\nthe patch correctly, the reason why you are doing things this way is\nto free the set of N statistics all the time for autovacuum. However,\nwe could make that much simpler, and your patch is already half-way\nthrough that by adding the stats of the N indexes to LVRelStats. Here\nis the idea:\n- Allocation of the N items for IndexBulkDeleteResult at the beginning\nof heap_vacuum_rel(). 
It seems to me that we are going to need the N\nindex names within LVRelStats, to be able to still call\nvac_close_indexes() *before* logging the stats.\n- No need to pass down indstats for the two callers of\nlazy_vacuum_all_indexes().\n- Clean up of vacrelstats once heap_vacuum_rel() is done with it.\n--\nMichael", "msg_date": "Thu, 18 Mar 2021 15:41:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Thu, Mar 18, 2021 at 3:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Mar 17, 2021 at 08:50:26AM -0300, Euler Taveira wrote:\n> > Since commit 5f8727f5a6, this patch doesn't apply anymore. Fortunately, it is\n> > just a small hunk. I reviewed this patch and it looks good to me. There is just\n> > a small issue (double space after 'if') that I fixed in the attached version.\n>\n> No major objections to what you are proposing here.\n>\n> > /* and log the action if appropriate */\n> > - if (IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0)\n> > + if (IsAutoVacuumWorkerProcess())\n> > {\n> > - TimestampTz endtime = GetCurrentTimestamp();\n> > + TimestampTz endtime = 0;\n> > + int i;\n> >\n> > - if (params->log_min_duration == 0 ||\n> > - TimestampDifferenceExceeds(starttime, endtime,\n> > - params->log_min_duration))\n> > + if (params->log_min_duration >= 0)\n> > + endtime = GetCurrentTimestamp();\n> > +\n> > + if (endtime > 0 &&\n> > + (params->log_min_duration == 0 ||\n> > + TimestampDifferenceExceeds(starttime, endtime,\n>\n> Why is there any need to actually change this part? If I am following\n> the patch correctly, the reason why you are doing things this way is\n> to free the set of N statistics all the time for autovacuum. However,\n> we could make that much simpler, and your patch is already half-way\n> through that by adding the stats of the N indexes to LVRelStats. 
Here\n> is the idea:\n> - Allocation of the N items for IndexBulkDeleteResult at the beginning\n> of heap_vacuum_rel(). It seems to me that we are going to need the N\n> index names within LVRelStats, to be able to still call\n> vac_close_indexes() *before* logging the stats.\n> - No need to pass down indstats for the two callers of\n> lazy_vacuum_all_indexes().\n> - Clean up of vacrelstats once heap_vacuum_rel() is done with it.\n\nOkay, I've updated the patch accordingly. If we add\nIndexBulkDeleteResult to LVRelStats I think we can remove\nIndexBulkDeleteResult function argument also from some other functions\nsuch as lazy_parallel_vacuum_indexes() and vacuum_indexes_leader().\nPlease review the attached patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 18 Mar 2021 21:13:47 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Thu, Mar 18, 2021 at 9:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Mar 18, 2021 at 3:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Mar 17, 2021 at 08:50:26AM -0300, Euler Taveira wrote:\n> > > Since commit 5f8727f5a6, this patch doesn't apply anymore. Fortunately, it is\n> > > just a small hunk. I reviewed this patch and it looks good to me. 
There is just\n> > > a small issue (double space after 'if') that I fixed in the attached version.\n> >\n> > No major objections to what you are proposing here.\n> >\n> > > /* and log the action if appropriate */\n> > > - if (IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0)\n> > > + if (IsAutoVacuumWorkerProcess())\n> > > {\n> > > - TimestampTz endtime = GetCurrentTimestamp();\n> > > + TimestampTz endtime = 0;\n> > > + int i;\n> > >\n> > > - if (params->log_min_duration == 0 ||\n> > > - TimestampDifferenceExceeds(starttime, endtime,\n> > > - params->log_min_duration))\n> > > + if (params->log_min_duration >= 0)\n> > > + endtime = GetCurrentTimestamp();\n> > > +\n> > > + if (endtime > 0 &&\n> > > + (params->log_min_duration == 0 ||\n> > > + TimestampDifferenceExceeds(starttime, endtime,\n> >\n> > Why is there any need to actually change this part? If I am following\n> > the patch correctly, the reason why you are doing things this way is\n> > to free the set of N statistics all the time for autovacuum. However,\n> > we could make that much simpler, and your patch is already half-way\n> > through that by adding the stats of the N indexes to LVRelStats. Here\n> > is the idea:\n> > - Allocation of the N items for IndexBulkDeleteResult at the beginning\n> > of heap_vacuum_rel(). It seems to me that we are going to need the N\n> > index names within LVRelStats, to be able to still call\n> > vac_close_indexes() *before* logging the stats.\n> > - No need to pass down indstats for the two callers of\n> > lazy_vacuum_all_indexes().\n> > - Clean up of vacrelstats once heap_vacuum_rel() is done with it.\n>\n> Okay, I've updated the patch accordingly. If we add\n> IndexBulkDeleteResult to LVRelStats I think we can remove\n> IndexBulkDeleteResult function argument also from some other functions\n> such as lazy_parallel_vacuum_indexes() and vacuum_indexes_leader().\n> Please review the attached patch.\n\nSorry, I attached the wrong version patch. 
So attached the right one.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 18 Mar 2021 23:30:46 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Thu, Mar 18, 2021 at 5:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Okay, I've updated the patch accordingly. If we add\n> IndexBulkDeleteResult to LVRelStats I think we can remove\n> IndexBulkDeleteResult function argument also from some other functions\n> such as lazy_parallel_vacuum_indexes() and vacuum_indexes_leader().\n> Please review the attached patch.\n\nThat seems much clearer.\n\nWere you going to take care of this, Michael?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 18 Mar 2021 09:38:05 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Thu, Mar 18, 2021 at 09:38:05AM -0700, Peter Geoghegan wrote:\n> Were you going to take care of this, Michael?\n\nYes, I was waiting for Sawada-san to send an updated version, which he\njust did.\n--\nMichael", "msg_date": "Fri, 19 Mar 2021 06:08:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Thu, Mar 18, 2021 at 2:08 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Yes, I was waiting for Sawada-san to send an updated version, which he\n> just did.\n\nGreat. This really seems worth having. I was hoping that somebody else\ncould pick this one up.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 18 Mar 2021 15:08:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Thu, Mar 18, 2021 at 11:30:46PM +0900, Masahiko Sawada wrote:\n> Sorry, I attached the wrong version patch. 
So attached the right one.\n\nThanks. I have been hacking on that, and I think that we could do\nmore in terms of integration of the index stats into LVRelStats to\nhelp with debugging issues, mainly, but also to open the door at\nallowing autovacuum to use the parallel path in the future. Hence,\nfor consistency, I think that we should change the following things in\nLVRelStats:\n- Add the number of indexes. It looks rather unusual to not track\ndown the number of indexes directly in the structure anyway, as the\nstats array gets added there.\n- Add all the index names, for parallel and non-parallel mode.\n- Replace the index name in the error callback by an index number,\npointing back to its location in indstats and indnames.\n\nAs lazy_vacuum_index() requires the index number to be set internally\nto it, this means that we need to pass it down across\nvacuum_indexes_leader(), lazy_parallel_vacuum_indexes(), but that\nseems like an acceptable compromise to me for now. I think that it\nwould be good to tighten a bit more the relationship between the index\nstats in the DSM for the parallel case and the ones in local memory,\nbut what we have here looks enough to me so we could figure out that\nas a future step.\n\nSawada-san, what do you think? Attached is the patch I have finished\nwith.\n--\nMichael", "msg_date": "Fri, 19 Mar 2021 15:14:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Fri, Mar 19, 2021 at 3:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Mar 18, 2021 at 11:30:46PM +0900, Masahiko Sawada wrote:\n> > Sorry, I attached the wrong version patch. So attached the right one.\n>\n> Thanks. 
I have been hacking aon that, and I think that we could do\n> more in terms of integration of the index stats into LVRelStats to\n> help with debugging issues, mainly, but also to open the door at\n> allowing autovacuum to use the parallel path in the future.\n\nThank you for updating the patch!\n\n> Hence,\n> for consistency, I think that we should change the following things in\n> LVRelStats:\n> - Add the number of indexes. It looks rather unusual to not track\n> down the number of indexes directly in the structure anyway, as the\n> stats array gets added there.\n> - Add all the index names, for parallel and non-parallel mode.\n\nAgreed with those two changes.\n\n> - Replace the index name in the error callback by an index number,\n> pointing back to its location in indstats and indnames.\n\nI like this idea but I'm not sure the approach that the patch took\nimproved the code. Please read the below my concern.\n\n>\n> As lazy_vacuum_index() requires the index number to be set internally\n> to it, this means that we need to pass it down across\n> vacuum_indexes_leader(), lazy_parallel_vacuum_indexes(), but that\n> seems like an acceptable compromise to me for now. I think that it\n> would be good to tighten a bit more the relationship between the index\n> stats in the DSM for the parallel case and the ones in local memory,\n> but what we have here looks enough to me so we could figure out that\n> as a future step.\n>\n> Sawada-san, what do you think? 
Attached is the patch I have finished\n> with.\n\nWith this idea, vacuum_one_index() will become:\n\nstatic void\nlazy_vacuum_index(Relation indrel, IndexBulkDeleteResult **stats,\n LVDeadTuples *dead_tuples, double reltuples,\n LVRelStats *vacrelstats, int indnum)\n\nand the caller calls this function as follow:\n\n for (idx = 0; idx < nindexes; idx++)\n lazy_vacuum_index(Irel[idx], &(vacrelstats->indstats[idx]),\n vacrelstats->dead_tuples,\n vacrelstats->old_live_tuples, vacrelstats,\n idx);\n\nIt's not bad but it seems redundant a bit to me. We pass the idx in\nspite of passing also Irel[idx] and &(vacrelstats->indstats[idx]). I\nthink your first idea that is done in v4 patch (saving index names at\nthe beginning of heap_vacuum_rel() for autovacuum logging purpose\nonly) and the idea of deferring to close indexes until the end of\nheap_vacuum_rel() so that we can refer index name at autovacuum\nlogging are more simple.\n\nBTW the patch led to a crash in my environment. The problem is here:\n\n static void\n-vacuum_one_index(Relation indrel, IndexBulkDeleteResult **stats,\n+vacuum_one_index(Relation indrel,\n LVShared *lvshared, LVSharedIndStats *shared_indstats,\n- LVDeadTuples *dead_tuples, LVRelStats *vacrelstats)\n+ LVDeadTuples *dead_tuples, LVRelStats *vacrelstats,\n+ int indnum)\n {\n IndexBulkDeleteResult *bulkdelete_res = NULL;\n+ IndexBulkDeleteResult *stats;\n\nWe need to initialize *stats with NULL here.\n\nAnd while looking at the change of vacuum_one_index() I found another problem:\n\n@@ -2349,17 +2400,20 @@ vacuum_one_index(Relation indrel,\nIndexBulkDeleteResult **stats,\n * Update the pointer to the corresponding\nbulk-deletion result if\n * someone has already updated it.\n */\n- if (shared_indstats->updated && *stats == NULL)\n- *stats = bulkdelete_res;\n+ if (shared_indstats->updated)\n+ stats = bulkdelete_res;\n }\n+ else\n+ stats = vacrelstats->indstats[indnum];\n\n /* Do vacuum or cleanup of the index */\n if (lvshared->for_cleanup)\n- 
lazy_cleanup_index(indrel, stats, lvshared->reltuples,\n-\nlvshared->estimated_count, vacrelstats);\n+ lazy_cleanup_index(indrel, &stats, lvshared->reltuples,\n+\nlvshared->estimated_count, vacrelstats,\n+ indnum);\n else\n- lazy_vacuum_index(indrel, stats, dead_tuples,\n- lvshared->reltuples,\nvacelstats);\n+ lazy_vacuum_index(indrel, &stats, dead_tuples,\n+ lvshared->reltuples,\nvacrelstats, indnum);\n\n /*\n * Copy the index bulk-deletion result returned from ambulkdelete and\n\nIf shared_indstats is NULL (e.g., we do \" stats =\nvacrelstats->indstats[indnum];\"), vacrelstats->indstats[indnum] is not\nupdated since we pass &stats. I think we should pass\n&(vacrelstats->indstats[indnum]) instead in this case.\n\nPreviously, we update the element of the pointer array of index\nstatistics to the pointer pointing to either the local memory or DSM.\nBut with the above change, we do that only when the index statistics\nare in the local memory. In other words, vacrelstats->indstats[i] is\nnever updated if the corresponding index supports parallel indexes. I\nthink this is not relevant with the change that we'd like to do here\n(i.e., passing indnum down).\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Sat, 20 Mar 2021 13:06:51 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Sat, Mar 20, 2021 at 01:06:51PM +0900, Masahiko Sawada wrote:\n> It's not bad but it seems redundant a bit to me. We pass the idx in\n> spite of passing also Irel[idx] and &(vacrelstats->indstats[idx]). 
I\n> think your first idea that is done in v4 patch (saving index names at\n> the beginning of heap_vacuum_rel() for autovacuum logging purpose\n> only) and the idea of deferring to close indexes until the end of\n> heap_vacuum_rel() so that we can refer index name at autovacuum\n> logging are more simple.\n\nOkay.\n\n> We need to initialize *stats with NULL here.\n\nRight. I am wondering why I did not get any complain here.\n\n> If shared_indstats is NULL (e.g., we do \" stats =\n> vacrelstats->indstats[indnum];\"), vacrelstats->indstats[indnum] is not\n> updated since we pass &stats. I think we should pass\n> &(vacrelstats->indstats[indnum]) instead in this case.\n\nIf we get rid completely of this idea around indnum, that I don't\ndisagree with so let's keep just indname, you mean to keep the second\nargument IndexBulkDeleteResult of vacuum_one_index() and pass down\n&(vacrelstats->indstats[indnum]) as argument. No objections from me\nto just do that.\n\n> Previously, we update the element of the pointer array of index\n> statistics to the pointer pointing to either the local memory or DSM.\n> But with the above change, we do that only when the index statistics\n> are in the local memory. In other words, vacrelstats->indstats[i] is\n> never updated if the corresponding index supports parallel indexes. I\n> think this is not relevant with the change that we'd like to do here\n> (i.e., passing indnum down).\n\nYeah, that looks like just some over-engineering design on my side.\nWould you like to update the patch with what you think is most\nadapted?\n--\nMichael", "msg_date": "Sat, 20 Mar 2021 15:40:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Sat, Mar 20, 2021 at 3:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Mar 20, 2021 at 01:06:51PM +0900, Masahiko Sawada wrote:\n> > It's not bad but it seems redundant a bit to me. 
We pass the idx in\n> > spite of passing also Irel[idx] and &(vacrelstats->indstats[idx]). I\n> > think your first idea that is done in v4 patch (saving index names at\n> > the beginning of heap_vacuum_rel() for autovacuum logging purpose\n> > only) and the idea of deferring to close indexes until the end of\n> > heap_vacuum_rel() so that we can refer index name at autovacuum\n> > logging are more simple.\n>\n> Okay.\n>\n> > We need to initialize *stats with NULL here.\n>\n> Right. I am wondering why I did not get any complain here.\n>\n> > If shared_indstats is NULL (e.g., we do \" stats =\n> > vacrelstats->indstats[indnum];\"), vacrelstats->indstats[indnum] is not\n> > updated since we pass &stats. I think we should pass\n> > &(vacrelstats->indstats[indnum]) instead in this case.\n>\n> If we get rid completely of this idea around indnum, that I don't\n> disagree with so let's keep just indname, you mean to keep the second\n> argument IndexBulkDeleteResult of vacuum_one_index() and pass down\n> &(vacrelstats->indstats[indnum]) as argument. No objections from me\n> to just do that.\n>\n> > Previously, we update the element of the pointer array of index\n> > statistics to the pointer pointing to either the local memory or DSM.\n> > But with the above change, we do that only when the index statistics\n> > are in the local memory. In other words, vacrelstats->indstats[i] is\n> > never updated if the corresponding index supports parallel indexes. I\n> > think this is not relevant with the change that we'd like to do here\n> > (i.e., passing indnum down).\n>\n> Yeah, that looks like just some over-engineering design on my side.\n> Would you like to update the patch with what you think is most\n> adapted?\n\nI've updated the patch. I saved the index names at the beginning of\nheap_vacuum_rel() for autovacuum logging, and add indstats and\nnindexes to LVRelStats. 
Some functions still have nindexes as a\nfunction argument but it seems to make sense since it corresponds the\nlist of index relations (*Irel). Please review the patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 22 Mar 2021 12:17:37 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Mon, Mar 22, 2021 at 12:17:37PM +0900, Masahiko Sawada wrote:\n> I've updated the patch. I saved the index names at the beginning of\n> heap_vacuum_rel() for autovacuum logging, and add indstats and\n> nindexes to LVRelStats. Some functions still have nindexes as a\n> function argument but it seems to make sense since it corresponds the\n> list of index relations (*Irel). Please review the patch.\n\nGoing back to that, the structure of the static APIs in this file make\nthe whole logic a bit hard to follow, but the whole set of changes you\nhave done here makes sense. It took me a moment to recall and\nunderstand why it is safe to free *stats at the end of\nvacuum_one_index() and if the index stats array actually pointed to\nthe DSM segment correctly within the shared stats.\n\nI think that there is more consolidation possible within LVRelStats,\nbut let's leave that for another day, if there is any need for such a\nmove.\n\nTo keep it short. Sold.\n--\nMichael", "msg_date": "Tue, 23 Mar 2021 13:31:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" }, { "msg_contents": "On Tue, Mar 23, 2021 at 1:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Mar 22, 2021 at 12:17:37PM +0900, Masahiko Sawada wrote:\n> > I've updated the patch. I saved the index names at the beginning of\n> > heap_vacuum_rel() for autovacuum logging, and add indstats and\n> > nindexes to LVRelStats. 
Some functions still have nindexes as a\n> > function argument but it seems to make sense since it corresponds the\n> > list of index relations (*Irel). Please review the patch.\n>\n> Going back to that, the structure of the static APIs in this file make\n> the whole logic a bit hard to follow, but the whole set of changes you\n> have done here makes sense. It took me a moment to recall and\n> understand why it is safe to free *stats at the end of\n> vacuum_one_index() and if the index stats array actually pointed to\n> the DSM segment correctly within the shared stats.\n>\n> I think that there is more consolidation possible within LVRelStats,\n> but let's leave that for another day, if there is any need for such a\n> move.\n\nWhile studying your patch (v5-index_stat_log.patch) I found we can\npolish the parallel vacuum code in some places. I'll try it another\nday.\n\n>\n> To keep it short. Sold.\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 23 Mar 2021 14:41:58 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: a verbose option for autovacuum" } ]
[ { "msg_contents": "Hi\n\njsonb with subscripting support can be used as a dictionary object in\nplpgsql.\n\nCan be nice to have support for iteration over a set of tuples (key,\nvalue).\n\nSome like\n\nFOREACH fieldvar [ KEY keyvar] IN DICTIONARY sourceexpr [VALUE searchexpr]\nLOOP\nEND LOOP;\n\nand for JSON arrays\n\nFOREACH var IN ARRAY jsonval\nLOOP\nEND LOOP\n\nExample:\n\ndict jsonb DEFAULT '{\"a\", \"a1\", \"b\", \"b1\"}\nv text; k text;\nj jsonb;\nBEGIN\n FOREACH v KEY k IN DICTIONARY dict\n LOOP\n RAISE NOTICE '%=>%', k, v; -- a=>a1\\nb=>b1\n END LOOP;\n --\n FOREACH j IN DICTIONARY dict\n LOOP\n RAISE NOTICE '%', j; -- {\"a\":\"a1\"}\\n{\"b\":\"b1\"}\n END LOOP;\n\nThe goal is to support fast iteration over some non atomic objects\ndifferent from arrays.\n\nMaybe some background of XMLTABLE and JSON_TABLE functions can be used\nthere.\n\nComments, notes?\n\nRegards\n\nPavel", "msg_date": "Sat, 23 Jan 2021 07:46:01 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal - idea - enhancing plpgsql FOREACH for JSON,\n jsonb and hstore" }, { "msg_contents": "On Sat, Jan 23, 2021 at 07:46:01AM +0100, Pavel Stehule wrote:\n> Hi\n> \n> jsonb with subscripting support can be used as a dictionary object in\n> plpgsql.\n> \n> Can be nice to have support for iteration over a set of tuples (key,\n> value).\n> \n> Some like\n> \n> FOREACH fieldvar [ KEY keyvar] IN DICTIONARY sourceexpr [VALUE searchexpr]\n> LOOP\n> END LOOP;\n> \n> and for JSON arrays\n> \n> FOREACH var IN ARRAY jsonval\n> LOOP\n> END LOOP\n> [...]\n> \n> The goal is to support fast iteration over some non atomic objects\n> different from arrays.\n\n+1, it seems like something useful to have.\n\n\n", "msg_date": "Sat, 23 Jan 2021 14:52:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - idea - enhancing plpgsql FOREACH for JSON, jsonb and\n hstore" }, { "msg_contents": "Greetings,\n\n* Pavel Stehule (pavel.stehule@gmail.com) wrote:\n> jsonb with subscripting support can be used as a dictionary object in\n> plpgsql.\n> \n> Can be nice to have support for iteration over a set of tuples (key,\n> value).\n\nYes, I agree that this would be 
useful.\n\n> FOREACH fieldvar [ KEY keyvar] IN DICTIONARY sourceexpr [VALUE searchexpr]\n> LOOP\n> END LOOP;\n\nShould we be thinking about using sql/json path for what to search\nfor instead of just fieldvar/keyvar..? Or perhaps support both..\n\n> and for JSON arrays\n> \n> FOREACH var IN ARRAY jsonval\n> LOOP\n> END LOOP\n\nPresumably we'd also support SLICE with this?\n\nAlso, I wonder about having a way to FOREACH through all objects,\nreturning top-level ones, which a user could then call jsonb_typeof on\nand then recurse if an object is found, allowing an entire jsonb tree to\nbe processed this way.\n\nThanks,\n\nStephen", "msg_date": "Sat, 23 Jan 2021 13:21:28 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: proposal - idea - enhancing plpgsql FOREACH for JSON, jsonb and\n hstore" }, { "msg_contents": "so 23. 1. 2021 v 19:21 odesílatel Stephen Frost <sfrost@snowman.net> napsal:\n\n> Greetings,\n>\n> * Pavel Stehule (pavel.stehule@gmail.com) wrote:\n> > jsonb with subscripting support can be used as a dictionary object in\n> > plpgsql.\n> >\n> > Can be nice to have support for iteration over a set of tuples (key,\n> > value).\n>\n> Yes, I agree that this would be useful.\n>\n> > FOREACH fieldvar [ KEY keyvar] IN DICTIONARY sourceexpr [VALUE\n> searchexpr]\n> > LOOP\n> > END LOOP;\n>\n> Should we be thinking about using sql/json path for what to search\n> for instead of just fieldvar/keyvar..? Or perhaps support both..\n>\n\nI would support both. JSONPath can be specified by a special clause - I\nused the keyword VALUE (but can be different).\n\nMy primary inspiration and motivation is the possibility to use jsonb as a\ncollection or dictionary in other languages. But if we implement some\n\"iterators\", then enhancing to support XMLPath or JSONPath is natural. 
The\ninterface should not be too complex like specialized functions XMLTABLE or\nJSON_TABLE, but simple task should be much faster with FOREACH statement,\nbecause there is not an overhead of SQL or SPI.\n\n\n> > and for JSON arrays\n> >\n> > FOREACH var IN ARRAY jsonval\n> > LOOP\n> > END LOOP\n>\n> Presumably we'd also support SLICE with this?\n>\n\nif we find good semantics, then why not?\n\n>\n> Also, I wonder about having a way to FOREACH through all objects,\n> returning top-level ones, which a user could then call jsonb_typeof on\n> and then recurse if an object is found, allowing an entire jsonb tree to\n> be processed this way.\n>\n\nProbably this should be possible via JSONPath iteration.\n\nWe need similar interface like nodeTableFuncscan.c\n\n\n\n> Thanks,\n>\n> Stephen\n>", "msg_date": "Sat, 23 Jan 2021 19:50:49 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - idea - enhancing plpgsql FOREACH for JSON,\n jsonb and hstore" } ]
[ { "msg_contents": "I got annoyed about the lack of $SUBJECT. The attached patch\nadds a simple test case, bringing the module's coverage to 84%\naccording to my results. (The uncovered lines mostly are in\n_PG_fini(), which is unreachable, or else to do with chaining\nto additional occupiers of the same hooks.)\n\nAny objections?\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 23 Jan 2021 17:41:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Test coverage for contrib/auto_explain" } ]
[ { "msg_contents": "Hi,\n\nWhile working on pg14 compatibility for an extension relying on an apparently\nuncommon combination of FOR UPDATE and stored function calls, I hit some new\nAsserts introduced in 866e24d47db (Extend amcheck to check heap pages):\n\n+\t/*\n+\t * Do not allow tuples with invalid combinations of hint bits to be placed\n+\t * on a page. These combinations are detected as corruption by the\n+\t * contrib/amcheck logic, so if you disable one or both of these\n+\t * assertions, make corresponding changes there.\n+\t */\n+\tAssert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n+\t\t\t (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)));\n\n\nI attach a simple self contained script to reproduce the problem, the last\nUPDATE triggering the Assert.\n\nI'm not really familiar with this part of the code, so it's not exactly clear\nto me if some logic is missing in compute_new_xmax_infomask() /\nheap_prepare_insert(), or if this should actually be an allowed combination of\nhint bit.", "msg_date": "Sun, 24 Jan 2021 14:17:58 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit combination" }, { "msg_contents": "On 2021-Jan-24, Julien Rouhaud wrote:\n\n> +\t/*\n> +\t * Do not allow tuples with invalid combinations of hint bits to be placed\n> +\t * on a page. 
These combinations are detected as corruption by the\n> +\t * contrib/amcheck logic, so if you disable one or both of these\n> +\t * assertions, make corresponding changes there.\n> +\t */\n> +\tAssert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n> +\t\t\t (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)));\n> \n> \n> I attach a simple self contained script to reproduce the problem, the last\n> UPDATE triggering the Assert.\n> \n> I'm not really familiar with this part of the code, so it's not exactly clear\n> to me if some logic is missing in compute_new_xmax_infomask() /\n> heap_prepare_insert(), or if this should actually be an allowed combination of\n> hint bit.\n\nHmm, it's probably a bug in compute_new_xmax_infomask. I don't think\nthe combination is sensible.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"There is evil in the world. There are dark, awful things. Occasionally, we get\na glimpse of them. But there are dark corners; horrors almost impossible to\nimagine... even in our worst nightmares.\" (Van Helsing, Dracula A.D. 1972)\n\n\n", "msg_date": "Sun, 24 Jan 2021 13:01:33 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On Sun, 24 Jan 2021 at 11:48, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> While working on pg14 compatibility for an extension relying on an\napparently\n> uncommon combination of FOR UPDATE and stored function calls, I hit some\nnew\n> Asserts introduced in 866e24d47db (Extend amcheck to check heap pages):\n>\n> + /*\n> + * Do not allow tuples with invalid combinations of hint bits to\nbe placed\n> + * on a page. 
These combinations are detected as corruption by\nthe\n> + * contrib/amcheck logic, so if you disable one or both of these\n> + * assertions, make corresponding changes there.\n> + */\n> + Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n> + (tuple->t_data->t_infomask2 &\nHEAP_KEYS_UPDATED)));\n>\n>\n> I attach a simple self contained script to reproduce the problem, the last\n> UPDATE triggering the Assert.\n>\n> I'm not really familiar with this part of the code, so it's not exactly\nclear\n> to me if some logic is missing in compute_new_xmax_infomask() /\n> heap_prepare_insert(), or if this should actually be an allowed\ncombination of\n> hint bit.\n\nThanks Juliean for reporting this. I am also able to reproduce this assert.\n\n*Small test case to reproduce:*\n\n> DROP TABLE IF EXISTS t1;\n> CREATE TABLE t1(id integer, val text);\n> INSERT INTO t1 SELECT i, 'val' FROM generate_series(1, 2) i;\n>\n> BEGIN;\n> SAVEPOINT s1;\n> SELECT 1 FROM t1 WHERE id = 2 FOR UPDATE;\n> UPDATE t1 SET val = 'hoho' WHERE id = 2;\n> release s1;\n> SELECT 1 FROM t1 WHERE id = 2 FOR UPDATE;\n> UPDATE t1 SET val = 'hoho' WHERE id = 2;\n>\n\nIf we remove the \"release s1;\" step from the test case, then we are not\ngetting this assert failure.\n\n*Stack trace:*\n\n> warning: Unexpected size of section `.reg-xstate/123318' in core file.\n> [Thread debugging using libthread_db enabled]\n> Using host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n> Core was generated by `postgres: mahendrathalor postgres [local] UPDATE\n> '.\n> Program terminated with signal SIGABRT, Aborted.\n>\n> warning: Unexpected size of section `.reg-xstate/123318' in core file.\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51\n> 51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n> (gdb) bt\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51\n> #1 0x00007fb50a7c88b1 in __GI_abort () at abort.c:79\n> #2 
0x00005612a63f7c84 in ExceptionalCondition (\n> conditionName=0x5612a64da470 \"!((tuple->t_data->t_infomask &\n> HEAP_XMAX_LOCK_ONLY) && (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED))\",\n> errorType=0x5612a64da426 \"FailedAssertion\", fileName=0x5612a64da420\n> \"hio.c\", lineNumber=57) at assert.c:69\n> #3 0x00005612a597e76b in RelationPutHeapTuple (relation=0x7fb50ce37fc0,\n> buffer=163, tuple=0x5612a795de18, token=false) at hio.c:56\n> #4 0x00005612a5955d32 in heap_update (relation=0x7fb50ce37fc0,\n> otid=0x7ffc8b5e30d2, newtup=0x5612a795de18, cid=0, crosscheck=0x0,\n> wait=true, tmfd=0x7ffc8b5e3060,\n> lockmode=0x7ffc8b5e3028) at heapam.c:3791\n> #5 0x00005612a596ebdc in heapam_tuple_update (relation=0x7fb50ce37fc0,\n> otid=0x7ffc8b5e30d2, slot=0x5612a794d348, cid=3, snapshot=0x5612a793d620,\n> crosscheck=0x0, wait=true,\n> tmfd=0x7ffc8b5e3060, lockmode=0x7ffc8b5e3028,\n> update_indexes=0x7ffc8b5e3025) at heapam_handler.c:327\n> #6 0x00005612a5da745d in table_tuple_update (rel=0x7fb50ce37fc0,\n> otid=0x7ffc8b5e30d2, slot=0x5612a794d348, cid=3, snapshot=0x5612a793d620,\n> crosscheck=0x0, wait=true,\n> tmfd=0x7ffc8b5e3060, lockmode=0x7ffc8b5e3028,\n> update_indexes=0x7ffc8b5e3025) at ../../../src/include/access/tableam.h:1422\n> #7 0x00005612a5dab6ef in ExecUpdate (mtstate=0x5612a794bc20,\n> resultRelInfo=0x5612a794be58, tupleid=0x7ffc8b5e30d2, oldtuple=0x0,\n> slot=0x5612a794d348, planSlot=0x5612a794d1f8,\n> epqstate=0x5612a794bd18, estate=0x5612a794b9b0, canSetTag=true) at\n> nodeModifyTable.c:1498\n> #8 0x00005612a5dadb17 in ExecModifyTable (pstate=0x5612a794bc20) at\n> nodeModifyTable.c:2254\n> #9 0x00005612a5d4fdc5 in ExecProcNodeFirst (node=0x5612a794bc20) at\n> execProcnode.c:450\n> #10 0x00005612a5d3bd3a in ExecProcNode (node=0x5612a794bc20) at\n> ../../../src/include/executor/executor.h:247\n> #11 0x00005612a5d40764 in ExecutePlan (estate=0x5612a794b9b0,\n> planstate=0x5612a794bc20, use_parallel_mode=false, operation=CMD_UPDATE,\n> sendTuples=false, 
numberTuples=0,\n> direction=ForwardScanDirection, dest=0x5612a7946d38,\n> execute_once=true) at execMain.c:1542\n> #12 0x00005612a5d3c8e2 in standard_ExecutorRun (queryDesc=0x5612a79468c0,\n> direction=ForwardScanDirection, count=0, execute_once=true) at\n> execMain.c:364\n> #13 0x00005612a5d3c5aa in ExecutorRun (queryDesc=0x5612a79468c0,\n> direction=ForwardScanDirection, count=0, execute_once=true) at\n> execMain.c:308\n> #14 0x00005612a612d78a in ProcessQuery (plan=0x5612a7946c48,\n> sourceText=0x5612a787b570 \"UPDATE t1 SET val = 'hoho' WHERE id = 2;\",\n> params=0x0, queryEnv=0x0,\n> dest=0x5612a7946d38, qc=0x7ffc8b5e3570) at pquery.c:160\n> #15 0x00005612a61306f6 in PortalRunMulti (portal=0x5612a78dd5f0,\n> isTopLevel=true, setHoldSnapshot=false, dest=0x5612a7946d38,\n> altdest=0x5612a7946d38, qc=0x7ffc8b5e3570)\n> at pquery.c:1267\n> #16 0x00005612a612f256 in PortalRun (portal=0x5612a78dd5f0,\n> count=9223372036854775807, isTopLevel=true, run_once=true,\n> dest=0x5612a7946d38, altdest=0x5612a7946d38,\n> qc=0x7ffc8b5e3570) at pquery.c:779\n> #17 0x00005612a612266f in exec_simple_query (query_string=0x5612a787b570\n> \"UPDATE t1 SET val = 'hoho' WHERE id = 2;\") at postgres.c:1240\n> #18 0x00005612a612b8dd in PostgresMain (argc=1, argv=0x7ffc8b5e3790,\n> dbname=0x5612a78a74f0 \"postgres\", username=0x5612a78a74c8 \"mahendrathalor\")\n> at postgres.c:4394\n> #19 0x00005612a5fd5bf0 in BackendRun (port=0x5612a789ec00) at\n> postmaster.c:4484\n> #20 0x00005612a5fd4f46 in BackendStartup (port=0x5612a789ec00) at\n> postmaster.c:4206\n> #21 0x00005612a5fcd301 in ServerLoop () at postmaster.c:1730\n> #22 0x00005612a5fcc4fe in PostmasterMain (argc=5, argv=0x5612a7873e70) at\n> postmaster.c:1402\n> #23 0x00005612a5e16d9a in main (argc=5, argv=0x5612a7873e70) at main.c:209\n>\n\n\nI am also trying to understand infomask and all.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 24 Jan 2021 21:36:43 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On Mon, Jan 25, 2021 at 12:01 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jan-24, Julien Rouhaud wrote:\n>\n> > + /*\n> > + * Do not allow tuples with invalid combinations of hint bits to be placed\n> > + * on a page. 
These combinations are detected as corruption by the\n> > + * contrib/amcheck logic, so if you disable one or both of these\n> > + * assertions, make corresponding changes there.\n> > + */\n> > + Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n> > + (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)));\n> >\n> >\n> > I attach a simple self contained script to reproduce the problem, the last\n> > UPDATE triggering the Assert.\n> >\n> > I'm not really familiar with this part of the code, so it's not exactly clear\n> > to me if some logic is missing in compute_new_xmax_infomask() /\n> > heap_prepare_insert(), or if this should actually be an allowed combination of\n> > hint bit.\n>\n> Hmm, it's probably a bug in compute_new_xmax_infomask. I don't think\n> the combination is sensible.\n\nYeah, the combination clearly doesn't make sense, but I'm wondering\nwhat to do about existing data? Amcheck.verify_am will report\ncorruption for those, and at least all servers where powa-archivist\nextension is installed will be impacted.\n\n\n", "msg_date": "Mon, 25 Jan 2021 01:04:13 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On Mon, Jan 25, 2021 at 12:06 AM Mahendra Singh Thalor\n<mahi6run@gmail.com> wrote:\n>\n> On Sun, 24 Jan 2021 at 11:48, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I'm not really familiar with this part of the code, so it's not exactly clear\n> > to me if some logic is missing in compute_new_xmax_infomask() /\n> > heap_prepare_insert(), or if this should actually be an allowed combination of\n> > hint bit.\n>\n> Thanks Juliean for reporting this. 
I am also able to reproduce this assert.\n\nThanks for looking at it!\n>\n> Small test case to reproduce:\n>>\n>> DROP TABLE IF EXISTS t1;\n>> CREATE TABLE t1(id integer, val text);\n>> INSERT INTO t1 SELECT i, 'val' FROM generate_series(1, 2) i;\n>>\n>> BEGIN;\n>> SAVEPOINT s1;\n>> SELECT 1 FROM t1 WHERE id = 2 FOR UPDATE;\n>> UPDATE t1 SET val = 'hoho' WHERE id = 2;\n>> release s1;\n>> SELECT 1 FROM t1 WHERE id = 2 FOR UPDATE;\n>> UPDATE t1 SET val = 'hoho' WHERE id = 2;\n>\n>\n> If we remove the \"release s1;\" step from the test case, then we are not getting this assert failure.\n\nYes, this is the smallest reproducer that could trigger the problem,\nand the release is required.\n\n\n", "msg_date": "Mon, 25 Jan 2021 01:05:40 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On Sun, Jan 24, 2021 at 9:31 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jan-24, Julien Rouhaud wrote:\n>\n> > + /*\n> > + * Do not allow tuples with invalid combinations of hint bits to be placed\n> > + * on a page. These combinations are detected as corruption by the\n> > + * contrib/amcheck logic, so if you disable one or both of these\n> > + * assertions, make corresponding changes there.\n> > + */\n> > + Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n> > + (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)));\n> >\n> >\n> > I attach a simple self contained script to reproduce the problem, the last\n> > UPDATE triggering the Assert.\n> >\n> > I'm not really familiar with this part of the code, so it's not exactly clear\n> > to me if some logic is missing in compute_new_xmax_infomask() /\n> > heap_prepare_insert(), or if this should actually be an allowed combination of\n> > hint bit.\n>\n> Hmm, it's probably a bug in compute_new_xmax_infomask. 
I don't think\n> the combination is sensible.\n>\n\nIf we see the logic of GetMultiXactIdHintBits then it appeared that we\ncan get this combination in the case of multi-xact.\n\nswitch (members[i].status)\n{\n...\n case MultiXactStatusForUpdate:\n bits2 |= HEAP_KEYS_UPDATED;\n break;\n}\n\n....\nif (!has_update)\nbits |= HEAP_XMAX_LOCK_ONLY;\n\nBasically, if it is \"select for update\" then we will mark infomask2 as\nHEAP_KEYS_UPDATED and the infomask as HEAP_XMAX_LOCK_ONLY.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Feb 2021 11:34:57 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On Mon, Feb 1, 2021 at 2:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sun, Jan 24, 2021 at 9:31 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Jan-24, Julien Rouhaud wrote:\n> >\n> > > + /*\n> > > + * Do not allow tuples with invalid combinations of hint bits to be placed\n> > > + * on a page. 
I don't think\n> > the combination is sensible.\n> >\n>\n> If we see the logic of GetMultiXactIdHintBits then it appeared that we\n> can get this combination in the case of multi-xact.\n>\n> switch (members[i].status)\n> {\n> ...\n> case MultiXactStatusForUpdate:\n> bits2 |= HEAP_KEYS_UPDATED;\n> break;\n> }\n>\n> ....\n> if (!has_update)\n> bits |= HEAP_XMAX_LOCK_ONLY;\n>\n> Basically, if it is \"select for update\" then we will mark infomask2 as\n> HEAP_KEYS_UPDATED and the informask as HEAP_XMAX_LOCK_ONLY.\n\nYes I saw that too, I don't know if the MultiXactStatusForUpdate case\nis ok or not.\n\nNote that this hint bit can get cleaned later in heap_update in case\nof hot_update or if there's TOAST:\n\n/*\n* To prevent concurrent sessions from updating the tuple, we have to\n* temporarily mark it locked, while we release the page-level lock.\n[...]\n/* Clear obsolete visibility flags ... */\noldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);\noldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;\n\n\n", "msg_date": "Mon, 1 Feb 2021 18:35:08 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On Mon, Feb 1, 2021 at 4:05 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Feb 1, 2021 at 2:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Sun, Jan 24, 2021 at 9:31 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > On 2021-Jan-24, Julien Rouhaud wrote:\n> > >\n> > > > + /*\n> > > > + * Do not allow tuples with invalid combinations of hint bits to be placed\n> > > > + * on a page. 
These combinations are detected as corruption by the\n> > > > + * contrib/amcheck logic, so if you disable one or both of these\n> > > > + * assertions, make corresponding changes there.\n> > > > + */\n> > > > + Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n> > > > + (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)));\n> > > >\n> > > >\n> > > > I attach a simple self contained script to reproduce the problem, the last\n> > > > UPDATE triggering the Assert.\n> > > >\n> > > > I'm not really familiar with this part of the code, so it's not exactly clear\n> > > > to me if some logic is missing in compute_new_xmax_infomask() /\n> > > > heap_prepare_insert(), or if this should actually be an allowed combination of\n> > > > hint bit.\n> > >\n> > > Hmm, it's probably a bug in compute_new_xmax_infomask. I don't think\n> > > the combination is sensible.\n> > >\n> >\n> > If we see the logic of GetMultiXactIdHintBits then it appeared that we\n> > can get this combination in the case of multi-xact.\n> >\n> > switch (members[i].status)\n> > {\n> > ...\n> > case MultiXactStatusForUpdate:\n> > bits2 |= HEAP_KEYS_UPDATED;\n> > break;\n> > }\n> >\n> > ....\n> > if (!has_update)\n> > bits |= HEAP_XMAX_LOCK_ONLY;\n> >\n> > Basically, if it is \"select for update\" then we will mark infomask2 as\n> > HEAP_KEYS_UPDATED and the informask as HEAP_XMAX_LOCK_ONLY.\n>\n> Yes I saw that too, I don't know if the MultiXactStatusForUpdate case\n> is ok or not.\n\nIt seems it is done intentionally to handle some case, I am not sure\nwhich case though. 
But Setting HEAP_KEYS_UPDATED in case of \"for\nupdate\" seems wrong.\nThe comment of this flag clearly says that \"tuple was updated and key\ncols modified, or tuple deleted \" and that is obviously not the case\nhere.\n\n#define HEAP_KEYS_UPDATED 0x2000 /* tuple was updated and key cols\n* modified, or tuple deleted */\n\n\n> Note that this hint bit can get cleaned later in heap_update in case\n> of hot_update or if there's TOAST:\n>\n> /*\n> * To prevent concurrent sessions from updating the tuple, we have to\n> * temporarily mark it locked, while we release the page-level lock.\n> [...]\n> /* Clear obsolete visibility flags ... */\n> oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);\n> oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;\n\nI see.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Feb 2021 16:22:34 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On 2021-Jan-24, Julien Rouhaud wrote:\n\n> While working on pg14 compatibility for an extension relying on an apparently\n> uncommon combination of FOR UPDATE and stored function calls, I hit some new\n> Asserts introduced in 866e24d47db (Extend amcheck to check heap pages):\n> \n> +\t/*\n> +\t * Do not allow tuples with invalid combinations of hint bits to be placed\n> +\t * on a page. These combinations are detected as corruption by the\n> +\t * contrib/amcheck logic, so if you disable one or both of these\n> +\t * assertions, make corresponding changes there.\n> +\t */\n> +\tAssert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n> +\t\t\t (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)));\n> \n> \n> I attach a simple self contained script to reproduce the problem, the last\n> UPDATE triggering the Assert.\n\nMaybe we should contest the idea that this is a sensible thing to Assert\nagainst. 
AFAICS this was originally suggested here:\nhttps://www.postgresql.org/message-id/flat/CAFiTN-syyHc3jZoou51v0SR8z0POoNfktqEO6MaGig4YS8mosA%40mail.gmail.com#ad215d0ee0606b5f67bbc57d011c96b8\nand it appears now to have been a bad idea. If I recall correctly,\nHEAP_KEYS_UPDATED is supposed to distinguish locks/updates that don't\nmodify the key columns from those that do. Since SELECT FOR UPDATE\nstands for a future update that may modify arbitrary portions of the\ntuple (including \"key\" columns), then it produces that bit, just as said\nUPDATE or a DELETE; as opposed to SELECT FOR NO KEY UPDATE which stands\nfor a future UPDATE that will only change columns that aren't part of\nany keys.\n\nSo I think that I misspoke earlier in this thread when I said this is a\nbug, and that the right fix here is to remove the Assert() and change\namcheck to match.\n\nSeparately, maybe it'd also be good to have a test case based on\nJulien's SQL snippet that produces this particular infomask combination\n(and other interesting combinations) and passes them through VACUUM etc\nto see that everything behaves correctly.\n\nYou could also argue the HEAP_KEYS_UPDATED is a misnomer and that we'd\ndo well to change its name, and update README.tuplock.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. 
(Don Knuth)\n\n\n", "msg_date": "Mon, 1 Feb 2021 14:00:48 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On Mon, Feb 01, 2021 at 02:00:48PM -0300, Alvaro Herrera wrote:\n> On 2021-Jan-24, Julien Rouhaud wrote:\n> \n> > While working on pg14 compatibility for an extension relying on an apparently\n> > uncommon combination of FOR UPDATE and stored function calls, I hit some new\n> > Asserts introduced in 866e24d47db (Extend amcheck to check heap pages):\n> > \n> > +\t/*\n> > +\t * Do not allow tuples with invalid combinations of hint bits to be placed\n> > +\t * on a page. These combinations are detected as corruption by the\n> > +\t * contrib/amcheck logic, so if you disable one or both of these\n> > +\t * assertions, make corresponding changes there.\n> > +\t */\n> > +\tAssert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n> > +\t\t\t (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)));\n> > \n> > \n> > I attach a simple self contained script to reproduce the problem, the last\n> > UPDATE triggering the Assert.\n> \n> Maybe we should contest the idea that this is a sensible thing to Assert\n> against. AFAICS this was originally suggested here:\n> https://www.postgresql.org/message-id/flat/CAFiTN-syyHc3jZoou51v0SR8z0POoNfktqEO6MaGig4YS8mosA%40mail.gmail.com#ad215d0ee0606b5f67bbc57d011c96b8\n> and it appears now to have been a bad idea. If I recall correctly,\n> HEAP_KEYS_UPDATED is supposed to distinguish locks/updates that don't\n> modify the key columns from those that do. 
Since SELECT FOR UPDATE\n> stands for a future update that may modify arbitrary portions of the\n> tuple (including \"key\" columns), then it produces that bit, just as said\n> UPDATE or a DELETE; as opposed to SELECT FOR NO KEY UPDATE which stands\n> for a future UPDATE that will only change columns that aren't part of\n> any keys.\n\nThanks for the clarification, that makes sense.\n\n> So I think that I misspoke earlier in this thread when I said this is a\n> bug, and that the right fix here is to remove the Assert() and change\n> amcheck to match.\n\nI'm attaching a patch to do so.\n\n> Separately, maybe it'd also be good to have a test case based on\n> Julien's SQL snippet that produces this particular infomask combination\n> (and other interesting combinations) and passes them through VACUUM etc\n> to see that everything behaves correctly.\n\nI also updated amcheck perl regression tests to generate such a combination,\nadd added an additional pass of verify_heapam() just after the VACUUM.\n\n> \n> You could also argue the HEAP_KEYS_UPDATED is a misnomer and that we'd\n> do well to change its name, and update README.tuplock.\n\nChanging the name may be overkill, but claryfing the hint bit usage in\nREADME.tuplock would definitely be useful, especially since the combination\nisn't always produced. How about adding something like:\n\n HEAP_KEYS_UPDATED\n This bit lives in t_infomask2. 
If set, indicates that the XMAX updated\n this tuple and changed the key values, or it deleted the tuple.\n+ It can also be set in combination of HEAP_XMAX_LOCK_ONLY.\n It's set regardless of whether the XMAX is a TransactionId or a MultiXactId.", "msg_date": "Tue, 2 Feb 2021 12:21:33 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On Mon, Feb 1, 2021 at 10:31 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jan-24, Julien Rouhaud wrote:\n>\n> > While working on pg14 compatibility for an extension relying on an apparently\n> > uncommon combination of FOR UPDATE and stored function calls, I hit some new\n> > Asserts introduced in 866e24d47db (Extend amcheck to check heap pages):\n> >\n> > + /*\n> > + * Do not allow tuples with invalid combinations of hint bits to be placed\n> > + * on a page. These combinations are detected as corruption by the\n> > + * contrib/amcheck logic, so if you disable one or both of these\n> > + * assertions, make corresponding changes there.\n> > + */\n> > + Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n> > + (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)));\n> >\n> >\n> > I attach a simple self contained script to reproduce the problem, the last\n> > UPDATE triggering the Assert.\n>\n> Maybe we should contest the idea that this is a sensible thing to Assert\n> against. AFAICS this was originally suggested here:\n> https://www.postgresql.org/message-id/flat/CAFiTN-syyHc3jZoou51v0SR8z0POoNfktqEO6MaGig4YS8mosA%40mail.gmail.com#ad215d0ee0606b5f67bbc57d011c96b8\n> and it appears now to have been a bad idea.\n\nI see, I suggested that :)\n\n If I recall correctly,\n> HEAP_KEYS_UPDATED is supposed to distinguish locks/updates that don't\n> modify the key columns from those that do. 
Since SELECT FOR UPDATE\n> stands for a future update that may modify arbitrary portions of the\n> tuple (including \"key\" columns), then it produces that bit, just as said\n> UPDATE or a DELETE; as opposed to SELECT FOR NO KEY UPDATE which stands\n> for a future UPDATE that will only change columns that aren't part of\n> any keys.\n\nYeah, that makes sense.\n\n> So I think that I misspoke earlier in this thread when I said this is a\n> bug, and that the right fix here is to remove the Assert() and change\n> amcheck to match.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Feb 2021 10:22:38 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On Tue, 2 Feb 2021 at 09:51, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Feb 01, 2021 at 02:00:48PM -0300, Alvaro Herrera wrote:\n> > On 2021-Jan-24, Julien Rouhaud wrote:\n> >\n> > > While working on pg14 compatibility for an extension relying on an apparently\n> > > uncommon combination of FOR UPDATE and stored function calls, I hit some new\n> > > Asserts introduced in 866e24d47db (Extend amcheck to check heap pages):\n> > >\n> > > + /*\n> > > + * Do not allow tuples with invalid combinations of hint bits to be placed\n> > > + * on a page. These combinations are detected as corruption by the\n> > > + * contrib/amcheck logic, so if you disable one or both of these\n> > > + * assertions, make corresponding changes there.\n> > > + */\n> > > + Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n> > > + (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)));\n> > >\n> > >\n> > > I attach a simple self contained script to reproduce the problem, the last\n> > > UPDATE triggering the Assert.\n> >\n> > Maybe we should contest the idea that this is a sensible thing to Assert\n> > against. 
AFAICS this was originally suggested here:\n> > https://www.postgresql.org/message-id/flat/CAFiTN-syyHc3jZoou51v0SR8z0POoNfktqEO6MaGig4YS8mosA%40mail.gmail.com#ad215d0ee0606b5f67bbc57d011c96b8\n> > and it appears now to have been a bad idea. If I recall correctly,\n> > HEAP_KEYS_UPDATED is supposed to distinguish locks/updates that don't\n> > modify the key columns from those that do. Since SELECT FOR UPDATE\n> > stands for a future update that may modify arbitrary portions of the\n> > tuple (including \"key\" columns), then it produces that bit, just as said\n> > UPDATE or a DELETE; as opposed to SELECT FOR NO KEY UPDATE which stands\n> > for a future UPDATE that will only change columns that aren't part of\n> > any keys.\n>\n> Thanks for the clarification, that makes sense.\n>\n> > So I think that I misspoke earlier in this thread when I said this is a\n> > bug, and that the right fix here is to remove the Assert() and change\n> > amcheck to match.\n>\n> I'm attaching a patch to do so.\n\nThanks Julien for the patch.\n\nPatch looks good to me and it is fixing the problem. I think we can\nregister in CF.\n\n>\n> > Separately, maybe it'd also be good to have a test case based on\n> > Julien's SQL snippet that produces this particular infomask combination\n> > (and other interesting combinations) and passes them through VACUUM etc\n> > to see that everything behaves correctly.\n>\n> I also updated amcheck perl regression tests to generate such a combination,\n> add added an additional pass of verify_heapam() just after the VACUUM.\n>\n> >\n> > You could also argue the HEAP_KEYS_UPDATED is a misnomer and that we'd\n> > do well to change its name, and update README.tuplock.\n>\n> Changing the name may be overkill, but claryfing the hint bit usage in\n> README.tuplock would definitely be useful, especially since the combination\n> isn't always produced. How about adding something like:\n>\n> HEAP_KEYS_UPDATED\n> This bit lives in t_infomask2. 
If set, indicates that the XMAX updated\n> this tuple and changed the key values, or it deleted the tuple.\n> + It can also be set in combination of HEAP_XMAX_LOCK_ONLY.\n> It's set regardless of whether the XMAX is a TransactionId or a MultiXactId.\nMake sense. Please can you update this?\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 20:34:13 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On Thu, Feb 04, 2021 at 08:34:13PM +0530, Mahendra Singh Thalor wrote:\n> On Tue, 2 Feb 2021 at 09:51, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Mon, Feb 01, 2021 at 02:00:48PM -0300, Alvaro Herrera wrote:\n> > > So I think that I misspoke earlier in this thread when I said this is a\n> > > bug, and that the right fix here is to remove the Assert() and change\n> > > amcheck to match.\n> >\n> > I'm attaching a patch to do so.\n> \n> Thanks Julien for the patch.\n> \n> Patch looks good to me and it is fixing the problem. I think we can\n> register in CF.\n\nThanks for looking at it! I just created an entry for the next commitfest.\n\n> >\n> > Changing the name may be overkill, but claryfing the hint bit usage in\n> > README.tuplock would definitely be useful, especially since the combination\n> > isn't always produced. How about adding something like:\n> >\n> > HEAP_KEYS_UPDATED\n> > This bit lives in t_infomask2. If set, indicates that the XMAX updated\n> > this tuple and changed the key values, or it deleted the tuple.\n> > + It can also be set in combination of HEAP_XMAX_LOCK_ONLY.\n> > It's set regardless of whether the XMAX is a TransactionId or a MultiXactId.\n> Make sense. 
Please can you update this?\n\nSure, done in attached v2!", "msg_date": "Fri, 5 Feb 2021 00:06:24 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On 2021-Feb-05, Julien Rouhaud wrote:\n\n> - HEAP_KEYS_UPDATED\n> This bit lives in t_infomask2. If set, indicates that the XMAX updated\n> - this tuple and changed the key values, or it deleted the tuple.\n> - It's set regardless of whether the XMAX is a TransactionId or a MultiXactId.\n> + this tuple and changed the key values, or it deleted the tuple. It can also\n> + be set in combination of HEAP_XMAX_LOCK_ONLY. It's set regardless of whether\n> + the XMAX is a TransactionId or a MultiXactId.\n\nI think we should reword this more completely, to avoid saying one thing\n(that the op is an update or delete) and then contradicting ourselves\n(that it can also be a lock). I propose this:\n\n\tThis bit lives in t_infomask2. 
If set, it indicates that the\n\toperation(s) done by the XMAX compromise the tuple key, such as\n\ta SELECT FOR UPDATE, an UPDATE that modifies the columns of the\n\tkey, or a DELETE.\n\nAlso, I just noticed that the paragraph just above this one says that\nHEAP_XMAX_EXCL_LOCK is used for both SELECT FOR UPDATE and SELECT FOR NO\nKEY UPDATE, and that this bit is what differentiates them.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Oh, great altar of passive entertainment, bestow upon me thy discordant images\nat such speed as to render linear thought impossible\" (Calvin a la TV)\n\n\n", "msg_date": "Thu, 4 Feb 2021 13:22:35 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On Thu, Feb 04, 2021 at 01:22:35PM -0300, Alvaro Herrera wrote:\n> On 2021-Feb-05, Julien Rouhaud wrote:\n> \n> > - HEAP_KEYS_UPDATED\n> > This bit lives in t_infomask2. If set, indicates that the XMAX updated\n> > - this tuple and changed the key values, or it deleted the tuple.\n> > - It's set regardless of whether the XMAX is a TransactionId or a MultiXactId.\n> > + this tuple and changed the key values, or it deleted the tuple. It can also\n> > + be set in combination of HEAP_XMAX_LOCK_ONLY. It's set regardless of whether\n> > + the XMAX is a TransactionId or a MultiXactId.\n> \n> I think we should reword this more completely, to avoid saying one thing\n> (that the op is an update or delete) and then contradicting ourselves\n> (that it can also be a lock). I propose this:\n> \n> \tThis bit lives in t_infomask2. If set, it indicates that the\n> \toperation(s) done by the XMAX compromise the tuple key, such as\n> \ta SELECT FOR UPDATE, an UPDATE that modifies the columns of the\n> \tkey, or a DELETE.\n\nThanks, that's way better, copied in v3. 
I'm still a bit worried about that\ndescription though, as that flag isn't consistently set for the FOR UPDATE\ncase. Well, to be more precise it's maybe consistently set when the hint bits\nare computed, but in some cases the flag is later cleared, so you won't\nreliably find it in the tuple.", "msg_date": "Fri, 5 Feb 2021 00:37:50 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On 2021-Feb-05, Julien Rouhaud wrote:\n\n> Thanks, that's way better, copied in v3. I'm still a bit worried about that\n> description though, as that flag isn't consistently set for the FOR UPDATE\n> case. Well, to be more precise it's maybe consistently set when the hint bits\n> are computed, but in some cases the flag is later cleared, so you won't\n> reliably find it in the tuple.\n\nHmm, that sounds bogus. I think the resetting of the other bits should\nbe undone afterwards, but I'm not sure that we correctly set\nKEYS_UPDATED again after the TOAST business. (What stuff does, from\nmemory, is to make the tuple look as if it is fully updated, which is\nnecessary during the TOAST handling; if the bits are not correctly set\ntransiently, that's okay. But it needs reinstated again later, once the\nTOAST stuff is finished).\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"El n�mero de instalaciones de UNIX se ha elevado a 10,\ny se espera que este n�mero aumente\" (UPM, 1972)\n\n\n", "msg_date": "Thu, 4 Feb 2021 14:36:37 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" }, { "msg_contents": "On 2021-Feb-05, Julien Rouhaud wrote:\n\n> Thanks, that's way better, copied in v3.\n\nThank you, pushed. 
The code changes are only relevant in master, but I\ndid back-patch the README.tuplock to all live branches.\n\n> I'm still a bit worried about that description though, as that flag\n> isn't consistently set for the FOR UPDATE case. Well, to be more\n> precise it's maybe consistently set when the hint bits are computed,\n> but in some cases the flag is later cleared, so you won't reliably\n> find it in the tuple.\n\nIs that right? I think compute_new_xmax_infomask would set the bit to\ninfomask2_old_tuple (its last output argument) if it's needed, and so\nthe bit would find its way to infomask2 eventually. Do you have a case\nwhere it doesn't achieve that?\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"E pur si muove\" (Galileo Galilei)\n\n\n", "msg_date": "Tue, 23 Feb 2021 17:47:33 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Faulty HEAP_XMAX_LOCK_ONLY & HEAP_KEYS_UPDATED hintbit\n combination" } ]
[ { "msg_contents": "Hi:\n\n I recently found a use case like this. SELECT * FROM p, q WHERE p.partkey\n=\n q.colx AND (q.colx = $1 OR q.colx = $2); Then we can't do either planning\ntime\n partition prune or init partition prune. Even though we have run-time\n partition pruning work at last, it is too late in some cases since we have\n to init all the plan nodes in advance. In my case, there are 10+\n partitioned relation in one query and the execution time is short, so the\n init plan a lot of plan nodes cares a lot.\n\nThe attached patches fix this issue. It just get the \"p.partkey = q.colx\"\ncase in root->eq_classes or rel->joinlist (outer join), and then check if\nthere\nis some baserestrictinfo in another relation which can be used for partition\npruning. To make the things easier, both partkey and colx must be Var\nexpression in implementation.\n\n- v1-0001-Make-some-static-functions-as-extern-and-extend-C.patch\n\nJust some existing refactoring and extending ChangeVarNodes to be able\nto change var->attno.\n\n- v1-0002-Build-some-implied-pruning-quals-to-extend-the-us.patch\n\nDo the real job.\n\nThought?\n\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Sun, 24 Jan 2021 18:34:26 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Extend more usecase for planning time partition pruning and init\n partition pruning." }, { "msg_contents": "On Sun, Jan 24, 2021 at 6:34 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi:\n>\n> I recently found a use case like this. SELECT * FROM p, q WHERE\n> p.partkey =\n> q.colx AND (q.colx = $1 OR q.colx = $2); Then we can't do either planning\n> time\n> partition prune or init partition prune. Even though we have run-time\n> partition pruning work at last, it is too late in some cases since we have\n> to init all the plan nodes in advance. 
In my case, there are 10+\n> partitioned relation in one query and the execution time is short, so the\n> init plan a lot of plan nodes cares a lot.\n>\n> The attached patches fix this issue. It just get the \"p.partkey = q.colx\"\n> case in root->eq_classes or rel->joinlist (outer join), and then check if\n> there\n> is some baserestrictinfo in another relation which can be used for\n> partition\n> pruning. To make the things easier, both partkey and colx must be Var\n> expression in implementation.\n>\n> - v1-0001-Make-some-static-functions-as-extern-and-extend-C.patch\n>\n> Just some existing refactoring and extending ChangeVarNodes to be able\n> to change var->attno.\n>\n> - v1-0002-Build-some-implied-pruning-quals-to-extend-the-us.patch\n>\n> Do the real job.\n>\n> Thought?\n>\n>\n>\n> --\n> Best Regards\n> Andy Fan (https://www.aliyun.com/)\n>\n\n\nSome results from this patch.\n\ncreate table p (a int, b int, c character varying(8)) partition by list(c);\ncreate table p1 partition of p for values in ('000001');\ncreate table p2 partition of p for values in ('000002');\ncreate table p3 partition of p for values in ('000003');\ncreate table q (a int, c character varying(8), b int) partition by list(c);\ncreate table q1 partition of q for values in ('000001');\ncreate table q2 partition of q for values in ('000002');\ncreate table q3 partition of q for values in ('000003');\n\nBefore the patch:\npostgres=# explain (costs off) select * from p inner join q on p.c = q.c\nand q.c > '000002';\n QUERY PLAN\n----------------------------------------------------\n Hash Join\n Hash Cond: ((p.c)::text = (q.c)::text)\n -> Append\n -> Seq Scan on p1 p_1\n -> Seq Scan on p2 p_2\n -> Seq Scan on p3 p_3\n -> Hash\n -> Seq Scan on q3 q\n Filter: ((c)::text > '000002'::text)\n(9 rows)\n\nAfter the patch:\n\n QUERY PLAN\n----------------------------------------------------\n Hash Join\n Hash Cond: ((p.c)::text = (q.c)::text)\n -> Seq Scan on p3 p\n -> Hash\n -> Seq Scan on q3 q\n 
Filter: ((c)::text > '000002'::text)\n(6 rows)\n\n\nBefore the patch:\npostgres=# explain (costs off) select * from p inner join q on p.c = q.c\nand (q.c = '000002' or q.c = '000001');\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Hash Join\n Hash Cond: ((p.c)::text = (q.c)::text)\n -> Append\n -> Seq Scan on p1 p_1\n -> Seq Scan on p2 p_2\n -> Seq Scan on p3 p_3\n -> Hash\n -> Append\n -> Seq Scan on q1 q_1\n Filter: (((c)::text = '000002'::text) OR ((c)::text =\n'000001'::text))\n -> Seq Scan on q2 q_2\n Filter: (((c)::text = '000002'::text) OR ((c)::text =\n'000001'::text))\n(12 rows)\n\nAfter the patch:\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Hash Join\n Hash Cond: ((p.c)::text = (q.c)::text)\n -> Append\n -> Seq Scan on p1 p_1\n -> Seq Scan on p2 p_2\n -> Hash\n -> Append\n -> Seq Scan on q1 q_1\n Filter: (((c)::text = '000002'::text) OR ((c)::text =\n'000001'::text))\n -> Seq Scan on q2 q_2\n Filter: (((c)::text = '000002'::text) OR ((c)::text =\n'000001'::text))\n(11 rows)\n\nBefore the patch:\npostgres=# explain (costs off) select * from p left join q on p.c = q.c\nwhere (q.c = '000002' or q.c = '000001');\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Hash Join\n Hash Cond: ((p.c)::text = (q.c)::text)\n -> Append\n -> Seq Scan on p1 p_1\n -> Seq Scan on p2 p_2\n -> Seq Scan on p3 p_3\n -> Hash\n -> Append\n -> Seq Scan on q1 q_1\n Filter: (((c)::text = '000002'::text) OR ((c)::text =\n'000001'::text))\n -> Seq Scan on q2 q_2\n Filter: (((c)::text = '000002'::text) OR ((c)::text =\n'000001'::text))\n(12 rows)\n\nAfter the patch:\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Hash Join\n Hash Cond: ((p.c)::text = (q.c)::text)\n -> Append\n -> Seq Scan on p1 p_1\n -> Seq Scan on p2 p_2\n -> Hash\n -> 
Append\n               ->  Seq Scan on q1 q_1\n                     Filter: (((c)::text = '000002'::text) OR ((c)::text =\n'000001'::text))\n               ->  Seq Scan on q2 q_2\n                     Filter: (((c)::text = '000002'::text) OR ((c)::text =\n'000001'::text))\n(11 rows)\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Mon, 25 Jan 2021 10:21:12 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true,
"msg_subject": "Re: Extend more usecase for planning time partition pruning and init\n partition pruning." }, { "msg_contents": "On Mon, Jan 25, 2021 at 10:21 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Sun, Jan 24, 2021 at 6:34 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>> Hi:\n>>\n>> I recently found a use case like this. SELECT * FROM p, q WHERE\n>> p.partkey =\n>> q.colx AND (q.colx = $1 OR q.colx = $2); Then we can't do either\n>> planning time\n>> partition prune or init partition prune. Even though we have run-time\n>> partition pruning work at last, it is too late in some cases since we\n>> have\n>> to init all the plan nodes in advance. In my case, there are 10+\n>> partitioned relation in one query and the execution time is short, so the\n>> init plan a lot of plan nodes cares a lot.\n>>\n>> The attached patches fix this issue. It just get the \"p.partkey = q.colx\"\n>> case in root->eq_classes or rel->joinlist (outer join), and then check if\n>> there\n>> is some baserestrictinfo in another relation which can be used for\n>> partition\n>> pruning. 
To make the things easier, both partkey and colx must be Var\n>> expression in implementation.\n>>\n>> - v1-0001-Make-some-static-functions-as-extern-and-extend-C.patch\n>>\n>> Just some existing refactoring and extending ChangeVarNodes to be able\n>> to change var->attno.\n>>\n>> - v1-0002-Build-some-implied-pruning-quals-to-extend-the-us.patch\n>>\n>> Do the real job.\n>>\n>> Thought?\n>>\n>>\n>>\n>> --\n>> Best Regards\n>> Andy Fan (https://www.aliyun.com/)\n>>\n>\n>\n> Some results from this patch.\n>\n> create table p (a int, b int, c character varying(8)) partition by list(c);\n> create table p1 partition of p for values in ('000001');\n> create table p2 partition of p for values in ('000002');\n> create table p3 partition of p for values in ('000003');\n> create table q (a int, c character varying(8), b int) partition by list(c);\n> create table q1 partition of q for values in ('000001');\n> create table q2 partition of q for values in ('000002');\n> create table q3 partition of q for values in ('000003');\n>\n> Before the patch:\n> postgres=# explain (costs off) select * from p inner join q on p.c = q.c\n> and q.c > '000002';\n> QUERY PLAN\n> ----------------------------------------------------\n> Hash Join\n> Hash Cond: ((p.c)::text = (q.c)::text)\n> -> Append\n> -> Seq Scan on p1 p_1\n> -> Seq Scan on p2 p_2\n> -> Seq Scan on p3 p_3\n> -> Hash\n> -> Seq Scan on q3 q\n> Filter: ((c)::text > '000002'::text)\n> (9 rows)\n>\n> After the patch:\n>\n> QUERY PLAN\n> ----------------------------------------------------\n> Hash Join\n> Hash Cond: ((p.c)::text = (q.c)::text)\n> -> Seq Scan on p3 p\n> -> Hash\n> -> Seq Scan on q3 q\n> Filter: ((c)::text > '000002'::text)\n> (6 rows)\n>\n>\n> Before the patch:\n> postgres=# explain (costs off) select * from p inner join q on p.c = q.c\n> and (q.c = '000002' or q.c = '000001');\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------\n> Hash Join\n> Hash Cond: 
((p.c)::text = (q.c)::text)\n> -> Append\n> -> Seq Scan on p1 p_1\n> -> Seq Scan on p2 p_2\n> -> Seq Scan on p3 p_3\n> -> Hash\n> -> Append\n> -> Seq Scan on q1 q_1\n> Filter: (((c)::text = '000002'::text) OR ((c)::text =\n> '000001'::text))\n> -> Seq Scan on q2 q_2\n> Filter: (((c)::text = '000002'::text) OR ((c)::text =\n> '000001'::text))\n> (12 rows)\n>\n> After the patch:\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------\n> Hash Join\n> Hash Cond: ((p.c)::text = (q.c)::text)\n> -> Append\n> -> Seq Scan on p1 p_1\n> -> Seq Scan on p2 p_2\n> -> Hash\n> -> Append\n> -> Seq Scan on q1 q_1\n> Filter: (((c)::text = '000002'::text) OR ((c)::text =\n> '000001'::text))\n> -> Seq Scan on q2 q_2\n> Filter: (((c)::text = '000002'::text) OR ((c)::text =\n> '000001'::text))\n> (11 rows)\n>\n> Before the patch:\n> postgres=# explain (costs off) select * from p left join q on p.c = q.c\n> where (q.c = '000002' or q.c = '000001');\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------\n> Hash Join\n> Hash Cond: ((p.c)::text = (q.c)::text)\n> -> Append\n> -> Seq Scan on p1 p_1\n> -> Seq Scan on p2 p_2\n> -> Seq Scan on p3 p_3\n> -> Hash\n> -> Append\n> -> Seq Scan on q1 q_1\n> Filter: (((c)::text = '000002'::text) OR ((c)::text =\n> '000001'::text))\n> -> Seq Scan on q2 q_2\n> Filter: (((c)::text = '000002'::text) OR ((c)::text =\n> '000001'::text))\n> (12 rows)\n>\n> After the patch:\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------\n> Hash Join\n> Hash Cond: ((p.c)::text = (q.c)::text)\n> -> Append\n> -> Seq Scan on p1 p_1\n> -> Seq Scan on p2 p_2\n> -> Hash\n> -> Append\n> -> Seq Scan on q1 q_1\n> Filter: (((c)::text = '000002'::text) OR ((c)::text =\n> '000001'::text))\n> -> Seq Scan on q2 q_2\n> Filter: (((c)::text = '000002'::text) OR ((c)::text =\n> '000001'::text))\n> (11 rows)\n>\n> --\n> 
Best Regards\n> Andy Fan (https://www.aliyun.com/)\n>\n\n\nHere is a performance test regarding this patch.  In the following simple\ncase,\nwe can get 3x faster than before.\n\ncreate table p (a int, b int, c int) partition by list(c);\nselect 'create table p_'||i||' partition of p for values in (' || i || ');'\nfrom generate_series(1, 100)i; \\gexec\ninsert into p select i, i, i from generate_series(1, 100)i;\ncreate table m as select * from p;\nanalyze m;\nanalyze p;\n\ntest sql:  select * from m, p where m.c = p.c and m.c in (3, 10);\n\nWith this patch:  1.1ms\nWithout this patch: 3.4ms\n\nI'm happy with the result and the implementation,  I have add this into\ncommitfest https://commitfest.postgresql.org/32/2975/\n\nThanks.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Mon, 8 Feb 2021 15:43:47 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true,
In my case, there are 10+\n>>> partitioned relation in one query and the execution time is short, so\n>>> the\n>>> init plan a lot of plan nodes cares a lot.\n>>>\n>>> The attached patches fix this issue. It just get the \"p.partkey = q.colx\"\n>>> case in root->eq_classes or rel->joinlist (outer join), and then check\n>>> if there\n>>> is some baserestrictinfo in another relation which can be used for\n>>> partition\n>>> pruning. To make the things easier, both partkey and colx must be Var\n>>> expression in implementation.\n>>>\n>>> - v1-0001-Make-some-static-functions-as-extern-and-extend-C.patch\n>>>\n>>> Just some existing refactoring and extending ChangeVarNodes to be able\n>>> to change var->attno.\n>>>\n>>> - v1-0002-Build-some-implied-pruning-quals-to-extend-the-us.patch\n>>>\n>>> Do the real job.\n>>>\n>>> Thought?\n>>>\n>>>\n>>>\n>>> --\n>>> Best Regards\n>>> Andy Fan (https://www.aliyun.com/)\n>>>\n>>\n>>\n>> Some results from this patch.\n>>\n>> create table p (a int, b int, c character varying(8)) partition by\n>> list(c);\n>> create table p1 partition of p for values in ('000001');\n>> create table p2 partition of p for values in ('000002');\n>> create table p3 partition of p for values in ('000003');\n>> create table q (a int, c character varying(8), b int) partition by\n>> list(c);\n>> create table q1 partition of q for values in ('000001');\n>> create table q2 partition of q for values in ('000002');\n>> create table q3 partition of q for values in ('000003');\n>>\n>> Before the patch:\n>> postgres=# explain (costs off) select * from p inner join q on p.c = q.c\n>> and q.c > '000002';\n>> QUERY PLAN\n>> ----------------------------------------------------\n>> Hash Join\n>> Hash Cond: ((p.c)::text = (q.c)::text)\n>> -> Append\n>> -> Seq Scan on p1 p_1\n>> -> Seq Scan on p2 p_2\n>> -> Seq Scan on p3 p_3\n>> -> Hash\n>> -> Seq Scan on q3 q\n>> Filter: ((c)::text > '000002'::text)\n>> (9 rows)\n>>\n>> After the patch:\n>>\n>> QUERY PLAN\n>> 
----------------------------------------------------\n>> Hash Join\n>> Hash Cond: ((p.c)::text = (q.c)::text)\n>> -> Seq Scan on p3 p\n>> -> Hash\n>> -> Seq Scan on q3 q\n>> Filter: ((c)::text > '000002'::text)\n>> (6 rows)\n>>\n>>\n>> Before the patch:\n>> postgres=# explain (costs off) select * from p inner join q on p.c = q.c\n>> and (q.c = '000002' or q.c = '000001');\n>> QUERY PLAN\n>>\n>> --------------------------------------------------------------------------------------------\n>> Hash Join\n>> Hash Cond: ((p.c)::text = (q.c)::text)\n>> -> Append\n>> -> Seq Scan on p1 p_1\n>> -> Seq Scan on p2 p_2\n>> -> Seq Scan on p3 p_3\n>> -> Hash\n>> -> Append\n>> -> Seq Scan on q1 q_1\n>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>> = '000001'::text))\n>> -> Seq Scan on q2 q_2\n>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>> = '000001'::text))\n>> (12 rows)\n>>\n>> After the patch:\n>> QUERY PLAN\n>>\n>> --------------------------------------------------------------------------------------------\n>> Hash Join\n>> Hash Cond: ((p.c)::text = (q.c)::text)\n>> -> Append\n>> -> Seq Scan on p1 p_1\n>> -> Seq Scan on p2 p_2\n>> -> Hash\n>> -> Append\n>> -> Seq Scan on q1 q_1\n>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>> = '000001'::text))\n>> -> Seq Scan on q2 q_2\n>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>> = '000001'::text))\n>> (11 rows)\n>>\n>> Before the patch:\n>> postgres=# explain (costs off) select * from p left join q on p.c = q.c\n>> where (q.c = '000002' or q.c = '000001');\n>> QUERY PLAN\n>>\n>> --------------------------------------------------------------------------------------------\n>> Hash Join\n>> Hash Cond: ((p.c)::text = (q.c)::text)\n>> -> Append\n>> -> Seq Scan on p1 p_1\n>> -> Seq Scan on p2 p_2\n>> -> Seq Scan on p3 p_3\n>> -> Hash\n>> -> Append\n>> -> Seq Scan on q1 q_1\n>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>> = '000001'::text))\n>> -> Seq Scan on q2 q_2\n>> Filter: (((c)::text = 
'000002'::text) OR ((c)::text\n>> = '000001'::text))\n>> (12 rows)\n>>\n>> After the patch:\n>> QUERY PLAN\n>>\n>> --------------------------------------------------------------------------------------------\n>> Hash Join\n>> Hash Cond: ((p.c)::text = (q.c)::text)\n>> -> Append\n>> -> Seq Scan on p1 p_1\n>> -> Seq Scan on p2 p_2\n>> -> Hash\n>> -> Append\n>> -> Seq Scan on q1 q_1\n>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>> = '000001'::text))\n>> -> Seq Scan on q2 q_2\n>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>> = '000001'::text))\n>> (11 rows)\n>>\n>> --\n>> Best Regards\n>> Andy Fan (https://www.aliyun.com/)\n>>\n>\n>\n> Here is a performance test regarding this patch. In the following simple\n> case,\n> we can get 3x faster than before.\n>\n> create table p (a int, b int, c int) partition by list(c);\n> select 'create table p_'||i||' partition of p for values in (' || i ||\n> ');' from generate_series(1, 100)i; \\gexec\n> insert into p select i, i, i from generate_series(1, 100)i;\n> create table m as select * from p;\n> analyze m;\n> analyze p;\n>\n> test sql: select * from m, p where m.c = p.c and m.c in (3, 10);\n>\n> With this patch: 1.1ms\n> Without this patch: 3.4ms\n>\n> I'm happy with the result and the implementation, I have add this into\n> commitfest https://commitfest.postgresql.org/32/2975/\n>\n> Thanks.\n>\n> --\n> Best Regards\n> Andy Fan (https://www.aliyun.com/)\n>\n\nRebase to the current latest commit 678d0e239b.\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Fri, 19 Feb 2021 18:03:10 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extend more usecase for planning time partition pruning and init\n partition pruning." 
}, { "msg_contents": "On Fri, Feb 19, 2021 at 6:03 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Mon, Feb 8, 2021 at 3:43 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>>\n>>\n>> On Mon, Jan 25, 2021 at 10:21 AM Andy Fan <zhihui.fan1213@gmail.com>\n>> wrote:\n>>\n>>>\n>>>\n>>> On Sun, Jan 24, 2021 at 6:34 PM Andy Fan <zhihui.fan1213@gmail.com>\n>>> wrote:\n>>>\n>>>> Hi:\n>>>>\n>>>> I recently found a use case like this. SELECT * FROM p, q WHERE\n>>>> p.partkey =\n>>>> q.colx AND (q.colx = $1 OR q.colx = $2); Then we can't do either\n>>>> planning time\n>>>> partition prune or init partition prune. Even though we have run-time\n>>>> partition pruning work at last, it is too late in some cases since we\n>>>> have\n>>>> to init all the plan nodes in advance. In my case, there are 10+\n>>>> partitioned relation in one query and the execution time is short, so\n>>>> the\n>>>> init plan a lot of plan nodes cares a lot.\n>>>>\n>>>> The attached patches fix this issue. It just get the \"p.partkey =\n>>>> q.colx\"\n>>>> case in root->eq_classes or rel->joinlist (outer join), and then check\n>>>> if there\n>>>> is some baserestrictinfo in another relation which can be used for\n>>>> partition\n>>>> pruning. 
To make the things easier, both partkey and colx must be Var\n>>>> expression in implementation.\n>>>>\n>>>> - v1-0001-Make-some-static-functions-as-extern-and-extend-C.patch\n>>>>\n>>>> Just some existing refactoring and extending ChangeVarNodes to be able\n>>>> to change var->attno.\n>>>>\n>>>> - v1-0002-Build-some-implied-pruning-quals-to-extend-the-us.patch\n>>>>\n>>>> Do the real job.\n>>>>\n>>>> Thought?\n>>>>\n>>>>\n>>>>\n>>>> --\n>>>> Best Regards\n>>>> Andy Fan (https://www.aliyun.com/)\n>>>>\n>>>\n>>>\n>>> Some results from this patch.\n>>>\n>>> create table p (a int, b int, c character varying(8)) partition by\n>>> list(c);\n>>> create table p1 partition of p for values in ('000001');\n>>> create table p2 partition of p for values in ('000002');\n>>> create table p3 partition of p for values in ('000003');\n>>> create table q (a int, c character varying(8), b int) partition by\n>>> list(c);\n>>> create table q1 partition of q for values in ('000001');\n>>> create table q2 partition of q for values in ('000002');\n>>> create table q3 partition of q for values in ('000003');\n>>>\n>>> Before the patch:\n>>> postgres=# explain (costs off) select * from p inner join q on p.c = q.c\n>>> and q.c > '000002';\n>>> QUERY PLAN\n>>> ----------------------------------------------------\n>>> Hash Join\n>>> Hash Cond: ((p.c)::text = (q.c)::text)\n>>> -> Append\n>>> -> Seq Scan on p1 p_1\n>>> -> Seq Scan on p2 p_2\n>>> -> Seq Scan on p3 p_3\n>>> -> Hash\n>>> -> Seq Scan on q3 q\n>>> Filter: ((c)::text > '000002'::text)\n>>> (9 rows)\n>>>\n>>> After the patch:\n>>>\n>>> QUERY PLAN\n>>> ----------------------------------------------------\n>>> Hash Join\n>>> Hash Cond: ((p.c)::text = (q.c)::text)\n>>> -> Seq Scan on p3 p\n>>> -> Hash\n>>> -> Seq Scan on q3 q\n>>> Filter: ((c)::text > '000002'::text)\n>>> (6 rows)\n>>>\n>>>\n>>> Before the patch:\n>>> postgres=# explain (costs off) select * from p inner join q on p.c = q.c\n>>> and (q.c = '000002' or q.c = '000001');\n>>> 
QUERY PLAN\n>>>\n>>> --------------------------------------------------------------------------------------------\n>>> Hash Join\n>>> Hash Cond: ((p.c)::text = (q.c)::text)\n>>> -> Append\n>>> -> Seq Scan on p1 p_1\n>>> -> Seq Scan on p2 p_2\n>>> -> Seq Scan on p3 p_3\n>>> -> Hash\n>>> -> Append\n>>> -> Seq Scan on q1 q_1\n>>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>>> = '000001'::text))\n>>> -> Seq Scan on q2 q_2\n>>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>>> = '000001'::text))\n>>> (12 rows)\n>>>\n>>> After the patch:\n>>> QUERY PLAN\n>>>\n>>> --------------------------------------------------------------------------------------------\n>>> Hash Join\n>>> Hash Cond: ((p.c)::text = (q.c)::text)\n>>> -> Append\n>>> -> Seq Scan on p1 p_1\n>>> -> Seq Scan on p2 p_2\n>>> -> Hash\n>>> -> Append\n>>> -> Seq Scan on q1 q_1\n>>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>>> = '000001'::text))\n>>> -> Seq Scan on q2 q_2\n>>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>>> = '000001'::text))\n>>> (11 rows)\n>>>\n>>> Before the patch:\n>>> postgres=# explain (costs off) select * from p left join q on p.c = q.c\n>>> where (q.c = '000002' or q.c = '000001');\n>>> QUERY PLAN\n>>>\n>>> --------------------------------------------------------------------------------------------\n>>> Hash Join\n>>> Hash Cond: ((p.c)::text = (q.c)::text)\n>>> -> Append\n>>> -> Seq Scan on p1 p_1\n>>> -> Seq Scan on p2 p_2\n>>> -> Seq Scan on p3 p_3\n>>> -> Hash\n>>> -> Append\n>>> -> Seq Scan on q1 q_1\n>>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>>> = '000001'::text))\n>>> -> Seq Scan on q2 q_2\n>>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>>> = '000001'::text))\n>>> (12 rows)\n>>>\n>>> After the patch:\n>>> QUERY PLAN\n>>>\n>>> --------------------------------------------------------------------------------------------\n>>> Hash Join\n>>> Hash Cond: ((p.c)::text = (q.c)::text)\n>>> -> Append\n>>> -> Seq Scan on p1 p_1\n>>> 
-> Seq Scan on p2 p_2\n>>> -> Hash\n>>> -> Append\n>>> -> Seq Scan on q1 q_1\n>>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>>> = '000001'::text))\n>>> -> Seq Scan on q2 q_2\n>>> Filter: (((c)::text = '000002'::text) OR ((c)::text\n>>> = '000001'::text))\n>>> (11 rows)\n>>>\n>>> --\n>>> Best Regards\n>>> Andy Fan (https://www.aliyun.com/)\n>>>\n>>\n>>\n>> Here is a performance test regarding this patch. In the following simple\n>> case,\n>> we can get 3x faster than before.\n>>\n>> create table p (a int, b int, c int) partition by list(c);\n>> select 'create table p_'||i||' partition of p for values in (' || i ||\n>> ');' from generate_series(1, 100)i; \\gexec\n>> insert into p select i, i, i from generate_series(1, 100)i;\n>> create table m as select * from p;\n>> analyze m;\n>> analyze p;\n>>\n>> test sql: select * from m, p where m.c = p.c and m.c in (3, 10);\n>>\n>> With this patch: 1.1ms\n>> Without this patch: 3.4ms\n>>\n>> I'm happy with the result and the implementation, I have add this into\n>> commitfest https://commitfest.postgresql.org/32/2975/\n>>\n>> Thanks.\n>>\n>> --\n>> Best Regards\n>> Andy Fan (https://www.aliyun.com/)\n>>\n>\n> Rebase to the current latest commit 678d0e239b.\n>\n>\nRebase to the latest commit ea1268f630 .\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Sun, 21 Feb 2021 21:33:38 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extend more usecase for planning time partition pruning and init\n partition pruning." }, { "msg_contents": "Hi Andy,\n\nOn Sun, Jan 24, 2021 at 7:34 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> I recently found a use case like this. SELECT * FROM p, q WHERE p.partkey =\n> q.colx AND (q.colx = $1 OR q.colx = $2); Then we can't do either planning time\n> partition prune or init partition prune. 
Even though we have run-time\n> partition pruning work at last, it is too late in some cases since we have\n> to init all the plan nodes in advance. In my case, there are 10+\n> partitioned relation in one query and the execution time is short, so the\n> init plan a lot of plan nodes cares a lot.\n>\n> The attached patches fix this issue. It just get the \"p.partkey = q.colx\"\n> case in root->eq_classes or rel->joinlist (outer join), and then check if there\n> is some baserestrictinfo in another relation which can be used for partition\n> pruning. To make the things easier, both partkey and colx must be Var\n> expression in implementation.\n>\n> - v1-0001-Make-some-static-functions-as-extern-and-extend-C.patch\n>\n> Just some existing refactoring and extending ChangeVarNodes to be able\n> to change var->attno.\n>\n> - v1-0002-Build-some-implied-pruning-quals-to-extend-the-us.patch\n\nIIUC, your proposal is to transpose the \"q.b in (1, 2)\" in the\nfollowing query as \"p.a in (1, 2)\" and pass it down as a pruning qual\nfor p:\n\nselect * from p, q where p.a = q.b and q.b in (1, 2);\n\nor \"(q.b = 1 or q.b = 2)\" in the following query as \"(p.a = 1 or p.a = 2)\":\n\nselect * from p, q where p.a = q.b and (q.b = 1 or q.b = 2);\n\nWhile that transposition sounds *roughly* valid, I have some questions\nabout the approach:\n\n* If the transposed quals are assumed valid to use for partition\npruning, could they also not be used by, say, the surviving\npartitions' index scan paths? So, perhaps, it doesn't seem right that\npartprune.c builds the clauses on-the-fly for pruning and dump them\nonce done.\n\n* On that last part, I wonder if partprune.c isn't the wrong place to\ndetermine that \"q.b in (1, 2)\" and \"p.a in (1, 2)\" are in fact\nequivalent. That sort of thing is normally done in the phase of\nplanning when distribute_qual_to_rels() runs and any equivalences\nfound stored in PlannerInfo.eq_classes. 
Have you investigated why the\nprocess_ machinery doesn't support working with ScalarArrayOpExpr and\nBoolExpr to begin with?\n\n* Or maybe have you considered generalizing what\nbuild_implied_pruning_quals() does so that other places like\nindxpath.c can use the facility?\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Mar 2021 18:07:13 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extend more usecase for planning time partition pruning and init\n partition pruning." }, { "msg_contents": "Hi Amit:\n Thanks for your review!\n\nOn Thu, Mar 4, 2021 at 5:07 PM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Hi Andy,\n>\n> On Sun, Jan 24, 2021 at 7:34 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > I recently found a use case like this. SELECT * FROM p, q WHERE\n> p.partkey =\n> > q.colx AND (q.colx = $1 OR q.colx = $2); Then we can't do either\n> planning time\n> > partition prune or init partition prune. Even though we have run-time\n> > partition pruning work at last, it is too late in some cases since we\n> have\n> > to init all the plan nodes in advance. In my case, there are 10+\n> > partitioned relation in one query and the execution time is short, so\n> the\n> > init plan a lot of plan nodes cares a lot.\n> >\n> > The attached patches fix this issue. It just get the \"p.partkey = q.colx\"\n> > case in root->eq_classes or rel->joinlist (outer join), and then check\n> if there\n> > is some baserestrictinfo in another relation which can be used for\n> partition\n> > pruning. 
To make the things easier, both partkey and colx must be Var\n> > expression in implementation.\n> >\n> > - v1-0001-Make-some-static-functions-as-extern-and-extend-C.patch\n> >\n> > Just some existing refactoring and extending ChangeVarNodes to be able\n> > to change var->attno.\n> >\n> > - v1-0002-Build-some-implied-pruning-quals-to-extend-the-us.patch\n>\n> IIUC, your proposal is to transpose the \"q.b in (1, 2)\" in the\n> following query as \"p.a in (1, 2)\" and pass it down as a pruning qual\n> for p:\n>\n> select * from p, q where p.a = q.b and q.b in (1, 2);\n>\n> or \"(q.b = 1 or q.b = 2)\" in the following query as \"(p.a = 1 or p.a = 2)\":\n>\n> select * from p, q where p.a = q.b and (q.b = 1 or q.b = 2);\n>\n>\nYes, you understand me correctly.\n\n\n> While that transposition sounds *roughly* valid, I have some questions\n> about the approach:\n>\n> * If the transposed quals are assumed valid to use for partition\n> pruning, could they also not be used by, say, the surviving\n> partitions' index scan paths? So, perhaps, it doesn't seem right that\n> partprune.c builds the clauses on-the-fly for pruning and dump them\n> once done.\n>\n> * On that last part, I wonder if partprune.c isn't the wrong place to\n> determine that \"q.b in (1, 2)\" and \"p.a in (1, 2)\" are in fact\n> equivalent. That sort of thing is normally done in the phase of\n> planning when distribute_qual_to_rels() runs and any equivalences\n> found stored in PlannerInfo.eq_classes. Have you investigated why the\n> process_ machinery doesn't support working with ScalarArrayOpExpr and\n> BoolExpr to begin with?\n>\n> * Or maybe have you considered generalizing what\n> build_implied_pruning_quals() does so that other places like\n> indxpath.c can use the facility?\n>\n>\nActually at the beginning of this work, I do think I should put the implied\nquals to baserestictinfo in the distribute_qual_for_rels stage. That\nprobably\ncan fix all the issues you reported. 
However that probably more complex\nthan what I did with more risks and I have a very limited timeline to handle\nthe real custom issue, so I choose this strategy. But it is the time to\nre-think\nthe baserestrictinfo way now. I will spend some time in this direction,\nThank you\n for this kind of push-up:) I just checked this stuff on Oracle, Oracle\ndoes use\nthis strategy.\n\nSQL> explain plan for select * from t1, t2 where t1.a = t2.a and t1.a > 2;\n\nExplained.\n\nSQL> select * from table(dbms_xplan.display);\n\nPLAN_TABLE_OUTPUT\n--------------------------------------------------------------------------------\nPlan hash value: 1838229974\n\n---------------------------------------------------------------------------\n| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |\n---------------------------------------------------------------------------\n| 0 | SELECT STATEMENT | | 1 | 52 | 4 (0)| 00:00:01 |\n|* 1 | HASH JOIN | | 1 | 52 | 4 (0)| 00:00:01 |\n|* 2 | TABLE ACCESS FULL| T1 | 1 | 26 | 2 (0)| 00:00:01 |\n|* 3 | TABLE ACCESS FULL| T2 | 1 | 26 | 2 (0)| 00:00:01 |\n---------------------------------------------------------------------------\n\n\nPLAN_TABLE_OUTPUT\n--------------------------------------------------------------------------------\nPredicate Information (identified by operation id):\n---------------------------------------------------\n\n 1 - access(\"T1\".\"A\"=\"T2\".\"A\")\n\n* 2 - filter(\"T1\".\"A\">2) 3 - filter(\"T2\".\"A\">2)*\n\n17 rows selected.\n\n\npostgres=# explain (costs off) select * from t1, t2 where t1.a = t2.a and\nt1.a > 2;\n QUERY PLAN\n-------------------------------\n Merge Join\n Merge Cond: (t1.a = t2.a)\n -> Sort\n Sort Key: t1.a\n -> Seq Scan on t1\n Filter: (a > 2)\n -> Sort\n Sort Key: t2.a\n -> Seq Scan on t2\n(9 rows)\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\nHi Amit:  Thanks for your review!On Thu, Mar 4, 2021 at 5:07 PM Amit Langote <amitlangote09@gmail.com> wrote:Hi Andy,\n\nOn Sun, Jan 
24, 2021 at 7:34 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>  I recently found a use case like this.  SELECT * FROM p, q WHERE p.partkey =\n>  q.colx AND (q.colx = $1 OR q.colx = $2); Then we can't do either planning time\n>  partition prune or init partition prune.  Even though we have run-time\n>  partition pruning work at last, it is too late in some cases since we have\n>  to init all the plan nodes in advance.  In my case, there are 10+\n>  partitioned relation in one query and the execution time is short, so the\n>  init plan a lot of plan nodes cares a lot.\n>\n> The attached patches fix this issue. It just get the \"p.partkey = q.colx\"\n> case in root->eq_classes or rel->joinlist (outer join), and then check if there\n> is some baserestrictinfo in another relation which can be used for partition\n> pruning. To make the things easier, both partkey and colx must be Var\n> expression in implementation.\n>\n> - v1-0001-Make-some-static-functions-as-extern-and-extend-C.patch\n>\n> Just some existing refactoring and extending ChangeVarNodes to be able\n> to change var->attno.\n>\n> - v1-0002-Build-some-implied-pruning-quals-to-extend-the-us.patch\n\nIIUC, your proposal is to transpose the \"q.b in (1, 2)\" in the\nfollowing query as \"p.a in (1, 2)\" and pass it down as a pruning qual\nfor p:\n\nselect * from p, q where p.a = q.b and q.b in (1, 2);\n\nor \"(q.b = 1 or q.b = 2)\" in the following query as \"(p.a = 1 or p.a = 2)\":\n\nselect * from p, q where p.a = q.b and (q.b = 1 or q.b = 2);\nYes,  you understand me correctly.  \nWhile that transposition sounds *roughly* valid, I have some questions\nabout the approach:\n\n* If the transposed quals are assumed valid to use for partition\npruning, could they also not be used by, say, the surviving\npartitions' index scan paths?  
So, perhaps, it doesn't seem right that\npartprune.c builds the clauses on-the-fly for pruning and dump them\nonce done.\n\n* On that last part, I wonder if partprune.c isn't the wrong place to\ndetermine that \"q.b in (1, 2)\" and \"p.a in (1, 2)\" are in fact\nequivalent.  That sort of thing is normally done in the phase of\nplanning when distribute_qual_to_rels() runs and any equivalences\nfound stored in PlannerInfo.eq_classes.  Have you investigated why the\nprocess_ machinery doesn't support working with ScalarArrayOpExpr and\nBoolExpr to begin with?\n\n* Or maybe have you considered generalizing what\nbuild_implied_pruning_quals() does so that other places like\nindxpath.c can use the facility?\nActually at the beginning of this work, I do think I should put the impliedquals to baserestictinfo in the distribute_qual_for_rels stage.  That probablycan fix all the issues you reported.  However that probably more complexthan what I did with more risks and I have a very limited timeline to handlethe real custom issue,  so I choose this strategy.   But it is the time to re-thinkthe baserestrictinfo way now.  I will spend some time in this direction, Thank you for this kind  of push-up:)  I just checked this stuff on Oracle,  Oracle does usethis strategy.  
SQL> explain plan for select * from t1, t2 where t1.a = t2.a and t1.a > 2;Explained.SQL> select * from table(dbms_xplan.display);PLAN_TABLE_OUTPUT--------------------------------------------------------------------------------Plan hash value: 1838229974---------------------------------------------------------------------------| Id  | Operation\t   | Name | Rows  | Bytes | Cost (%CPU)| Time\t  |---------------------------------------------------------------------------|   0 | SELECT STATEMENT   |\t  |\t1 |    52 |\t4   (0)| 00:00:01 ||*  1 |  HASH JOIN\t   |\t  |\t1 |    52 |\t4   (0)| 00:00:01 ||*  2 |   TABLE ACCESS FULL| T1   |\t1 |    26 |\t2   (0)| 00:00:01 ||*  3 |   TABLE ACCESS FULL| T2   |\t1 |    26 |\t2   (0)| 00:00:01 |---------------------------------------------------------------------------PLAN_TABLE_OUTPUT--------------------------------------------------------------------------------Predicate Information (identified by operation id):---------------------------------------------------   1 - access(\"T1\".\"A\"=\"T2\".\"A\")   2 - filter(\"T1\".\"A\">2)   3 - filter(\"T2\".\"A\">2)17 rows selected.postgres=# explain (costs off) select * from t1, t2 where t1.a = t2.a and t1.a > 2;          QUERY PLAN------------------------------- Merge Join   Merge Cond: (t1.a = t2.a)   ->  Sort         Sort Key: t1.a         ->  Seq Scan on t1               Filter: (a > 2)   ->  Sort         Sort Key: t2.a         ->  Seq Scan on t2(9 rows)-- Best RegardsAndy Fan (https://www.aliyun.com/)", "msg_date": "Sat, 6 Mar 2021 07:02:20 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extend more usecase for planning time partition pruning and init\n partition pruning." 
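The Oracle plan quoted above shows the transitive trick directly: filter("T1"."A">2) on one side of the equality join yields the derived filter("T2"."A">2) on the other. For an inner equality join this is semantically safe, because the derived filter can only skip rows that would never find a join partner anyway. A quick sanity check of that claim in plain Python (illustrative only, nothing database-specific):

```python
# Sanity check: for an inner join on t1.a = t2.a with t1.a > 2, adding
# the derived filter t2.a > 2 does not change the join result -- it can
# only remove t2 rows that no surviving t1 row could ever match.

def inner_join(t1, t2, f1=lambda v: True, f2=lambda v: True):
    """Nested-loop inner join of two value lists on equality,
    with optional per-side filters f1 and f2."""
    return sorted((a, b) for a in t1 if f1(a)
                         for b in t2 if f2(b) and a == b)

t1 = [1, 2, 3, 4, 5]
t2 = [2, 3, 5, 7]

original = inner_join(t1, t2, f1=lambda a: a > 2)
with_derived = inner_join(t1, t2, f1=lambda a: a > 2,
                                  f2=lambda b: b > 2)
print(original)          # [(3, 3), (5, 5)]
assert original == with_derived
```

The same argument is what makes the transposed qual usable for pruning the partitioned side in the patch; outer joins need more care, since the derived filter may only be applied to the side it was derived for.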
}, { "msg_contents": "On Thu, 4 Mar 2021 at 22:07, Amit Langote <amitlangote09@gmail.com> wrote:\n> * Or maybe have you considered generalizing what\n> build_implied_pruning_quals() does so that other places like\n> indxpath.c can use the facility?\n\nI agree with doing it another way.  There's plenty of other queries\nwhich we could produce a better plan for if EquivalenceClass knew\nabout things like IN conditions and >=, >, < and <= btree ops.\n\nIt seems wrong to code anything in this regard that's specific to\npartition pruning.\n\nPlease see [1] for an idea. IIRC, the implementation was not well\nreceived and there were concerns about having to evaluate additional\nneedless quals. That part I think can be coded around. The trick will\nbe to know when and when not to use additional quals.\n\nThe showstopper for me was not having a more efficient way to find if a\ngiven Expr exists in an EquivalenceClass. This is why I didn't take\nthe idea further at the time. My implementation in that patch\nrequired lots of looping to find if a given Expr had an existing\nEquivalenceMember, so there was a danger of that becoming slow\nfor complex queries.\n\nI'm unsure right now if it would be possible to build standard\nEquivalenceMembers and EquivalenceFilters in the same pass.  I think\nit might require 2 passes since you can only use IN and range type\nquals for Exprs that actually have an EquivalenceMember. So you need to\nwait until you're certain there's some equality OpExpr before adding\nEquivalenceFilters. (Pass 1 can perhaps remember if anything looks\ninteresting and then skip pass 2 if there's not...??)\n\nEquivalenceClass might be slightly faster now since we have\nRelOptInfo.eclass_indexes. However, I've not checked to see if the\nindexes will be ready in time for when you'd be building the\nadditional filters. I'm guessing that they wouldn't be since you'd\nstill be building the EquivalenceClasses at that time.  
Certainly,\nprocess_equivalence() could do much faster lookups of Exprs if there\nwas some global index for all EquivalenceMembers. However,\nequalfuncs.c only gives us true or false if two nodes are equal().\nWe'd need to either have a -1, 0, +1 value or be able to hash nodes\nand put things into a hash table. Else we're stuck trawling through\nlists comparing each item 1-by-1. That's pretty slow. Especially with\ncomplex queries.\n\nBoth Andres and I have previously suggested ways to improve Node\nsearching. My idea is likely easier to implement, as it just changed\nequalfuncs.c to add a function that returns -1, 0, +1 so we could use\na binary search tree to index Nodes. Andres' idea [2] is likely the\nbetter of the two. Please have a look at that. It'll allow us to\neasily build a function to hash nodes and put them in a hash table.\n\nTo get [1], the implementation will need to be pretty smart. There's\nconcern about the idea. See [3]. You'll need to ensure you're not\nadding too much planner overhead and also not slowing down execution\nfor cases by adding additional qual evals that are redundant.\n\nIt's going to take some effort to make everyone happy here.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/CAKJS1f9fPdLKM6%3DSUZAGwucH3otbsPk6k0YT8-A1HgjFapL-zQ%40mail.gmail.com#024ad18e19bb9b6c022fb572edc8c992\n[2] https://www.postgresql.org/message-id/flat/20190828234136.fk2ndqtld3onfrrp%40alap3.anarazel.de\n[3] https://www.postgresql.org/message-id/flat/30810.1449335261@sss.pgh.pa.us#906319f5e212fc3a6a682f16da079f04\n\n\n", "msg_date": "Mon, 8 Mar 2021 14:34:45 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extend more usecase for planning time partition pruning and init\n partition pruning." 
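David's "trawling through lists comparing each item 1-by-1" versus a hashed lookup can be sketched in a few lines. The Python below is illustrative only: PostgreSQL expression nodes are not tuples, and these helpers are invented. It just demonstrates the property equalfuncs.c lacks, that once nodes are hashable (or totally ordered, for a search tree), finding an Expr's EquivalenceMember becomes a single probe instead of a linear equal() scan:

```python
# Linear equal()-style scan vs. a hash index over expression nodes.
# Nodes are modelled here as nested tuples, e.g. ('Var', relid, attno),
# which are hashable and comparable.

def linear_lookup(members, node):
    """Mimics scanning a member list with pairwise equal() checks."""
    for i, member in enumerate(members):
        if member == node:
            return i
    return -1

class MemberIndex:
    """Hash index over members, the kind of structure a node-hashing
    facility would enable."""
    def __init__(self, members=()):
        self.index = {}
        for m in members:
            self.add(m)

    def add(self, node):
        self.index.setdefault(node, len(self.index))

    def lookup(self, node):
        return self.index.get(node, -1)    # one dict probe, O(1) average

members = [('Var', relid, attno)
           for relid in range(100) for attno in range(10)]
idx = MemberIndex(members)

target = ('Var', 99, 9)
print(linear_lookup(members, target), idx.lookup(target))   # 999 999
```

With thousands of members in a complex query, the dict probe stays constant-time while the equal() scan grows with the member count, which is exactly the scaling worry raised above.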
}, { "msg_contents": "Hi David:\n\nOn Mon, Mar 8, 2021 at 9:34 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 4 Mar 2021 at 22:07, Amit Langote <amitlangote09@gmail.com> wrote:\n> > * Or maybe have you considered generalizing what\n> > build_implied_pruning_quals() does so that other places like\n> > indxpath.c can use the facility?\n>\n> I agree with doing it another way. There's plenty of other queries\n> which we could produce a better plan for if EquivalenceClass knew\n> about things like IN conditions and >=, >, < and <= btree ops.\n>\n> It seems wrong to code anything in this regard that's specific to\n> partition pruning.\n>\n> Please see [1] for an idea. IIRC, the implementation was not well\n> received and there were concerns about having to evaluate additional\n> needless quals. That part I think can be coded around. The trick will\n> be to know when and when not to use additional quals.\n>\n> The show stopper for me was having a more efficient way to find if a\n> given Expr exists in an EquivalenceClass. This is why I didn't take\n> the idea further, at the time. My implementation in that patch\n> required lots of looping to find if a given Expr had an existing\n> EquivalenceMember, to which there was a danger of that becoming slow\n> for complex queries.\n>\n> I'm unsure right now if it would be possible to build standard\n> EquivalenceMembers and EquivalenceFilters in the same pass. I think\n> it might require 2 passes since you only can use IN and range type\n> quals for Exprs that actually have a EquivalenceMember. So you need to\n> wait until you're certain there's some equality OpExpr before adding\n> EquivalenceFilters. (Pass 1 can perhaps remember if anything looks\n> interesting and then skip pass 2 if there's not...??)\n>\n> EquivalenceClass might be slightly faster now since we have\n> RelOptInfo.eclass_indexes. 
However, I've not checked to see if the\n> indexes will be ready in time for when you'd be building the\n> additional filters. I'm guessing that they wouldn't be since you'd\n> still be building the EquivalenceClasses at that time.  Certainly,\n> process_equivalence() could do much faster lookups of Exprs if there\n> was some global index for all EquivalenceMembers. However,\n> equalfuncs.c only gives us true or false if two nodes are equal().\n> We'd need to either have a -1, 0, +1 value or be able to hash nodes\n> and put things into a hash table. Else we're stuck trawling through\n> lists comparing each item 1-by-1. That's pretty slow. Especially with\n> complex queries.\n>\n> Both Andres and I have previously suggested ways to improve Node\n> searching.  My idea is likely easier to implement, as it just changed\n> equalfuncs.c to add a function that returns -1, 0, +1 so we could use\n> a binary search tree to index Nodes. Andres' idea [2] is likely the\n> better of the two. Please have a look at that. It'll allow us to\n> easily build a function to hash nodes and put them in a hash table.\n>\n> To get [1], the implementation will need to be pretty smart. There's\n> concern about the idea. See [3]. You'll need to ensure you're not\n> adding too much planner overhead and also not slowing down execution\n> for cases by adding additional qual evals that are redundant.\n>\n> It's going to take some effort to make everyone happy here.\n>\n\nI truly understand what you are saying here, and believe it needs some\nmore hard work. I am not sure I am prepared to do that at the current\nstage, so I will give up this idea for now and come back to it\nwhen time permits. I have marked the commitfest entry as \"Returned\nwith\nFeedback\". 
Thanks for the detailed information!\n\n\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CAKJS1f9fPdLKM6%3DSUZAGwucH3otbsPk6k0YT8-A1HgjFapL-zQ%40mail.gmail.com#024ad18e19bb9b6c022fb572edc8c992\n> [2]\n> https://www.postgresql.org/message-id/flat/20190828234136.fk2ndqtld3onfrrp%40alap3.anarazel.de\n> [3]\n> https://www.postgresql.org/message-id/flat/30810.1449335261@sss.pgh.pa.us#906319f5e212fc3a6a682f16da079f04\n>\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Sat, 27 Mar 2021 14:19:24 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extend more usecase for planning time partition pruning and init\n partition pruning." } ]
[ { "msg_contents": "Hi,\n\nI'm trying to understand how pg_proc.protrftypes works.\n\nThe documentation says \"Data type OIDs for which to apply transforms.\".\nFor this column, there is no reference to any catalog table?\nI would guess it should be \"(references pg_type.oid)\", right?\n\nI tried to generate a value for this column to verify my hypothesis,\nbut I struggle to find an example that produces a not null value here.\n\nI grepped the sources and found the \"CREATE TRANSFORM FOR type_name\" command,\nand found an extension using it named \"bool_plperl\" which I installed.\n\nI assumed this would cause a value, but no.\n\nBoth of bool_plperl's two functions get null pg_proc.protrftypes values.\n\nI've tried running the full regression \"make installcheck\", but protrftypes doesn't seem to be covered:\n\n$ cd postgresql\n$ make installcheck\n...\n=======================\nAll 203 tests passed.\n=======================\n$ psql regression\nregression=# SELECT COUNT(*) FROM pg_proc WHERE protrftypes IS NOT NULL;\ncount\n-------\n 0\n(1 row)\n\nCan someone please show me how to generate a function with a not null pg_proc.protrftypes value?\n\nMany thanks.\n\n/Joel", "msg_date": "Mon, 25 Jan 2021 08:04:34 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "The mysterious pg_proc.protrftypes" }, { "msg_contents": "po 25. 1. 2021 v 8:05 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> Hi,\n>\n> I'm trying to understand how pg_proc.protrftypes works.\n>\n> The documentation says \"Data type OIDs for which to apply transforms.\".\n> For this column, there is no reference to any catalog table?\n> I would guess it should be \"(references pg_type.oid)\", right?\n>\n> I tried to generate a value for this column to verify my hypothesis,\n> but I struggle to find an example that produces a not null value here.\n>\n> I grepped the sources and found the \"CREATE TRANSFORM FOR type_name\"\n> command,\n> and found an extension using it named \"bool_plperl\" which I installed.\n>\n> I assumed this would cause a value, but no.\n>\n> Both of bool_plperl's two functions get null pg_proc.protrftypes values.\n>\n> I've tried running the full regression \"make installcheck\",\n> but protrftypes doesn't seem to be covered:\n>\n> $ cd postgresql\n> $ make installcheck\n> ...\n> =======================\n> All 203 tests passed.\n> =======================\n> $ psql regression\n> regression=# SELECT COUNT(*) FROM pg_proc WHERE protrftypes IS NOT NULL;\n> count\n> -------\n> 0\n> (1 row)\n>\n> Can someone please show me how to generate a function with a not null\n> pg_proc.protrftypes value?\n>\n\nyou should to use TRANSFORM clause in CREATE FUNCTION statement\n\nhttps://www.postgresql.org/docs/current/sql-createfunction.html\n\nCREATE EXTENSION hstore_plperl CASCADE;\n\nCREATE FUNCTION test2() 
RETURNS hstore\nLANGUAGE plperl\nTRANSFORM FOR TYPE hstore\nAS $$\n$val = {a => 1, b => 'boo', c => undef};\nreturn $val;\n$$;\n\nRegards\n\nPavel\n\n\n> Many thanks.\n>\n> /Joel\n>\n>\n>\n>\n>\n>\n>\n", "msg_date": "Mon, 25 Jan 2021 08:14:37 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: The mysterious pg_proc.protrftypes" }, { "msg_contents": "On Mon, Jan 25, 2021, at 08:14, Pavel Stehule wrote:\n>you should to use TRANSFORM clause in CREATE 
FUNCTION statement\n\nThanks, it worked, and like expected it references the pg_type.oid of the\ntransform.\n\nAttached patch adds \"(references pg_type.oid)\" to the documentation\nfor pg_proc.protrftypes.\n\nSuggested commit message: \"Document the fact that pg_proc.protrftypes\nreferences pg_type.oid\"\n\n/Joel", "msg_date": "Mon, 25 Jan 2021 08:46:24 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: The mysterious pg_proc.protrftypes" }, { "msg_contents": "po 25. 1. 2021 v 8:47 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Mon, Jan 25, 2021, at 08:14, Pavel Stehule wrote:\n> >you should to use TRANSFORM clause in CREATE FUNCTION statement\n>\n> Thanks, it worked, and like expected it references the pg_type.oid of the\n> transform.\n>\n> Attached patch adds \"(references pg_type.oid)\" to the documentation\n> for pg_proc.protrftypes.\n>\n> Suggested commit message: \"Document the fact that pg_proc.protrftypes\n> references pg_type.oid\"\n>\n\n+1\n\nPavel\n\n\n> /Joel\n>\n", "msg_date": "Mon, 25 Jan 2021 09:01:33 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: The mysterious pg_proc.protrftypes" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> Attached patch adds \"(references pg_type.oid)\" to the documentation for pg_proc.protrftypes.\n\nAgreed, pushed. I also stumbled over a backend core dump while\ntesting it :-(. 
So this whole area seems a bit spongy ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Jan 2021 13:05:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The mysterious pg_proc.protrftypes" } ]
[ { "msg_contents": "Hi,\n\nThis patch fixes $SUBJECT.\n\n/Joel", "msg_date": "Mon, 25 Jan 2021 08:57:48 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "[PATCH] Document the fact that pg_proc.protrftypes references pg_type.oid" } ]
[ { "msg_contents": "Hi all,\n\nSHA-1 is now an option available for cryptohashes, and like the\nexisting set of functions of SHA-2, I don't really see a reason why we\nshould not have a SQL function for SHA1. Attached is a patch doing\nthat.\n\nThe same code pattern was repeated 4 times on HEAD for the SHA-2\nfunctions for the bytea -> bytea hashing, so I have refactored the\nwhole thing while integrating the new function, shaving some code from\ncryptohashfuncs.c.\n\nThoughts?\n--\nMichael", "msg_date": "Mon, 25 Jan 2021 22:12:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Add SQL function for SHA1" }, { "msg_contents": "On Mon, Jan 25, 2021 at 10:12:28PM +0900, Michael Paquier wrote:\n> SHA-1 is now an option available for cryptohashes, and like the\n> existing set of functions of SHA-2, I don't really see a reason why we\n> should not have a SQL function for SHA1.\n\nNIST deprecated SHA1 over ten years ago. It's too late to be adding this.\n\n\n", "msg_date": "Mon, 25 Jan 2021 19:28:14 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Add SQL function for SHA1" }, { "msg_contents": "+1 to adding a SHA1 SQL function. Even if it's deprecated, there's plenty\nof historical usage that I can see it being useful.\n\nEither way, the rest of the refactor can be improved a bit to perform a\nsingle palloc() and remove the memcpy(). Attached is a diff for\ncryptohashfuncs.c that does that by writing the digest final directly to\nthe result. It also removes the digest length arg and determines it in the\nswitch block. There's only one correct digest length for each type so\nthere's no reason to give callers the option to give the wrong one.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. 
| https://www.jackdb.com/", "msg_date": "Mon, 25 Jan 2021 22:42:25 -0500", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": false, "msg_subject": "Re: Add SQL function for SHA1" }, { "msg_contents": "On Mon, Jan 25, 2021 at 10:42:25PM -0500, Sehrope Sarkuni wrote:\n> +1 to adding a SHA1 SQL function. Even if it's deprecated, there's plenty\n> of historical usage that I can see it being useful.\n\nLet's wait for more opinions to see if we agree that this addition is\nhelpful or not. Even if this is not added, I think that there is\nstill value in refactoring the code anyway for the SHA-2 functions.\n\n> Either way, the rest of the refactor can be improved a bit to perform a\n> single palloc() and remove the memcpy(). Attached is a diff for\n> cryptohashfuncs.c that does that by writing the digest final directly to\n> the result. It also removes the digest length arg and determines it in the\n> switch block. There's only one correct digest length for each type so\n> there's no reason to give callers the option to give the wrong one.\n\nYeah, what you have here is better.\n\n+ default:\n+ elog(ERROR, \"unsupported digest type %d\", type);\nNot using a default clause is the purpose here, as it would generate a\ncompilation warning if a value in the enum is forgotten. Hence, if a\nnew option is added to pg_cryptohash_type in the future, people won't\nmiss that they could add a SQL function for the new option. If we\ndecide that MD5 and SHA1 have no need to use this code path, I'd\nrather just use elog(ERROR) instead.\n--\nMichael", "msg_date": "Tue, 26 Jan 2021 13:06:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Add SQL function for SHA1" }, { "msg_contents": "On Tue, Jan 26, 2021 at 01:06:29PM +0900, Michael Paquier wrote:\n> On Mon, Jan 25, 2021 at 10:42:25PM -0500, Sehrope Sarkuni wrote:\n> > +1 to adding a SHA1 SQL function. 
Even if it's deprecated, there's plenty\n> > of historical usage that I can see it being useful.\n> \n> Let's wait for more opinions to see if we agree that this addition is\n> helpful or not. Even if this is not added, I think that there is\n> still value in refactoring the code anyway for the SHA-2 functions.\n> \n\n+1 I know that it has been deprecated, but it can be very useful when\nworking with data from pre-deprecation. :) It is annoying to have to\nresort to plperl or plpython because it is not available. The lack or\northogonality is painful.\n\nRegards,\nKen\n\n\n", "msg_date": "Mon, 25 Jan 2021 22:23:30 -0600", "msg_from": "Kenneth Marshall <ktm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Add SQL function for SHA1" }, { "msg_contents": "On Mon, Jan 25, 2021 at 10:23:30PM -0600, Kenneth Marshall wrote:\n> On Tue, Jan 26, 2021 at 01:06:29PM +0900, Michael Paquier wrote:\n> > On Mon, Jan 25, 2021 at 10:42:25PM -0500, Sehrope Sarkuni wrote:\n> > > +1 to adding a SHA1 SQL function. Even if it's deprecated, there's plenty\n> > > of historical usage that I can see it being useful.\n> > \n> > Let's wait for more opinions to see if we agree that this addition is\n> > helpful or not. Even if this is not added, I think that there is\n> > still value in refactoring the code anyway for the SHA-2 functions.\n> > \n> \n> +1 I know that it has been deprecated, but it can be very useful when\n> working with data from pre-deprecation. :) It is annoying to have to\n> resort to plperl or plpython because it is not available. 
The lack or\n> orthogonality is painful.\n\nYes, I think having SHA1 makes sense --- there are probably still valid\nuses for it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 25 Jan 2021 23:27:28 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add SQL function for SHA1" }, { "msg_contents": "On Mon, Jan 25, 2021 at 10:12:28PM +0900, Michael Paquier wrote:\n> Hi all,\n> \n> SHA-1 is now an option available for cryptohashes, and like the\n> existing set of functions of SHA-2, I don't really see a reason why we\n> should not have a SQL function for SHA1. Attached is a patch doing\n> that.\n\nThanks for doing this!\n\nWhile there are applications SHA1 is no longer good for, there are\nplenty where it's still in play. One I use frequently is git. While\nthere are plans for creating an upgrade path to more cryptographically\nsecure hashes, it will take some years before repositories have\nconverted over.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Tue, 26 Jan 2021 07:23:23 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Add SQL function for SHA1" }, { "msg_contents": "On Mon, Jan 25, 2021 at 11:27:28PM -0500, Bruce Momjian wrote:\n> On Mon, Jan 25, 2021 at 10:23:30PM -0600, Kenneth Marshall wrote:\n>> +1 I know that it has been deprecated, but it can be very useful when\n>> working with data from pre-deprecation. :) It is annoying to have to\n>> resort to plperl or plpython because it is not available. 
The lack or\n>> orthogonality is painful.\n\nplperl and plpython can be annoying to require if you have strong\nsecurity requirements as these are untrusted languages, but I don't\ncompletely agree with this argument because pgcrypto gives the option\nto use SHA1 with digest(), and this one is fine to have even in\nenvironments under STIG or equally-constrained environments.\n\n> Yes, I think having SHA1 makes sense --- there are probably still valid\n> uses for it.\n\nConsistency with the existing in-core SQL functions for cryptohashes\nand the possibility to not need pgcrypto are my only arguments at\nhand.\n\n;)\n--\nMichael", "msg_date": "Tue, 26 Jan 2021 16:22:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Add SQL function for SHA1" }, { "msg_contents": "> On 26 Jan 2021, at 04:28, Noah Misch <noah@leadboat.com> wrote:\n> \n> On Mon, Jan 25, 2021 at 10:12:28PM +0900, Michael Paquier wrote:\n>> SHA-1 is now an option available for cryptohashes, and like the\n>> existing set of functions of SHA-2, I don't really see a reason why we\n>> should not have a SQL function for SHA1.\n> \n> NIST deprecated SHA1 over ten years ago. It's too late to be adding this.\n\nAgreed, and pgcrypto already allows for using sha1.\n\nIt seems like any legitimate need for sha1 could be better served by an\nextension rather than supplying it in-core.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n", "msg_date": "Tue, 26 Jan 2021 10:38:43 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add SQL function for SHA1" }, { "msg_contents": "On Tue, Jan 26, 2021 at 10:38:43AM +0100, Daniel Gustafsson wrote:\n> Agreed, and pgcrypto already allows for using sha1.\n> \n> It seems like any legitimate need for sha1 could be better served by an\n> extension rather than supplying it in-core.\n\nBoth of you telling the same thing is enough for me to discard this\nnew stuff. 
I'd like to refactor the code anyway as that's a nice\ncleanup, and this would have the advantage to make people look at\ncryptohashfuncs.c if introducing a new type. After sleeping about it,\nI think that I would just make MD5 and SHA1 issue an elog(ERROR) if\nthe internal routine is taken in those cases, like in the attached.\n\nIf there are any comments or objections to the refactoring piece,\nplease let me know.\n--\nMichael", "msg_date": "Wed, 27 Jan 2021 10:53:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Add SQL function for SHA1" }, { "msg_contents": "On Tue, Jan 26, 2021 at 8:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jan 26, 2021 at 10:38:43AM +0100, Daniel Gustafsson wrote:\n> > Agreed, and pgcrypto already allows for using sha1.\n> >\n> > It seems like any legitimate need for sha1 could be better served by an\n> > extension rather than supplying it in-core.\n>\n> Both of you telling the same thing is enough for me to discard this\n> new stuff. I'd like to refactor the code anyway as that's a nice\n> cleanup, and this would have the advantage to make people look at\n> cryptohashfuncs.c if introducing a new type. After sleeping about it,\n> I think that I would just make MD5 and SHA1 issue an elog(ERROR) if\n> the internal routine is taken in those cases, like in the attached.\n>\n\nThe refactor patch looks good. It builds and passes make check.\n\nThanks for the enum explanation too.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/\n\nOn Tue, Jan 26, 2021 at 8:53 PM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Jan 26, 2021 at 10:38:43AM +0100, Daniel Gustafsson wrote:\n> Agreed, and pgcrypto already allows for using sha1.\n> \n> It seems like any legitimate need for sha1 could be better served by an\n> extension rather than supplying it in-core.\n\nBoth of you telling the same thing is enough for me to discard this\nnew stuff.  
I'd like to refactor the code anyway as that's a nice\ncleanup, and this would have the advantage to make people look at\ncryptohashfuncs.c if introducing a new type.  After sleeping about it,\nI think that I would just make MD5 and SHA1 issue an elog(ERROR) if\nthe internal routine is taken in those cases, like in the attached.The refactor patch looks good. It builds and passes make check.Thanks for the enum explanation too.Regards,-- Sehrope SarkuniFounder & CEO | JackDB, Inc. | https://www.jackdb.com/", "msg_date": "Tue, 26 Jan 2021 21:53:52 -0500", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": false, "msg_subject": "Re: Add SQL function for SHA1" }, { "msg_contents": "On Tue, Jan 26, 2021 at 09:53:52PM -0500, Sehrope Sarkuni wrote:\n> The refactor patch looks good. It builds and passes make check.\n\nThanks for double-checking! The refactoring has been just done as of\nf854c69.\n--\nMichael", "msg_date": "Thu, 28 Jan 2021 16:29:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Add SQL function for SHA1" } ]
[ { "msg_contents": "The following is a request for discussion and comments, not a refined\nproposal accompanied by a working patch.\n\nAs recently publicly announced Amazon Web Services is working on Babelfish,\na set of extensions that will allow PostgreSQL to be compatible with other\ndatabase systems. One part of this will be an extension that allows\nPostgreSQL to listen on a secondary port and process a different wire\nprotocol. The first extension we are creating in this direction is handling\nof the Tabular Data Stream (TDS), used by Sybase and Microsoft SQL-Server\ndatabases. It is more efficient to build an extension, that can handle the\nTDS protocol inside the backend, than creating a proxy process that\ntranslates from TDS to libpq protocol and back.\n\nCreating the necessary infrastructure in the postmaster and backend will\nopen up more possibilities, that are not tied to our compatibility efforts.\nPossible use cases for wire protocol extensibility include the development\nof a completely new, not backwards compatible PostgreSQL protocol or\nextending the existing wire protocol with things like 3rd party connection\npool specific features (like transfer of file descriptors between pool and\nworking backend for example).\n\nOur current plan is to create a new set of API calls and hooks that allow\nto register additional wire protocols. The existing backend libpq\nimplementation will be modified to register itself using the new API. This\nwill serve as a proof of concept as well as ensure that the API definition\nis not slanted towards a specific protocol. It is also similar to the way\ntable access methods and compression methods are added.\n\nA wire protocol extension will be a standard PostgreSQL dynamic loadable\nextension module. The wire protocol extensions to load will be listed in\nthe shared_preload_libraries GUC. 
The extension's Init function will\nregister a hook function to be called where the postmaster is currently\ncreating the libpq server sockets. This hook callback will then create the\nserver sockets and register them for monitoring via select(2) in the\npostmaster main loop, using a new API function. Part of the registration\ninformation are callback functions to invoke for accepting and\nauthenticating incoming connections, error reporting as well as a function\nthat will implement the TCOP loop for the protocol. Ongoing work on the TDS\nprotocol has shown us that different protocols make it desirable to have\nseparate implementations of the TCOP loop. The TCOP function will return\nonly after the connection has been terminated. Fortunately half the\ninterface already exists since the sending of result sets is implemented\nvia callback functions that are registered as the dest receiver, which\nworks pretty well in our current code.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrincipal Database Engineer\nAmazon Web Services", "msg_date": "Mon, 25 Jan 2021 10:07:02 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Mon, Jan 25, 2021 at 10:07 AM Jan Wieck <jan@wi3ck.info> wrote:\n\n> The following is a request for discussion and comments, not a refined\n> proposal accompanied by a working patch.\n>\n\nAfter implementing this three different ways inside the backend over the\nyears, I landed on almost this identical approach for handling the MySQL,\nTDS, MongoDB, and Oracle protocols for NEXTGRES.\n\nInitially, each was implemented as an background worker extension which had\nto handle its own networking, passing the fd off to new protocol-specific\nconnections, etc. This worked, but duplicate a good amount of logic. It\nwould be great to have a standard, loadable, way to add support for a new\nprotocol.\n\n-- \nJonah H. Harris", "msg_date": "Mon, 25 Jan 2021 10:18:45 -0500", "msg_from": "\"Jonah H. 
Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "Hi Jonah,\n\nOn Mon, Jan 25, 2021 at 10:18 AM Jonah H. Harris <jonah.harris@gmail.com>\nwrote:\n\n> On Mon, Jan 25, 2021 at 10:07 AM Jan Wieck <jan@wi3ck.info> wrote:\n>\n>> The following is a request for discussion and comments, not a refined\n>> proposal accompanied by a working patch.\n>>\n>\n> After implementing this three different ways inside the backend over the\n> years, I landed on almost this identical approach for handling the MySQL,\n> TDS, MongoDB, and Oracle protocols for NEXTGRES.\n>\n\nCould any of that be open sourced? It would be an excellent addition to add\none of those as example code.\n\n\nRegards, Jan\n\n\n\n>\n> Initially, each was implemented as an background worker extension which\n> had to handle its own networking, passing the fd off to new\n> protocol-specific connections, etc. This worked, but duplicate a good\n> amount of logic. It would be great to have a standard, loadable, way to add\n> support for a new protocol.\n>\n> --\n> Jonah H. Harris\n>\n>\n\n-- \nJan Wieck\n\nHi Jonah,On Mon, Jan 25, 2021 at 10:18 AM Jonah H. Harris <jonah.harris@gmail.com> wrote:On Mon, Jan 25, 2021 at 10:07 AM Jan Wieck <jan@wi3ck.info> wrote:The following is a request for discussion and comments, not a refined proposal accompanied by a working patch.After implementing this three different ways inside the backend over the years, I landed on almost this identical approach for handling the MySQL, TDS, MongoDB, and Oracle protocols for NEXTGRES.Could any of that be open sourced? It would be an excellent addition to add one of those as example code.Regards, Jan Initially, each was implemented as an background worker extension which had to handle its own networking, passing the fd off to new protocol-specific connections, etc. This worked, but duplicate a good amount of logic. 
It would be great to have a standard, loadable, way to add support for a new protocol.-- Jonah H. Harris\n-- Jan Wieck", "msg_date": "Mon, 25 Jan 2021 12:17:58 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Mon, Jan 25, 2021 at 10:07 AM Jan Wieck <jan@wi3ck.info> wrote:\n> Our current plan is to create a new set of API calls and hooks that allow to register additional wire protocols. The existing backend libpq implementation will be modified to register itself using the new API. This will serve as a proof of concept as well as ensure that the API definition is not slanted towards a specific protocol. It is also similar to the way table access methods and compression methods are added.\n\nIf we're going to end up with an open source implementation of\nsomething useful in contrib or whatever, then I think this is fine.\nBut, if not, then we're just making it easier for Amazon to do\nproprietary stuff without getting any benefit for the open-source\nproject. In fact, in that case PostgreSQL would ensure have to somehow\nensure that the hooks don't get broken without having any code that\nactually uses them, so not only would the project get no benefit, but\nit would actually incur a small tax. I wouldn't say that's an\nabsolutely show-stopper, but it definitely isn't my first choice.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Feb 2021 11:43:22 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Wed, Feb 10, 2021 at 1:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jan 25, 2021 at 10:07 AM Jan Wieck <jan@wi3ck.info> wrote:\n> > Our current plan is to create a new set of API calls and hooks that\nallow to register additional wire protocols. 
The existing backend libpq\nimplementation will be modified to register itself using the new API. This\nwill serve as a proof of concept as well as ensure that the API definition\nis not slanted towards a specific protocol. It is also similar to the way\ntable access methods and compression methods are added.\n>\n> If we're going to end up with an open source implementation of\n> something useful in contrib or whatever, then I think this is fine.\n> But, if not, then we're just making it easier for Amazon to do\n> proprietary stuff without getting any benefit for the open-source\n> project. In fact, in that case PostgreSQL would ensure have to somehow\n> ensure that the hooks don't get broken without having any code that\n> actually uses them, so not only would the project get no benefit, but\n> it would actually incur a small tax. I wouldn't say that's an\n> absolutely show-stopper, but it definitely isn't my first choice.\n\nAs far as I understood, Jan's proposal is to add enough hooks on PostgreSQL to\nenable us to extend the wire protocol and add a contrib module as an\nexample (maybe TDS, HTTP or just adding new capabilities to current\nimplementation).\n\nRegards,\n\n-- \n Fabrízio de Royes Mello\n PostgreSQL Developer at OnGres Inc. - https://ongres.com", "msg_date": "Wed, 10 Feb 2021 14:22:42 -0300", "msg_from": "Fabrízio de Royes Mello <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Wed, Feb 10, 2021 at 11:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jan 25, 2021 at 10:07 AM Jan Wieck <jan@wi3ck.info> wrote:\n> > Our current plan is to create a new set of API calls and hooks that\n> allow to register additional wire protocols. The existing backend libpq\n> implementation will be modified to register itself using the new API. This\n> will serve as a proof of concept as well as ensure that the API definition\n> is not slanted towards a specific protocol. 
It is also similar to the way\n> table access methods and compression methods are added.\n>\n> If we're going to end up with an open source implementation of\n> something useful in contrib or whatever, then I think this is fine.\n> But, if not, then we're just making it easier for Amazon to do\n> proprietary stuff without getting any benefit for the open-source\n> project. In fact, in that case PostgreSQL would ensure have to somehow\n> ensure that the hooks don't get broken without having any code that\n> actually uses them, so not only would the project get no benefit, but\n> it would actually incur a small tax. I wouldn't say that's an\n> absolutely show-stopper, but it definitely isn't my first choice.\n>\n\nAgreed on adding substantial hooks if they're not likely to be used. While\nI haven't yet seen AWS' implementation or concrete proposal, given the\npeople involved, I assume it's fairly similar to how I implemented it.\nAssuming that's correct and it doesn't require substantial redevelopment,\nI'd certainly open-source my MySQL-compatible protocol and parser\nimplementation. From my perspective, it would be awesome if these could be\ndone as extensions.\n\nWhile I'm not planning to open source it as of yet, for my\nOracle-compatible stuff, I don't think I'd be able to do anything other\nthan the protocol as an extension given the core-related changes similar to\nwhat EDB has to do. I don't think there's any easy way to get around that.\nBut, for the protocol and any type of simple translation to Postgres'\ndialect, I think that could easily be hook-based.\n\n-- \nJonah H. Harris", "msg_date": "Wed, 10 Feb 2021 12:35:29 -0500", "msg_from": "\"Jonah H. 
Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> If we're going to end up with an open source implementation of\n> something useful in contrib or whatever, then I think this is fine.\n> But, if not, then we're just making it easier for Amazon to do\n> proprietary stuff without getting any benefit for the open-source\n> project. In fact, in that case PostgreSQL would ensure have to somehow\n> ensure that the hooks don't get broken without having any code that\n> actually uses them, so not only would the project get no benefit, but\n> it would actually incur a small tax. I wouldn't say that's an\n> absolutely show-stopper, but it definitely isn't my first choice.\n\nAs others noted, a test module could be built to add some coverage here.\n\nWhat I'm actually more concerned about, in this whole line of development,\nis the follow-on requests that will surely occur to kluge up Postgres\nto make its behavior more like $whatever. As in \"well, now that we\ncan serve MySQL clients protocol-wise, can't we pretty please have a\nmode that makes the parser act more like MySQL\". If we start having\nmodes for MySQL identifier quoting, Oracle outer join syntax, yadda\nyadda, it's going to be way more of a maintenance nightmare than some\nhook functions. 
So if we accept any patch along this line, I want to\ndrive a hard stake in the ground that the answer to that sort of thing\nwill be NO.\n\nAssuming we're going to keep to that, though, it seems like people\ndoing this sort of thing will inevitably end up with a fork anyway.\nSo maybe we should just not bother with the first step either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Feb 2021 13:10:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Wed, Feb 10, 2021 at 1:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> What I'm actually more concerned about, in this whole line of development,\n> is the follow-on requests that will surely occur to kluge up Postgres\n> to make its behavior more like $whatever. As in \"well, now that we\n> can serve MySQL clients protocol-wise, can't we pretty please have a\n> mode that makes the parser act more like MySQL\". If we start having\n> modes for MySQL identifier quoting, Oracle outer join syntax, yadda\n> yadda, it's going to be way more of a maintenance nightmare than some\n> hook functions. So if we accept any patch along this line, I want to\n> drive a hard stake in the ground that the answer to that sort of thing\n> will be NO.\n>\n\nActually, a substantial amount can be done with hooks. For Oracle, which is\nsubstantially harder than MySQL, I have a completely separate parser that\ngenerates a PG-compatible parse tree packaged up as an extension. To handle\nautonomous transactions, database links, hierarchical query conversion,\nhints, and some execution-related items requires core changes. But, the\nprotocol and parsing can definitely be done with hooks. And, as was\nmentioned previously, this isn't tied directly to emulating another\ndatabase - it would enable us to support an HTTP-ish interface directly in\nthe server as an extension as well. 
A lot of this can be done with\nbackground worker extensions now, which is how my stuff was primarily\narchitected, but it's hacky when it comes to areas where the items Jan\ndiscussed could clean things up and make them more pluggable.\n\nAssuming we're going to keep to that, though, it seems like people\n> doing this sort of thing will inevitably end up with a fork anyway.\n> So maybe we should just not bother with the first step either.\n>\n\nPerhaps I'm misunderstanding you, but I wouldn't throw this entire idea out\n(which enables a substantial addition of extensible functionality with a\nlimited set of touchpoints) on the premise of future objections.\n\n-- \nJonah H. Harris", "msg_date": "Wed, 10 Feb 2021 13:42:26 -0500", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "\"Jonah H. Harris\" <jonah.harris@gmail.com> writes:\n> On Wed, Feb 10, 2021 at 1:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... If we start having\n>> modes for MySQL identifier quoting, Oracle outer join syntax, yadda\n>> yadda, it's going to be way more of a maintenance nightmare than some\n>> hook functions. So if we accept any patch along this line, I want to\n>> drive a hard stake in the ground that the answer to that sort of thing\n>> will be NO.\n\n> Actually, a substantial amount can be done with hooks. For Oracle, which is\n> substantially harder than MySQL, I have a completely separate parser that\n> generates a PG-compatible parse tree packaged up as an extension. To handle\n> autonomous transactions, database links, hierarchical query conversion,\n> hints, and some execution-related items requires core changes.\n\nThat is a spot-on definition of where I do NOT want to end up. 
Hooks\neverywhere and enormous extensions that break anytime we change anything\nin the core. It's not really clear that anybody is going to find that\nmore maintainable than a straight fork, except to the extent that it\nenables the erstwhile forkers to shove some of their work onto the PG\ncommunity.\n\nMy feeling about this is if you want to use Oracle, go use Oracle.\nDon't ask PG to take on a ton of maintenance issues so you can have\na frankenOracle.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Feb 2021 14:04:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Wed, Feb 10, 2021 at 2:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> That is a spot-on definition of where I do NOT want to end up. Hooks\n> everywhere and enormous extensions that break anytime we change anything\n> in the core. It's not really clear that anybody is going to find that\n> more maintainable than a straight fork, except to the extent that it\n> enables the erstwhile forkers to shove some of their work onto the PG\n> community.\n>\n\nGiven the work over the last few major releases to make several other\naspects of Postgres pluggable, how is implementing a pluggable protocol API\nany different?\n\nTo me, this sounds more like a philosophical disagreement with how people\ncould potentially use Postgres than a technical one. My point is only that,\nusing current PG functionality, I could equally write a pluggable storage\ninterface for my Oracle and InnoDB data file readers/writers, which would\nsimilarly allow for the creation of a Postgres franken-Oracle by extension\nonly.\n\nI don't think anyone is asking for hooks for all the things I mentioned - a\npluggable transaction manager, for example, doesn't make much sense. But,\nwhen it comes to having actually done this vs. 
posited about its\nusefulness, I'd say it has some merit and doesn't really introduce that\nmuch complexity or maintenance overhead to core - whether the extensions\nstill work properly is up to the extension authors... isn't that the whole\npoint of extensions?\n\n-- \nJonah H. Harris", "msg_date": "Wed, 10 Feb 2021 14:33:22 -0500", "msg_from": "\"Jonah H. 
Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Wed, Feb 10, 2021 at 11:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jan 25, 2021 at 10:07 AM Jan Wieck <jan@wi3ck.info> wrote:\n> > Our current plan is to create a new set of API calls and hooks that\n> allow to register additional wire protocols. The existing backend libpq\n> implementation will be modified to register itself using the new API. This\n> will serve as a proof of concept as well as ensure that the API definition\n> is not slanted towards a specific protocol. It is also similar to the way\n> table access methods and compression methods are added.\n>\n> If we're going to end up with an open source implementation of\n> something useful in contrib or whatever, then I think this is fine.\n> But, if not, then we're just making it easier for Amazon to do\n> proprietary stuff without getting any benefit for the open-source\n> project. In fact, in that case PostgreSQL would ensure have to somehow\n> ensure that the hooks don't get broken without having any code that\n> actually uses them, so not only would the project get no benefit, but\n> it would actually incur a small tax. I wouldn't say that's an\n> absolutely show-stopper, but it definitely isn't my first choice.\n>\n\nAt this very moment there are several parts to this. One is the hooks to\nmake wire protocols into loadable modules, which is what this effort is\nabout. Another is the TDS protocol as it is being implemented for Babelfish\nand third is the Babelfish extension itself. Both will require additional\nhooks and APIs I am not going to address here. I consider them not material\nto my effort.\n\nAs for making the wire protocol itself expandable I really see a lot of\npotential outside of what Amazon wants here. And I would not be advertising\nit if it would be for Babelfish alone. 
As I laid out, just the ability for\na third party to add additional messages for special connection pool\nsupport would be enough to make it useful. There also have been discussions\nin the JDBC subproject to combine certain messages into one single message.\nWhy not allow the JDBC project to develop their own, JDBC-optimized backend\nside? Last but not least, what would be wrong with listening for MariaDB\nclients?\n\nI am planning on a follow up project to this, demoting libpq itself to just\nanother loadable protocol. Just the way procedural languages are all on the\nsame level because that is how I developed the loadable, procedural\nlanguage handler all those years ago.\n\nConsidering how spread out and quite frankly unorganized our wire protocol\nhandling is, this is not a small order.\n\n\nRegards, Jan\n\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\n\n-- \nJan Wieck", "msg_date": "Wed, 10 Feb 2021 16:32:21 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Wed, Feb 10, 2021 at 2:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That is a spot-on definition of where I do NOT want to end up. Hooks\neverywhere and enormous extensions that break anytime we change anything\nin the core. It's not really clear that anybody is going to find that\nmore maintainable than a straight fork, except to the extent that it\nenables the erstwhile forkers to shove some of their work onto the PG\ncommunity.\n\n+1.\n\nMaking the lexer and parser extensible seems desirable to me. It would\nbe beneficial not only for companies like EDB and Amazon that might\nwant to extend the grammar in various ways, but also for extension\nauthors. However, it's vastly harder than Jan's proposal to make the\nwire protocol pluggable. The wire protocol is pretty well-isolated\nfrom the rest of the system. As long as you can get queries out of the\npackets the client sends and package up the results to send back, it's\nall good. The parser, on the other hand, is not at all well-isolated\nfrom the rest of the system. There's a LOT of code that knows a whole\nlot of stuff about the structure of parse trees, so your variant\nparser can't produce parse trees for new kinds of DDL, or for new\nquery constructs. 
And if it parsed some completely different syntax\nwhere, say, joins were not explicit, it would still have to figure out\nhow to represent them in a way that looked just like it came out of\nthe regular parser -- otherwise, parse analysis and query planning and\nso forth are not going to work, unless you go and change a lot of\nother code too, and I don't really have any idea how we could solve\nthat, even in theory. But that kind of thing just isn't a problem for\nthe proposal on this thread.\n\nThat being said, I'm not in favor of transferring maintenance work to\nthe community for this set of hooks any more than I am for something\non the parsing side. In general, I'm in favor of as much extensibility\nas we can reasonably create, but with a complicated proposal like this\none, the community should expect to be able to get something out of\nit. And so far what I hear Jan saying is that these hooks could in\ntheory be used for things other than Amazon's proprietary efforts and\nthose things could in theory bring benefits to the community, but\nthere are no actual plans to do anything with this that would benefit\nanyone other than Amazon. Which seems to bring us right back to\nexpecting the community to maintain things for the benefit of\nthird-party forks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Feb 2021 09:28:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Thu, Feb 11, 2021 at 9:28 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> That being said, I'm not in favor of transferring maintenance work to\n> the community for this set of hooks any more than I am for something\n> on the parsing side. In general, I'm in favor of as much extensibility\n> as we can reasonably create, but with a complicated proposal like this\n> one, the community should expect to be able to get something out of\n> it. 
And so far what I hear Jan saying is that these hooks could in\n> theory be used for things other than Amazon's proprietary efforts and\n> those things could in theory bring benefits to the community, but\n> there are no actual plans to do anything with this that would benefit\n> anyone other than Amazon. Which seems to bring us right back to\n> expecting the community to maintain things for the benefit of\n> third-party forks.\n>\n\nI'm quite sure I said I'd open source my MySQL implementation, which allows\nPostgres to appear to MySQL clients as a MySQL/MariaDB server. This is\nneither proprietary nor Amazon-related and makes Postgres substantially\nmore useful for a large number of applications.\n\nAs Jan said in his last email, they're not proposing all the different\naspects needed. In fact, nothing has actually been proposed yet. This is an\nentirely philosophical debate. I don't even know what's being proposed at\nthis point - I just know it *could* be useful. Let's just wait and see what\nis actually proposed before shooting it down, yes?\n\n-- \nJonah H. Harris\n", "msg_date": "Thu, 11 Feb 2021 09:42:02 -0500", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Thu, Feb 11, 2021 at 9:42 AM Jonah H. Harris <jonah.harris@gmail.com> wrote:\n> I'm quite sure I said I'd open source my MySQL implementation, which allows Postgres to appear to MySQL clients as a MySQL/MariaDB server. This is neither proprietary nor Amazon-related and makes Postgres substantially more useful for a large number of applications.\n\nOK. There's stuff to think about there, too: do we want that in\ncontrib? Is it in good enough shape to be in contrib even if we did?\nIf it's not in contrib, how do we incorporate it into, say, the\nbuildfarm, so that we know if we break something? Is it actively\nmaintained and stable, so that if it needs adjustment for upstream\nchanges we can count on that getting addressed in a timely fashion? I\ndon't know the answers to these questions and am not trying to\nprejudge, but I think they are important and relevant questions.\n\n> As Jan said in his last email, they're not proposing all the different aspects needed. In fact, nothing has actually been proposed yet. 
This is an entirely philosophical debate. I don't even know what's being proposed at this point - I just know it *could* be useful. Let's just wait and see what is actually proposed before shooting it down, yes?\n\nI don't think I'm trying to shoot anything down, because as I said, I\nlike extensibility and am generally in favor of it. Rather, I'm\nexpressing a concern which seems to me to be justified, based on what\nwas posted. I'm sorry that my tone seems to have aggravated you, but\nit wasn't intended to do so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Feb 2021 09:55:44 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Feb 11, 2021 at 9:42 AM Jonah H. Harris <jonah.harris@gmail.com> wrote:\n>> As Jan said in his last email, they're not proposing all the different\n>> aspects needed. In fact, nothing has actually been proposed yet. This\n>> is an entirely philosophical debate. I don't even know what's being\n>> proposed at this point - I just know it *could* be useful. Let's just\n>> wait and see what is actually proposed before shooting it down, yes?\n\n> I don't think I'm trying to shoot anything down, because as I said, I\n> like extensibility and am generally in favor of it. Rather, I'm\n> expressing a concern which seems to me to be justified, based on what\n> was posted. I'm sorry that my tone seems to have aggravated you, but\n> it wasn't intended to do so.\n\nLikewise, the point I was trying to make is that a \"pluggable wire\nprotocol\" is only a tiny part of what would be needed to have a credible\nMySQL, Oracle, or whatever clone. 
There are large semantic differences\nfrom those products; there are maintenance issues arising from the fact\nthat we whack structures like parse trees around all the time; and so on.\nMaybe there is some useful thing that can be accomplished here, but we\nneed to consider the bigger picture rather than believing (without proof)\nthat a few hook variables will be enough to do anything.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Feb 2021 10:06:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "\nOn 2/11/21 10:06 AM, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Thu, Feb 11, 2021 at 9:42 AM Jonah H. Harris <jonah.harris@gmail.com> wrote:\n>>> As Jan said in his last email, they're not proposing all the different\n>>> aspects needed. In fact, nothing has actually been proposed yet. This\n>>> is an entirely philosophical debate. I don't even know what's being\n>>> proposed at this point - I just know it *could* be useful. Let's just\n>>> wait and see what is actually proposed before shooting it down, yes?\n>> I don't think I'm trying to shoot anything down, because as I said, I\n>> like extensibility and am generally in favor of it. Rather, I'm\n>> expressing a concern which seems to me to be justified, based on what\n>> was posted. I'm sorry that my tone seems to have aggravated you, but\n>> it wasn't intended to do so.\n> Likewise, the point I was trying to make is that a \"pluggable wire\n> protocol\" is only a tiny part of what would be needed to have a credible\n> MySQL, Oracle, or whatever clone. 
There are large semantic differences\n> from those products; there are maintenance issues arising from the fact\n> that we whack structures like parse trees around all the time; and so on.\n> Maybe there is some useful thing that can be accomplished here, but we\n> need to consider the bigger picture rather than believing (without proof)\n> that a few hook variables will be enough to do anything.\n\n\n\nYeah. I think we'd need a fairly fully worked implementation to see\nwhere it goes. Is Amazon going to release (under TPL) its TDS\nimplementation of this? That might go a long way to convincing me this\nis worth considering.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 11 Feb 2021 10:28:51 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Thu, Feb 11, 2021 at 10:29 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2/11/21 10:06 AM, Tom Lane wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> >> On Thu, Feb 11, 2021 at 9:42 AM Jonah H. Harris <jonah.harris@gmail.com>\n> wrote:\n> >>> As Jan said in his last email, they're not proposing all the different\n> >>> aspects needed. In fact, nothing has actually been proposed yet. This\n> >>> is an entirely philosophical debate. I don't even know what's being\n> >>> proposed at this point - I just know it *could* be useful. Let's just\n> >>> wait and see what is actually proposed before shooting it down, yes?\n> >> I don't think I'm trying to shoot anything down, because as I said, I\n> >> like extensibility and am generally in favor of it. Rather, I'm\n> >> expressing a concern which seems to me to be justified, based on what\n> >> was posted. 
I'm sorry that my tone seems to have aggravated you, but\n> >> it wasn't intended to do so.\n> > Likewise, the point I was trying to make is that a \"pluggable wire\n> > protocol\" is only a tiny part of what would be needed to have a credible\n> > MySQL, Oracle, or whatever clone. There are large semantic differences\n> > from those products; there are maintenance issues arising from the fact\n> > that we whack structures like parse trees around all the time; and so on.\n> > Maybe there is some useful thing that can be accomplished here, but we\n> > need to consider the bigger picture rather than believing (without proof)\n> > that a few hook variables will be enough to do anything.\n>\n>\n>\n> Yeah. I think we'd need a fairly fully worked implementation to see\n> where it goes. Is Amazon going to release (under TPL) its TDS\n> implementation of this? That might go a long way to convincing me this\n> is worth considering.\n>\n> Everything is planned to be released under the Apache 2.0 license so\npeople are free to do with it as they choose.\n", "msg_date": "Thu, 11 Feb 2021 10:47:21 -0500", "msg_from": "Jim Mlodgenski <jimmy76@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Wed, Feb 10, 2021 at 11:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Jonah H. Harris\" <jonah.harris@gmail.com> writes:\n> > On Wed, Feb 10, 2021 at 1:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> ... If we start having\n> >> modes for MySQL identifier quoting, Oracle outer join syntax, yadda\n> >> yadda, it's going to be way more of a maintenance nightmare than some\n> >> hook functions. So if we accept any patch along this line, I want to\n> >> drive a hard stake in the ground that the answer to that sort of thing\n> >> will be NO.\n>\n> > Actually, a substantial amount can be done with hooks. 
For Oracle, which\n> is\n> substantially harder than MySQL, I have a completely separate parser that\n> generates a PG-compatible parse tree packaged up as an extension. To\n> handle\n> autonomous transactions, database links, hierarchical query conversion,\n> hints, and some execution-related items requires core changes.\n>\n> That is a spot-on definition of where I do NOT want to end up. Hooks\n> everywhere and enormous extensions that break anytime we change anything\n> in the core. It's not really clear that anybody is going to find that\n> more maintainable than a straight fork, except to the extent that it\n> enables the erstwhile forkers to shove some of their work onto the PG\n> community.\n>\n> My feeling about this is if you want to use Oracle, go use Oracle.\n> Don't ask PG to take on a ton of maintenance issues so you can have\n> a frankenOracle.\n>\n\nPostgreSQL over the last decade spent a considerable amount of time\nallowing it to become extensible outside of core. We are now useful in\nworkloads nobody would have considered in 2004 or 2008.\n\nThe more extensibility we add, the LESS we maintain. It is a lot easier to\nmaintain an API than it is an entire kernel. When I look at all the\ninteresting features coming from the ecosystem, they are all built on the\nhooks that this community worked so hard to create. This idea is an\nextension of that and a result of the community's success.\n\nThe more extensible we make PostgreSQL, the more the hacker community can\ninnovate without damaging the PostgreSQL reputation as a rock solid\ndatabase system.\n\nFeatures like these only enable the entire community to innovate. Is the\nreal issue that the more extensible PostgreSQL is, the more boring it will\nbecome?\n\nJD\n\n\n\n>\n> regards, tom lane\n>\n>\n>\n", "msg_date": "Thu, 11 Feb 2021 10:28:57 -0800", "msg_from": "Joshua Drake <jd@commandprompt.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Thu, Feb 11, 2021 at 12:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, Feb 11, 2021 at 9:42 AM Jonah H. Harris <jonah.harris@gmail.com>\nwrote:\n> >> As Jan said in his last email, they're not proposing all the different\n> >> aspects needed. In fact, nothing has actually been proposed yet. This\n> >> is an entirely philosophical debate. I don't even know what's being\n> >> proposed at this point - I just know it *could* be useful. Let's just\n> >> wait and see what is actually proposed before shooting it down, yes?\n>\n> > I don't think I'm trying to shoot anything down, because as I said, I\n> > like extensibility and am generally in favor of it. Rather, I'm\n> > expressing a concern which seems to me to be justified, based on what\n> > was posted. I'm sorry that my tone seems to have aggravated you, but\n> > it wasn't intended to do so.\n>\n> Likewise, the point I was trying to make is that a \"pluggable wire\n> protocol\" is only a tiny part of what would be needed to have a credible\n> MySQL, Oracle, or whatever clone. 
There are large semantic differences\n> from those products; there are maintenance issues arising from the fact\n> that we whack structures like parse trees around all the time; and so on.\n> Maybe there is some useful thing that can be accomplished here, but we\n> need to consider the bigger picture rather than believing (without proof)\n> that a few hook variables will be enough to do anything.\n>\n\nJust so we don't miss the point: creating a compat protocol to mimic others\n(TDS, MySQL, etc) is just one use case.\n\nThere are other use cases for making the wire protocol extensible. For\ntelemetry, for example, I can use some hooks to propagate context [1] and\nget more detailed tracing information about the negotiation between\nfrontend and backend, making it possible to implement a true query tracing\ntool.\n\nAnother use case is extending the current protocol to, for example, send\nmore information about query execution in the CommandComplete message\ninstead of just the number of affected rows.\n\nAbout the HTTP protocol, I think PG should have it, maybe pure HTTP (no\nREST, just HTTP) because it's the most interoperable. Performance can\nstill be very good with HTTP2, and you have a huge ecosystem of tools and\nproxies (like Envoy) that would do wonders with this. You could safely\nquery a db from a web page (passing through proxies that would do auth,\nTLS, etc). Or maybe a higher-performing gRPC version (which is also HTTP2\nand is amazing), but this makes it a bit more difficult to query from a\nweb page. In either case, context propagation is already built-in, and in\na standard way.\n\nRegards,\n\n[1] https://www.w3.org/TR/trace-context/\n\n-- \n Fabrízio de Royes Mello\n PostgreSQL Developer at OnGres Inc. - https://ongres.com\n", "msg_date": "Fri, 12 Feb 2021 10:44:11 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Thu, 11 Feb 2021 at 09:28, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Feb 10, 2021 at 2:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > That is a spot-on definition of where I do NOT want to end up. Hooks\n> > everywhere and enormous extensions that break anytime we change anything\n> > in the core. It's not really clear that anybody is going to find that\n> > more maintainable than a straight fork, except to the extent that it\n> > enables the erstwhile forkers to shove some of their work onto the PG\n> > community.\n>\n> +1.\n>\n> Making the lexer and parser extensible seems desirable to me. It would\n> be beneficial not only for companies like EDB and Amazon that might\n> want to extend the grammar in various ways, but also for extension\n> authors. However, it's vastly harder than Jan's proposal to make the\n> wire protocol pluggable. The wire protocol is pretty well-isolated\n> from the rest of the system. 
As long as you can get queries out of the\n> packets the client sends and package up the results to send back, it's\n> all good.\n\n\nI would have to disagree that the wire protocol is well-isolated. Sending\nand receiving are not in a single file.\nThe codes are not even named constants, so trying to find a specific one is\ndifficult.\n\nAnything that would clean this up would be a benefit.\n\n\nThat being said, I'm not in favor of transferring maintenance work to\n> the community for this set of hooks any more than I am for something\n> on the parsing side. In general, I'm in favor of as much extensibility\n> as we can reasonably create, but with a complicated proposal like this\n> one, the community should expect to be able to get something out of\n> it. And so far what I hear Jan saying is that these hooks could in\n> theory be used for things other than Amazon's proprietary efforts and\n> those things could in theory bring benefits to the community, but\n> there are no actual plans to do anything with this that would benefit\n> anyone other than Amazon. Which seems to bring us right back to\n> expecting the community to maintain things for the benefit of\n> third-party forks.\n>\n\nIf this proposal brought us the ability to stream results, that would be a\nhuge plus!\n\nDave Cramer\nwww.postgres.rocks\n\n>\n>\n", "msg_date": "Sun, 14 Feb 2021 12:35:48 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "Attached are a first patch and a functioning extension that implements a\ntelnet protocol server.\n\nThe extension needs to be loaded via shared_preload_libraries and\nconfigured for a port number and listen_addresses as follows:\n\nshared_preload_libraries = 'telnet_srv'\n\ntelnet_srv.listen_addresses = '*'\ntelnet_srv.port = 54323\n\nIt is incomplete in that it doesn't address things like the COPY protocol.\nBut it is enough to give a more detailed idea of what this interface will\nlook like and what someone would do to implement their own protocol or\nextend an existing one.\n\nThe overall idea here is to route all functions that communicate with the\nfrontend through function pointers that hang off of MyProcPort. Since we\nare performing socket communication in them, I believe one extra function\npointer indirection is unlikely to have a significant performance impact.\n\nBest Regards, Jan\nOn behalf of Amazon Web Services\n\n\n\n\n\nOn Sun, Feb 14, 2021 at 12:36 PM Dave Cramer <davecramer@postgres.rocks>\nwrote:\n\n>\n>\n> On Thu, 11 Feb 2021 at 09:28, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Wed, Feb 10, 2021 at 2:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> > That is a spot-on definition of where I do NOT want to end up. Hooks\n>> > everywhere and enormous extensions that break anytime we change anything\n>> > in the core. It's not really clear that anybody is going to find that\n>> > more maintainable than a straight fork, except to the extent that it\n>> > enables the erstwhile forkers to shove some of their work onto the PG\n>> > community.\n>>\n>> +1.\n>>\n>> Making the lexer and parser extensible seems desirable to me. 
It would\n>> be beneficial not only for companies like EDB and Amazon that might\n>> want to extend the grammar in various ways, but also for extension\n>> authors. However, it's vastly harder than Jan's proposal to make the\n>> wire protocol pluggable. The wire protocol is pretty well-isolated\n>> from the rest of the system. As long as you can get queries out of the\n>> packets the client sends and package up the results to send back, it's\n>> all good.\n>\n>\n> I would have to disagree that the wire protocol is well-isolated. Sending\n> and receiving are not in a single file\n> The codes are not even named constants so trying to find a specific one is\n> difficult.\n>\n> Anything that would clean this up would be a benefit\n>\n>\n> That being said, I'm not in favor of transferring maintenance work to\n>> the community for this set of hooks any more than I am for something\n>> on the parsing side. In general, I'm in favor of as much extensibility\n>> as we can reasonably create, but with a complicated proposal like this\n>> one, the community should expect to be able to get something out of\n>> it. And so far what I hear Jan saying is that these hooks could in\n>> theory be used for things other than Amazon's proprietary efforts and\n>> those things could in theory bring benefits to the community, but\n>> there are no actual plans to do anything with this that would benefit\n>> anyone other than Amazon. 
Which seems to bring us right back to\n>> expecting the community to maintain things for the benefit of\n>> third-party forks.\n>>\n>\n> if this proposal brought us the ability stream results that would be a\n> huge plus!\n>\n> Dave Cramer\n> www.postgres.rocks\n>\n>>\n>>\n\n-- \nJan Wieck", "msg_date": "Thu, 18 Feb 2021 11:01:38 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Thu, Feb 18, 2021 at 9:32 PM Jan Wieck <jan@wi3ck.info> wrote:\n>\nAnd, here is how it looks with the following configuration:\ntelnet_srv.port = 1433\ntelnet_srv.listen_addresses = '*'\n\ntelnet localhost 1433\n\n\n   master\nTrying 127.0.0.1...\nConnected to localhost.\nEscape character is '^]'.\nPostgreSQL Telnet Interface\n\ndatabase name: postgres\nusername: kuntal\npassword: changeme\n> select 1;\n?column?\n----\n1\n\nSELECT 1\n> select 1/0;\nMessage: ERROR - division by zero\n\nFew comments in the extension code (although experimental):\n\n1. In telnet_srv.c,\n+ static int pe_port;\n..\n+ DefineCustomIntVariable(\"telnet_srv.port\",\n+ \"Telnet server port.\",\n+ NULL,\n+ &pe_port,\n+ pe_port,\n+ 1024,\n+ 65536,\n+ PGC_POSTMASTER,\n+ 0,\n+ NULL,\n+ NULL,\n+ NULL);\n\nThe variable pe_port should be initialized to a value which is > 1024\nand < 65536. Otherwise, the following assert will fail,\nTRAP: FailedAssertion(\"newval >= conf->min\", File: \"guc.c\", Line:\n5541, PID: 12100)\n\n2. 
The function pq_putbytes shouldn't be used by anyone other than\nold-style COPY out.\n+ pq_putbytes(msg, strlen(msg));\nOtherwise, the following assert will fail in the same function:\n /* Should only be called by old-style COPY OUT */\n Assert(DoingCopyOut);\n\n-- \nThanks & Regards,\nKuntal Ghosh\nAmazon Web Services\n\n\n", "msg_date": "Fri, 19 Feb 2021 15:06:41 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "\n> On 11 Feb 2021, at 16:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Maybe there is some useful thing that can be accomplished here, but we\n> need to consider the bigger picture rather than believing (without proof)\n> that a few hook variables will be enough to do anything.\n> \n> \t\t\tregards, tom lane\n> \n\nPluggable wire protocol is a game-changer on its own. \n\nThe bigger picture is that a right protocol choice enables large-scale architectural simplifications for whole classes of production applications.\n\nFor browser-based applications (lob, saas, e-commerce), having the database server speak the browser protocol enables architectures without backend application code. This in turn leads to significant reductions of latency, complexity, and application development time. And it’s not just lack of backend code: one also profits from all the existing infrastructure like per-query compression/format choice, browser connection management, sse, multiple streams, prioritization, caching/cdns, etc.\n\nDon’t know if you’d consider it as a proof, yet I am seeing 2x to 4x latency reduction in production applications from protocol conversion to http/2. 
My present solution is a simple connection pooler I built on top of Nginx transforming the tcp stream as it passes through.\n\nIn a recent case, letting the browser talk directly to the database allowed me to get rid of a ~100k-sloc .net backend and all the complexity and infrastructure that goes with coding/testing/deploying/maintaining it, while keeping all the positives: per-query compression/data conversion, querying multiple databases over a single connection, session cookies, etc. Deployment is trivial compared to what was before. Latency is down 2x-4x across the board.\n\nHaving some production experience with this approach, I can see how http/2-speaking Postgres would further reduce latency, processing cost, and time-to-interaction for applications.\n\nA similar case can be made for IoT where one would want to plug an iot-optimized protocol. Again, most of the benefit is possible with a protocol-converting proxy, but there are additional non-trivial performance gains to be had if the database server speaks the right protocol.\n\nWhile not the only use cases, I’d venture a guess these represent a sizable chunk of what Postgres is used for today, and will be used even more for, so the positive impact of a pluggable protocol would be significant.\n\n--\nDamir\n\n", "msg_date": "Fri, 19 Feb 2021 13:29:57 +0100", "msg_from": "Damir Simunic <damir.simunic@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "Thank you Kuntal,\n\nOn Fri, Feb 19, 2021 at 4:36 AM Kuntal Ghosh <kuntalghosh.2007@gmail.com>\nwrote:\n\n> On Thu, Feb 18, 2021 at 9:32 PM Jan Wieck <jan@wi3ck.info> wrote:\n>\n>\n> Few comments in the extension code (although experimental):\n>\n> 1. 
In telnet_srv.c,\n> + static int pe_port;\n> ..\n> + DefineCustomIntVariable(\"telnet_srv.port\",\n> + \"Telnet server\n> port.\",\n> + NULL,\n> + &pe_port,\n> + pe_port,\n> + 1024,\n> + 65536,\n> + PGC_POSTMASTER,\n> + 0,\n> + NULL,\n> + NULL,\n> + NULL);\n>\n> The variable pe_port should be initialized to a value which is > 1024\n> and < 65536. Otherwise, the following assert will fail,\n> TRAP: FailedAssertion(\"newval >= conf->min\", File: \"guc.c\", Line:\n> 5541, PID: 12100)\n>\n>\nRight, forgot to turn on Asserts.\n\n\n> 2. The function pq_putbytes shouldn't be used by anyone other than\n> old-style COPY out.\n> + pq_putbytes(msg, strlen(msg));\n> Otherwise, the following assert will fail in the same function:\n> /* Should only be called by old-style COPY OUT */\n> Assert(DoingCopyOut);\n>\n\nI would argue that the Assert needs to be changed. It is obvious that the\nAssert in place is meant to guard against direct usage of pg_putbytes() in\nan attempt to force all code to use pq_putmessage() instead. This is good\nwhen speaking libpq wire protocol since all messages there are prefixed\nwith a one byte message type. It does not apply to other protocols.\n\nI propose to create another global boolean IsNonLibpqFrontend which the\nprotocol extension will set to true when accepting the connection and the\nabove then will change to\n\n Assert(DoingCopyOut || IsNonLibpqFrontend);\n\n\nRegards, Jan\n\n\n\n>\n> --\n> Thanks & Regards,\n> Kuntal Ghosh\n> Amazon Web Services\n>\n\n\n-- \nJan Wieck", "msg_date": "Fri, 19 Feb 2021 08:46:10 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On 19/02/2021 14:29, Damir Simunic wrote:\n> \n>> On 11 Feb 2021, at 16:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Maybe there is some useful thing that can be accomplished here, but\n>> we need to consider the bigger picture rather than believing\n>> (without proof) that a few hook variables will be enough to do\n>> anything.\n> \n> Pluggable wire protocol is a game-changer on its own.\n> \n> The bigger picture is that a right protocol choice enables\n> large-scale architectural simplifications for whole classes of\n> production applications.\n> \n> For browser-based applications (lob, saas, e-commerce), having the\n> database server speak the browser protocol enables architectures\n> without backend application code. This in turn leads to significant\n> reductions of latency, complexity, and application development time.\n> And it’s not just lack of backend code: one also profits from all the\n> existing infrastructure like per-query compression/format choice,\n> browser connection management, sse, multiple streams, prioritization,\n> caching/cdns, etc.\n> \n> Don’t know if you’d consider it as a proof, yet I am seeing 2x to 4x\n> latency reduction in production applications from protocol conversion\n> to http/2. My present solution is a simple connection pooler I built\n> on top of Nginx transforming the tcp stream as it passes through.\n\nI can see value in supporting different protocols. 
I don't like the \napproach discussed in this thread, however.\n\nFor example, there has been discussion elsewhere about integrating \nconnection pooling into the server itself. For that, you want to have a \ncustom process that listens for incoming connections, and launches \nbackends independently of the incoming connections. These hooks would \nnot help with that.\n\nSimilarly, if you want to integrate a web server into the database \nserver, you probably also want some kind of connection pooling. A \none-to-one relationship between HTTP connections and backend processes \ndoesn't seem nice.\n\nWith the hooks that exist today, would it be possible to write a background \nworker that listens on a port, instead of postmaster? Can you launch \nbackends from a background worker? And communicate with the backend processes \nusing a shared memory message queue (see pqmq.c)?\n\nI would recommend this approach: write a separate program that sits \nbetween the client and PostgreSQL, speaking custom protocol to the \nclient, and libpq to the backend. And then move that program into a \nbackground worker process.\n\n> In a recent case, letting the browser talk directly to the database\n> allowed me to get rid of a ~100k-sloc .net backend and all the\n> complexity and infrastructure that goes with\n> coding/testing/deploying/maintaining it, while keeping all the\n> positives: per-query compression/data conversion, querying multiple\n> databases over a single connection, session cookies, etc. Deployment\n> is trivial compared to what was before. Latency is down 2x-4x across\n> the board.\n\nQuerying multiple databases over a single connection is not possible \nwith the approach taken here. 
Not sure about the other things you listed.\n\n- Heikki\n\n\n", "msg_date": "Fri, 19 Feb 2021 15:48:35 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Fri, Feb 19, 2021 at 8:48 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> With the hooks that exist today, would it be possible to write a background\n> worker that listens on a port, instead of postmaster? Can you launch\n> backends from a background worker? And communicate with the backend processes\n> using a shared memory message queue (see pqmq.c)?\n>\n\nYes. That's similar to how mine work: A background worker that acts as a\nlistener for the new protocol which then sets up a new dynamic background\nworker on accept(), waits for its creation, passes the fd to the new\nbackground worker, and sits in a while (!got_sigterm) loop checking the\nsocket for activity and running the protocol similar to postmaster. I\nhaven't looked at the latest connection pooling patches but, in general,\nconnection pooling is an abstract issue and should be usable for any type\nof connection as, realistically, it's just an event loop and state problem\n- it shouldn't be protocol specific.\n\nI would recommend this approach: write a separate program that sits\nbetween the client and PostgreSQL, speaking custom protocol to the\nclient, and libpq to the backend. And then move that program into a\nbackground worker process.\n\nDoing protocol conversion between libpq and a different protocol works, but\nis slow. 
My implementations were originally all proxies that worked outside\nthe database, then I moved them inside, then I replaced all the libpq code\nwith SPI-related calls.\n\n\n> > In a recent case, letting the browser talk directly to the database\n> > allowed me to get rid of a ~100k-sloc .net backend and all the\n> > complexity and infrastructure that goes with\n> > coding/testing/deploying/maintaining it, while keeping all the\n> > positives: per-query compression/data conversion, querying multiple\n> > databases over a single connection, session cookies, etc. Deployment\n> > is trivial compared to what was before. Latency is down 2x-4x across\n> > the board.\n>\n> Querying multiple databases over a single connection is not possible\n> with the approach taken here. Not sure about the others things you listed.\n>\n\nAccessing multiple databases from the same backend is problematic overall -\nI didn't solve that in my implementations either. IIRC, once a bgworker is\nattached to a specific database, it's basically stuck with that database.\n\n-- \nJonah H. Harris", "msg_date": "Fri, 19 Feb 2021 09:37:26 -0500", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On 2/19/21 8:48 AM, Heikki Linnakangas wrote:\n> I can see value in supporting different protocols. 
I don't like the\n> approach discussed in this thread, however.\n> \n> For example, there has been discussion elsewhere about integrating\n> connection pooling into the server itself. For that, you want to have a\n> custom process that listens for incoming connections, and launches\n> backends independently of the incoming connections. These hooks would\n> not help with that.\n\nThe two are not mutually exclusive. You are right that the current \nproposal would not help with that type of built in connection pool, but \nit may be extended to that.\n\nGive the function, that postmaster is calling to accept a connection \nwhen a server_fd is ready, a return code that it can use to tell \npostmaster \"forget about it, don't fork or do anything else with it\". \nThis function is normally calling StreamConnection() before the \npostmaster then forks the backend. But it could instead hand over the \nsocket to the pool background worker (I presume Jonah is transferring \nthem from process to process via UDP packet). The pool worker is then \nlaunching the actual backends which receive a requesting client via the \nsame socket transfer to perform one or more transactions, then hand the \nsocket back to the pool worker.\n\nAll of that would still require a protocol extension that has special \nmessages for \"here is a client socket for you\" and \"you can have that \nback\".\n\n\n> I would recommend this approach: write a separate program that sits\n> between the client and PostgreSQL, speaking custom protocol to the\n> client, and libpq to the backend. And then move that program into a\n> background worker process.\n\nThat is a classic protocol converting proxy. 
It has been done in the \npast with not really good results, both performance-wise and with respect \nto protocol completeness.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n", "msg_date": "Fri, 19 Feb 2021 10:13:36 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "\n> On 19 Feb 2021, at 14:48, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> \n> For example, there has been discussion elsewhere about integrating connection pooling into the server itself. For that, you want to have a custom process that listens for incoming connections, and launches backends independently of the incoming connections. These hooks would not help with that.\n> \n\nNot clear how the connection pooling in the core is linked to discussing pluggable wire protocols. \n\n> Similarly, if you want to integrate a web server into the database server, you probably also want some kind of connection pooling. A one-to-one relationship between HTTP connections and backend processes doesn't seem nice.\n> \n\nHTTP/2 is just a protocol, not unlike fe/be that has a one-to-one relationship to backend processes as it stands. It shuttles data back and forth in query/response exchanges, and happens to be used by web servers and web browsers, among other things. My mentioning of it was simply an example I can speak of from experience, as opposed to speculating. Could have brought up any other wire protocol if I had experience with it, say MQTT.\n\nTo make it clear, “a pluggable wire protocol” as discussed here is a set of rules that defines how data is transmitted: what the requests and responses are, how the data is laid out on the wire, what to do in case of error, etc. 
Nothing to do with a web server; why would one want to integrate it in the database, anyway?\n\nThe intended contribution to the discussion of big picture of pluggable wire protocols is that there are significant use cases where the protocol choice is restricted on the client side, and allowing a pluggable wire protocol on the server side brings tangible benefits in performance and architectural simplification. That’s all. The rest were supporting facts that hopefully can also serve as a counterpoint to “pluggable wire protocol is primarily useful to make Postgres pretend to be Mysql.\"\n\nProtocol conversion HTTP/2<—>FE/BE on the connection pooler already brings a lot of the mentioned benefits, and I’m satisfied with it. Beyond that I’m simply supporting the idea of pluggable protocols as experience so far allows me to see advantages that might sound theoretical to someone who never tried this scenario in production.\n\nGlad to offer a couple of examples where I see potential for performance gains for having such a wire protocol pluggable in the core. Let me know if you want me to elaborate.\n\n> Querying multiple databases over a single connection is not possible with the approach taken here. \n\nIndeed, querying multiple databases over a single connection is something you need a proxy for and a different client protocol from fe/be. No need to mix that with the talk about pluggable wire protocol. 
\n\nMy mentioning of it was in the sense “a lot of LoB backend code is nothing more than a bloated protocol converter that happens to also allow connecting to multiple databases from a single client connection => letting the client speak to the database [through a proxy in this case] removed the bloated source of latency but kept the advantages.”\n\n--\nDamir\n\n\n\n", "msg_date": "Fri, 19 Feb 2021 18:18:48 +0100", "msg_from": "Damir Simunic <damir.simunic@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On 2/19/21 12:18 PM, Damir Simunic wrote:\n> \n>> On 19 Feb 2021, at 14:48, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> \n>> For example, there has been discussion elsewhere about integrating connection pooling into the server itself. For that, you want to have a custom process that listens for incoming connections, and launches backends independently of the incoming connections. These hooks would not help with that.\n>> \n> \n> Not clear how the connection pooling in the core is linked to discussing pluggable wire protocols.\n\nIt isn't per se. But there are things pluggable wire protocols can help \nwith in regards to connection pooling. For example a connection pool \nlike pgbouncer can be configured to switch client-backend association on \na transaction level. It therefore scans the traffic for the in-transaction \nstate. This however only works if an application uses \nidentical session states across all connections in a pool. The JDBC \ndriver for example only really prepares PreparedStatements after a \nnumber of executions and then assigns a name based on a counter to them. \nSo it is neither guaranteed that a certain backend has the same \nstatements prepared, nor that they are named the same. Therefore JDBC 
Therefore JDBC \nbased applications cannot use PreparedStatements through pgbouncer in \ntransaction mode.\n\nAn \"extended\" libpq protocol could allow the pool to give clients a \nunique ID. The protocol handler would then maintain maps with the SQL of \nprepared statements and what the client thinks their prepared statement \nname is. So when a client sends a P packet, the protocol handler would \nlookup the mapping and see if it already has that statement prepared. \nJust add the mapping info or actually create a new statement entry in \nthe maps. These maps are of course shared across backends. So if then \nanother client sends bind+execute and the backend doesn't have a plan \nfor that query, it would internally create one.\n\nThere are security implications here, so things like the search path \nmight have to be part of the maps, but those are implementation details.\n\nAt the end this would allow a project like pgbouncer to create an \nextended version of libpq protocol that caters to the very special needs \nof that pool.\n\nMost of that would of course be possible on the pool side itself. But \nthe internal structure of pgbouncer isn't suitable for that. It is very \nlightweight and for long SQL queries may never have the complete 'P' \nmessage in memory. It would also not have direct access to security \nrelated information like the search path, which would require extra \nround trips between the pool and the backend to retrieve it.\n\nSo while not suitable to create a built in pool by itself, loadable wire \nprotocols can definitely help with connection pooling.\n\nI also am not sure if building a connection pool into a background \nworker or postmaster is a good idea to begin with. One of the important \nfeatures of a pool is to be able to suspend traffic and make the server \ncompletely idle to for example be able to restart the postmaster without \nforcibly disconnecting all clients. 
A pool built into a background \nworker cannot do that.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n", "msg_date": "Fri, 19 Feb 2021 13:30:54 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "\n> On 19 Feb 2021, at 19:30, Jan Wieck <jan@wi3ck.info> wrote:\n> \n> An \"extended\" libpq protocol could allow the pool to give clients a unique ID. The protocol handler would then maintain maps with the SQL of prepared statements and what the client thinks their prepared statement name is. \n\nOr, the connection pooler could support a different wire protocol that has some form of client cookies and could let the client hold on to an opaque token to present back with every query and use that to route to the right backend with a prepared statement for that client (or match the appropriate cached p statement from the cache), even across client disconnections.\n\n> Most of that would of course be possible on the pool side itself. But the internal structure of pgbouncer isn't suitable for that. It is very lightweight and for long SQL queries may never have the complete 'P' message in memory. It would also not have direct access to security related information like the search path, which would require extra round trips between the pool and the backend to retrieve it.\n\n> \n> So while not suitable to create a built in pool by itself, loadable wire protocols can definitely help with connection pooling.\n\nI think loadable wire protocols will have a positive effect on developing more sophisticated connection poolers.\n\n> I also am not sure if building a connection pool into a background worker or postmaster is a good idea to begin with. 
One of the important features of a pool is to be able to suspend traffic and make the server completely idle to for example be able to restart the postmaster without forcibly disconnecting all clients.\n\nAgreed. Going even further, a connection pooler supporting a protocol like quic (where the notion of connection is decoupled from the actual socket connection) could help a lot with balancing load between servers and data centers, which also would not be convenient for the actual Postgres to do with present architecture. (And here, too, a pluggable wire protocol would help with keeping tabs on individual backends).\n\n--\nDamir\n\n", "msg_date": "Fri, 19 Feb 2021 20:06:52 +0100", "msg_from": "Damir Simunic <damir.simunic@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "\n\nOn 19/2/21 14:48, Heikki Linnakangas wrote:\n> [...]\n>\n> I can see value in supporting different protocols. I don't like the\n> approach discussed in this thread, however.\n>\n> For example, there has been discussion elsewhere about integrating\n> connection pooling into the server itself. For that, you want to have\n> a custom process that listens for incoming connections, and launches\n> backends independently of the incoming connections. These hooks would\n> not help with that.\n>\n> Similarly, if you want to integrate a web server into the database\n> server, you probably also want some kind of connection pooling. A\n> one-to-one relationship between HTTP connections and backend processes\n> doesn't seem nice.\n\n    While I'm far from an HTTP/2 expert and I may be wrong, from all I\nknow HTTP/2 allows to create full-duplex, multiplexed, asynchronous\nchannels. So multiple connections can be funneled through a single\nconnection. This decreases the need and aim for connection pooling. 
It\ndoesn't eliminate it completely, as you may have the channel busy if a\nsingle tenant is sending a lot of data; and you could not have more than\none concurrent action from a single tenant. OTOH, given these HTTP/2\nproperties, a non-pooled HTTP/2 endpoint may provide already significant\nbenefits due to the multiplexing capabilities.\n\n    So definitely we don't need to consider an HTTP endpoint would\nentail a 1:1 mapping between connections and backend processes.\n\n\n    Álvaro\n\n-- \n\nAlvaro Hernandez\n\n\n-----------\nOnGres\n\n\n\n\n", "msg_date": "Fri, 19 Feb 2021 21:32:30 +0100", "msg_from": "=?UTF-8?B?w4FsdmFybyBIZXJuw6FuZGV6?= <aht@ongres.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "\n\nOn 19/2/21 19:30, Jan Wieck wrote:\n> [...]\n>\n> I also am not sure if building a connection pool into a background\n> worker or postmaster is a good idea to begin with. One of the\n> important features of a pool is to be able to suspend traffic and make\n> the server completely idle to for example be able to restart the\n> postmaster without forcibly disconnecting all clients. A pool built\n> into a background worker cannot do that.\n>\n>\n\n    In my opinion, there are different reasons to use a connection pool,\nthat lead to different placements of that connection pool on the\narchitecture of the system. 
The ability of a pool to suspend (pause)\ntraffic and apply live re-configurations is a very important one to\nimplement high availability practices, transparent scaling, and others.\nBut these poolers belong to middleware layers (as in different processes\nin different servers), where these pausing operations make complete sense.\n\n    Connection poolers fronting the database have other specific\nmissions, namely to control the fan-in of connections to the database.\nThese connection poolers make sense being as close to the database as\npossible (ideally: embedded) but don't need to perform pause operations\nhere.\n\n\n    Álvaro\n\n\n-- \n\nAlvaro Hernandez\n\n\n-----------\nOnGres\n\n\n\n\n", "msg_date": "Fri, 19 Feb 2021 21:39:36 +0100", "msg_from": "=?UTF-8?B?w4FsdmFybyBIZXJuw6FuZGV6?= <aht@ongres.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Fri, 19 Feb 2021 at 15:39, Álvaro Hernández <aht@ongres.com> wrote:\n\n>\n>\n> On 19/2/21 19:30, Jan Wieck wrote:\n> > [...]\n> >\n> > I also am not sure if building a connection pool into a background\n> > worker or postmaster is a good idea to begin with. One of the\n> > important features of a pool is to be able to suspend traffic and make\n> > the server completely idle to for example be able to restart the\n> > postmaster without forcibly disconnecting all clients. A pool built\n> > into a background worker cannot do that.\n> >\n> >\n>\n\n\n\nYes, when did it become a good idea to put a connection pooler in the\nbackend???\n\nDave Cramer\nwww.postgres.rocks", "msg_date": "Mon, 22 Feb 2021 07:34:51 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On 2/22/21 7:34 AM, Dave Cramer wrote:\n\n> Yes, when did it become a good idea to put a connection pooler in the \n> backend???\n> \n> Dave Cramer\n> www.postgres.rocks\n\nAs Alvaro said, different pool purposes lead to different pool \narchitecture and placement.\n\nHowever, the changes proposed here, aiming at the ability to load \nmodified or entirely different wire protocol handlers, do not limit such \nconnection pooling. To the contrary.\n\nAny connection pool, that wants to maintain more client connections than \nactual database backends, must know when it is appropriate to do so. \nUsually the right moment to break the current client-backend association \nis when the backend is outside a transaction block and waiting for the \nnext client request. To do so pools cannot blindly shovel data back and \nforth. They need to scan one way or another for the backend's 'Z' \nmessage, sent in tcop/dest.c ReadyForQuery(), where the backend also \nreports the current transaction state. IOW the pool must follow the flow \nof libpq messages on all connections, message by message, row by row, \njust for the purpose of seeing that one, single bit. 
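(For illustration only; the following sketch is not from any patch in this thread. The framing such a scan relies on is small: every backend-to-frontend message is a 1-byte type followed by a 4-byte big-endian length that counts itself plus the payload, and ReadyForQuery is type 'Z' whose single payload byte carries the transaction status.)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Return the transaction-status byte ('I' = idle, 'T' = in transaction,
 * 'E' = failed transaction) of the last complete ReadyForQuery ('Z')
 * message in buf, or 0 if no complete 'Z' message has been seen yet.
 *
 * Backend messages are framed as: 1-byte type, then a 4-byte big-endian
 * length that includes itself (but not the type byte), then the payload.
 */
static char
last_txn_status(const uint8_t *buf, size_t len)
{
	char		status = 0;
	size_t		pos = 0;

	while (pos + 5 <= len)
	{
		uint8_t		type = buf[pos];
		uint32_t	mlen = ((uint32_t) buf[pos + 1] << 24) |
						   ((uint32_t) buf[pos + 2] << 16) |
						   ((uint32_t) buf[pos + 3] << 8) |
							(uint32_t) buf[pos + 4];

		if (mlen < 4 || pos + 1 + mlen > len)
			break;				/* malformed or incomplete message */

		if (type == 'Z' && mlen == 5)
			status = (char) buf[pos + 5];	/* transaction status byte */

		pos += 1 + mlen;		/* skip to the next message header */
	}
	return status;
}
```

A transaction-mode pool releases the backend when this yields 'I'; doing that per-byte bookkeeping on every connection is exactly the overhead that a protocol handler running inside the server could avoid.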
It is possible to \ntransmit that information to the pool on a separate channel.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n", "msg_date": "Mon, 22 Feb 2021 09:55:53 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On 2/19/21 4:36 AM, Kuntal Ghosh wrote:\n> On Thu, Feb 18, 2021 at 9:32 PM Jan Wieck <jan@wi3ck.info> wrote:\n\n> Few comments in the extension code (although experimental):\n> \n> 1. In telnet_srv.c,\n> + static int pe_port;\n> ..\n> + DefineCustomIntVariable(\"telnet_srv.port\",\n> + \"Telnet server port.\",\n> + NULL,\n> + &pe_port,\n> + pe_port,\n> + 1024,\n> + 65536,\n> + PGC_POSTMASTER,\n> + 0,\n> + NULL,\n> + NULL,\n> + NULL);\n> \n> The variable pe_port should be initialized to a value which is > 1024\n> and < 65536. Otherwise, the following assert will fail,\n> TRAP: FailedAssertion(\"newval >= conf->min\", File: \"guc.c\", Line:\n> 5541, PID: 12100)\n> \n> 2. 
The function pq_putbytes shouldn't be used by anyone other than\n> old-style COPY out.\n> + pq_putbytes(msg, strlen(msg));\n> Otherwise, the following assert will fail in the same function:\n> /* Should only be called by old-style COPY OUT */\n> Assert(DoingCopyOut);\n> \n\nAttached are an updated patch and telnet_srv addressing the above problems.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services", "msg_date": "Mon, 22 Feb 2021 10:01:41 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Mon, Feb 22, 2021 at 07:34:51AM -0500, Dave Cramer wrote:\n> On Fri, 19 Feb 2021 at 15:39, Álvaro Hernández <aht@ongres.com> wrote:\n> \n> > On 19/2/21 19:30, Jan Wieck wrote:\n> > > [...]\n> > >\n> > > I also am not sure if building a connection pool into a\n> > > background worker or postmaster is a good idea to begin with.\n> > > One of the important features of a pool is to be able to suspend\n> > > traffic and make the server completely idle to for example be\n> > > able to restart the postmaster without forcibly disconnecting\n> > > all clients. A pool built into a background worker cannot do\n> > > that.\n> \n> Yes, when did it become a good idea to put a connection pooler in\n> the backend???\n\nIt became a great idea when we noticed just how large and\nresource-intensive backends were, especially in light of applications'\nbroad tendency to assume that they're free. 
While I agree that that's\nnot a good assumption, it's one that's so common everywhere in\ncomputing that we really need to face up to the fact that it's not\ngoing away any time soon.\n\nDecoupling the parts that serve requests from the parts that execute\nqueries also goes a long way toward things we've wanted for quite\nawhile like admission control systems and/or seamless zero-downtime\nupgrades.\n\nSeparately, as the folks at AWS and elsewhere have mentioned, being\nable to pretend at some level to be a different RDBMS can only happen\nif we respond to its wire protocol.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Mon, 22 Feb 2021 17:19:01 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On 2/10/21 1:10 PM, Tom Lane wrote:\n> What I'm actually more concerned about, in this whole line of development,\n> is the follow-on requests that will surely occur to kluge up Postgres\n> to make its behavior more like $whatever. As in \"well, now that we\n> can serve MySQL clients protocol-wise, can't we pretty please have a\n> mode that makes the parser act more like MySQL\".\n\nThose requests will naturally follow. But I don't see it as the main \nproject's responsibility to satisfy them. It would be rather natural to \ndevelop the two things together. 
The same developer or group of \ndevelopers, who are trying to connect a certain client, will want to \nhave other compatibility features.\n\nAs Jim Mlodgenski just posted in [0], having the ability to also extend \nand/or replace the parser will give them the ability to do just that.\n\n\nRegards, Jan\n\n[0] \nhttps://www.postgresql.org/message-id/CAB_5SReoPJAPO26Z8+WN6ugfBb2UDc3c21rRz9=BziBmCaph5Q@mail.gmail.com\n\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n", "msg_date": "Mon, 22 Feb 2021 11:44:52 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> On 2/10/21 1:10 PM, Tom Lane wrote:\n>> What I'm actually more concerned about, in this whole line of development,\n>> is the follow-on requests that will surely occur to kluge up Postgres\n>> to make its behavior more like $whatever. As in \"well, now that we\n>> can serve MySQL clients protocol-wise, can't we pretty please have a\n>> mode that makes the parser act more like MySQL\".\n\n> Those requests will naturally follow. But I don't see it as the main \n> project's responsibility to satisfy them. It would be rather natural to \n> develop the two things together. The same developer or group of \n> developers, who are trying to connect a certain client, will want to \n> have other compatibility features.\n\n> As Jim Mlodgenski just posted in [0], having the ability to also extend \n> and/or replace the parser will give them the ability to do just that.\n\nYeah, and as I pointed out somewhere upthread, trying to replace the\nwhole parser will just end in a maintenance nightmare. The constructs\nthat the parser has to emit are complex, Postgres-specific, and\nconstantly evolving. 
We are NOT going to promise any sort of cross\nversion compatibility for parse trees.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Feb 2021 13:01:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Mon, Feb 22, 2021 at 1:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jan Wieck <jan@wi3ck.info> writes:\n> > As Jim Mlodgenski just posted in [0], having the ability to also extend\n> > and/or replace the parser will give them the ability to do just that.\n>\n> Yeah, and as I pointed out somewhere upthread, trying to replace the\n> whole parser will just end in a maintenance nightmare. The constructs\n> that the parser has to emit are complex, Postgres-specific, and\n> constantly evolving. We are NOT going to promise any sort of cross\n> version compatibility for parse trees.\n>\n\nWholeheartedly agreed. Core should only ever maintain the hooks, never\ntheir usage. It's the responsibility of the extension author to maintain\ntheir code just as it is to manage their use of all other hook usages. Yes,\nit's sometimes a maintenance nightmare - but with great power comes great\nresponsibility... as is anything loaded directly into the process.\n\n-- \nJonah H. Harris", "msg_date": "Mon, 22 Feb 2021 13:13:32 -0500", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" },
{ "msg_contents": "On 2/22/21 1:01 PM, Tom Lane wrote:\n> Yeah, and as I pointed out somewhere upthread, trying to replace the\n> whole parser will just end in a maintenance nightmare. The constructs\n> that the parser has to emit are complex, Postgres-specific, and\n> constantly evolving. We are NOT going to promise any sort of cross\n> version compatibility for parse trees.\n\nAbsolutely agreed. We cannot promise that the parsetree generated in one \nversion will work with the planner, optimizer and executor of the next. \nThese types of projects will need to pay close attention and more \nimportantly, develop their own regression test suites that detect when \nsomething has changed in core. That said, discussion about the parser \nhook should happen in the other thread.\n\nI don't even expect that we can guarantee that the functions I am trying \nto allow to be redirected for the wire protocol will be stable forever. \nlibpq V4 may need to change some of the call signatures, which has \nhappened before. 
For example, the function to send the command \ncompletion message to the frontend (tcop/dest.c EndCommand()) changed \nfrom 12 to 13.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n", "msg_date": "Mon, 22 Feb 2021 14:00:51 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On 2/19/21 10:13 AM, Jan Wieck wrote:\n\n> Give the function, that postmaster is calling to accept a connection\n> when a server_fd is ready, a return code that it can use to tell\n> postmaster \"forget about it, don't fork or do anything else with it\".\n> This function is normally calling StreamConnection() before the\n> postmaster then forks the backend. But it could instead hand over the\n> socket to the pool background worker (I presume Jonah is transferring\n> them from process to process via UDP packet). The pool worker is then\n> launching the actual backends which receive a requesting client via the\n> same socket transfer to perform one or more transactions, then hand the\n> socket back to the pool worker.\n\nThe function in question, which is StreamConnection() and with this \npatch can be replaced with an extension function via the fn_accept \npointer, already has that capability. If StreamConnection() or its \nreplacement returns a NULL pointer, the postmaster just skips calling \nBackendStartup(). So everything is already in place for the above to work.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n", "msg_date": "Wed, 24 Feb 2021 10:46:54 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "\nI think, the way the abstractions are chosen in this patch, it is still \nvery much tied to how the libpq protocol works. For example, there is a \ncancel key and a ready-for-query message. 
Some of the details of the \nsimple and the extended query are exposed. So you could create a \nprotocol that has a different way of encoding the payloads, as your \ntelnet example does, but I don't believe that you could implement a \ncompetitor's protocol through this. Unless you have that done and want \nto show it?\n\n\n", "msg_date": "Wed, 3 Mar 2021 20:43:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "I have not looked at the actual patch, but does it allow you to set up\nyour own channels to listen on ?\n\nFor example if I'd want to set up a server to listen to incoming connections\nover QUIC [1] - a protocol which creates a connection over UDP and allows\nclients to move to new IP addresses (among other things) then would the\ncurrent extensibility proposal cover this ?\n\nMaybe a correct approach would be to just start up a separate\n\"postmaster\" to listen to a different protocol ?\n\n[1] https://en.wikipedia.org/wiki/QUIC\n\nCheers\nHannu\n\n\n", "msg_date": "Wed, 3 Mar 2021 21:27:45 +0100", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On 3/3/21 2:43 PM, Peter Eisentraut wrote:\n> \n> I think, the way the abstractions are chosen in this patch, it is still\n> very much tied to how the libpq protocol works. For example, there is a\n> cancel key and a ready-for-query message. Some of the details of the\n> simple and the extended query are exposed. So you could create a\n> protocol that has a different way of encoding the payloads, as your\n> telnet example does, but I don't believe that you could implement a\n> competitor's protocol through this. 
Unless you have that done and want\n> to show it?\n> \n\nCorrect, a lot of what this patch does is to allow a developer of such a \nprotocol extension to just \"extend\" what the server side libpq does, by \nbuilding a wrapper around the function they are interested in. That \ndoesn't change the protocol, but rather allows additional functionality \nlike the telemetry data gathering Fabrizio was talking about.\n\nThe telnet_srv tutorial extension (which needs more documentation) is an \nexample of how far one can go by replacing those functions, in that it \nactually implements a very different wire protocol. This one still fits \ninto the regular libpq message flow.\n\nAnother possibility, and this is what is being used by the AWS team \nimplementing the TDS protocol for Babelfish, is to completely replace \nthe entire TCOP mainloop function PostgresMain(). That is of course a \nrather drastic move and requires a lot more coding on the extension \nside, but the whole thing was developed that way from the beginning and \nit is working. I don't have a definitive date when that code will be \npresented. 
Kuntal or Prateek may be able to fill in more details.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n", "msg_date": "Thu, 4 Mar 2021 15:55:29 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On Thu, Mar 4, 2021 at 9:55 PM Jan Wieck <jan@wi3ck.info> wrote:\n>\n> Another possibility, and this is what is being used by the AWS team\n> implementing the TDS protocol for Babelfish, is to completely replace\n> the entire TCOP mainloop function PostgresMain().\n\nI suspect this is the only reasonable way to do it for protocols which are\nnot very close to libpq.\n\n> That is of course a\n> rather drastic move and requires a lot more coding on the extension\n> side,\n\nNot necessarily - if the new protocol is close to existing one, then it is\ncopy/paste + some changes.\n\nIf it is radically different, then trying to fit it into the current\nmainloop will\nbe even harder than writing from scratch.\n\nAnd will very likely fail in the end anyway :)\n\n> but the whole thing was developed that way from the beginning and\n> it is working. I don't have a definitive date when that code will be\n> presented. 
Kuntal or Prateek may be able to fill in more details.\n\nAre you really fully replacing the main loop, or are you running a second\nmain loop in parallel in the same database server instance, perhaps as\na separate TDS_postmaster backend ?\n\nWill the data still also be accessible \"as postgres\" via port 5432 when\nTDS/SQLServer support is active ?\n\n\n", "msg_date": "Fri, 5 Mar 2021 01:38:02 +0100", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" }, { "msg_contents": "On 3/4/21 7:38 PM, Hannu Krosing wrote:\n> On Thu, Mar 4, 2021 at 9:55 PM Jan Wieck <jan@wi3ck.info> wrote:\n>> but the whole thing was developed that way from the beginning and\n>> it is working. I don't have a definitive date when that code will be\n>> presented. Kuntal or Prateek may be able to fill in more details.\n> \n> Are you really fully replacing the main loop, or are you running a second\n> main loop in parallel in the same database server instance, perhaps as\n> a separate TDS_postmaster backend ?\n> \n> Will the data still also be accessible \"as postgres\" via port 5432 when\n> TDS/SQLServer support is active ?\n\nThe individual backend (session) is running a different main loop. A \nlibpq based client will still get the regular libpq and the original \nPostgresMain() behavior on port 5432. The default port for TDS is 1433 \nand with everything in place I can connect to the same database on that \nport with Microsoft's SQLCMD.\n\nThe whole point of all this is to allow the postmaster to listen to more \nthan just 5432 and have different communication protocols on those \n*additional* ports. Nothing is really *replaced*. 
The parts of the \nbackend, that do actual socket communication, are just routed through \nfunction pointers so that an extension can change their behavior.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n", "msg_date": "Thu, 4 Mar 2021 20:38:00 -0500", "msg_from": "Jan Wieck <jan@wi3ck.info>", "msg_from_op": true, "msg_subject": "Re: Extensibility of the PostgreSQL wire protocol" } ]
[ { "msg_contents": "I attach a series of proposed patches to slightly improve some minor\nthings related to the CLOG code.\n\n0001 - Always call StartupCLOG() just after initializing\nShmemVariableCache. Right now, the hot_standby=off code path does this\nat end of recovery, and the hot_standby=on code path does it at the\nbeginning of recovery. It's better to do this in only one place\nbecause (a) it's simpler, (b) StartupCLOG() is trivial so trying to\npostpone it in the hot_standby=off case has no value, and (c) it\nallows for 0002 and therefore 0003, which make things even simpler.\n\n0002 - In clog_redo(), don't set XactCtl->shared->latest_page_number.\nThe value that is being set here is actually the oldest page we're not\ntruncating, not the newest page that exists, so it's a bogus value\n(except when we're truncating all but one page). The reason it's like\nthis now is to avoid having SimpleLruTruncate() see an uninitialized\nvalue that might trip a sanity check, but after 0001 that won't be\npossible, so we can just let the sanity check do its thing.\n\n0003 - In TrimCLOG(), don't reset XactCtl->shared->latest_page_number.\nAfter we stop doing 0002 we don't need to do this either, because the\nonly thing this can be doing for us is correcting the error introduced\nby the code which 0002 removes. Relying on the results of replaying\n(authoritative) CLOG/EXTEND records seems better than relying on our\n(approximate) value of nextXid at end of recovery.\n\n0004 - In StartupCLOG(), correct an off-by-one error. Currently, if\nthe nextXid is exactly a multiple of the number of CLOG entries that\nfit on a page, then the value we compute for\nXactCtl->shared->latest_page_number is higher than it should be by 1.\nThat's because nextXid represents the first XID that hasn't yet been\nallocated, not the last one that gets allocated. 
So, the CLOG page\ngets created when nextXid advances from the first value on the page to\nthe second value on the page, not when it advances from the last value\non the previous page to the first value on the current page.\n\nNote that all of 0001-0003 result in a net removal of code. 0001 comes\nout to more lines total because of the comment changes, but fewer\nexecutable lines.\n\nI don't plan to back-patch any of this because, AFAICS, an incorrect\nvalue of XactCtl->shared->latest_page_number has no real consequences.\nThe SLRU code uses latest_page_number for just two purposes. First,\nthe latest page is never evicted; but that's just a question of\nperformance, not correctness, and the performance impact is small.\nSecond, the sanity check in SimpleLruTruncate() uses it. The present\ncode can make the value substantially inaccurate during recovery, but\nonly in a way that can make the sanity check pass rather than failing,\nso it's not going to really bite anybody except perhaps if they have a\ncorrupted cluster where they would have liked the sanity check to\ncatch some problem. When not in recovery, the value can be off by at\nmost one. I am not sure whether there's a theoretical risk of this\nmaking SimpleLruTruncate()'s sanity check fail when it should have\npassed, but even if there is the chances must be extremely remote.\n\nSome of the other SLRUs have similar issues as a result of\ncopy-and-paste work over the years. I plan to look at tidying that\nstuff up, too. 
However, I wanted to post (and probably commit) these\npatches first, partly to get some feedback, and also because all the\ncases are a little different and I want to make sure to do a proper\nanalysis of each one.\n\nAny review very much appreciated.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 25 Jan 2021 11:56:13 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "cleaning up a few CLOG-related things" }, { "msg_contents": "On 25/01/2021 18:56, Robert Haas wrote:\n> I attach a series of proposed patches to slightly improve some minor\n> things related to the CLOG code.\n> \n> [patches 0001 - 0003]\n\nMakes sense.\n\n> 0004 - In StartupCLOG(), correct an off-by-one error. Currently, if\n> the nextXid is exactly a multiple of the number of CLOG entries that\n> fit on a page, then the value we compute for\n> XactCtl->shared->latest_page_number is higher than it should be by 1.\n> That's because nextXid represents the first XID that hasn't yet been\n> allocated, not the last one that gets allocated.\n\nYes.\n\n> So, the CLOG page gets created when nextXid advances from the first\n> value on the page to the second value on the page, not when it\n> advances from the last value on the previous page to the first value\n> on the current page.\nYes. It took me a moment to understand that explanation, though. I'd \nphrase it something like \"nextXid is the next XID that will be used, but \nwe want to set latest_page_number according to the last XID that's \nalready been used. So retreat by one.\"\n\nHaving a separate FullTransactionIdToLatestPageNumber() function for \nthis seems like overkill to me.\n\n> Some of the other SLRUs have similar issues as a result of\n> copy-and-paste work over the years. I plan to look at tidying that\n> stuff up, too. 
However, I wanted to post (and probably commit) these\n> patches first, partly to get some feedback, and also because all the\n> cases are a little different and I want to make sure to do a proper\n> analysis of each one.\n\nYeah, multixact seems similar at least.\n\n- Heikki\n\n\n", "msg_date": "Mon, 25 Jan 2021 21:11:51 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: cleaning up a few CLOG-related things" }, { "msg_contents": "On Mon, Jan 25, 2021 at 2:11 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > [patches 0001 - 0003]\n>\n> Makes sense.\n\nGreat. I committed the first one and will proceed with those as well.\n\n> > So, the CLOG page gets created when nextXid advances from the first\n> > value on the page to the second value on the page, not when it\n> > advances from the last value on the previous page to the first value\n> > on the current page.\n> Yes. It took me a moment to understand that explanation, though. I'd\n> phrase it something like \"nextXid is the next XID that will be used, but\n> we want to set latest_page_number according to the last XID that's\n> already been used. So retreat by one.\"\n\nOK, updated the patch to use that language for the comment.\n\n> Having a separate FullTransactionIdToLatestPageNumber() function for\n> this seems like overkill to me.\n\nI initially thought so too, but it turned out to be pretty useful for\nwriting debugging cross-checks and things, so I'm inclined to keep it\neven though I'm not at present proposing to commit any such debugging\ncross-checks. 
For example I tried making the main redo loop check\nwhether XactCtl->shared->latest_page_number ==\nFullTransactionIdToLatestPageNumber(nextXid) which turned out to be\nsuper-helpful in understanding things.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 27 Jan 2021 12:35:30 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: cleaning up a few CLOG-related things" }, { "msg_contents": "On Wed, Jan 27, 2021 at 12:35:30PM -0500, Robert Haas wrote:\n> On Mon, Jan 25, 2021 at 2:11 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > Having a separate FullTransactionIdToLatestPageNumber() function for\n> > this seems like overkill to me.\n> \n> I initially thought so too, but it turned out to be pretty useful for\n> writing debugging cross-checks and things, so I'm inclined to keep it\n> even though I'm not at present proposing to commit any such debugging\n> cross-checks. For example I tried making the main redo loop check\n> whether XactCtl->shared->latest_page_number ==\n> FullTransactionIdToLatestPageNumber(nextXid) which turned out to be\n> super-helpful in understanding things.\n\n> +/*\n> + * Based on ShmemVariableCache->nextXid, compute the latest CLOG page that\n> + * is expected to exist.\n> + */\n> +static int\n> +FullTransactionIdToLatestPageNumber(FullTransactionId nextXid)\n> +{\n> +\t/*\n> +\t * nextXid is the next XID that will be used, but we want to set\n> +\t * latest_page_number according to the last XID that's already been used.\n> +\t * So retreat by one. See also GetNewTransactionId().\n> +\t */\n> +\tFullTransactionIdRetreat(&nextXid);\n> +\treturn TransactionIdToPage(XidFromFullTransactionId(nextXid));\n> +}\n\nI don't mind the arguably-overkill function. I'd probably have named it\nFullTransactionIdPredecessorToPage(), to focus on its behavior as opposed to\nits caller's behavior, but static function naming isn't a weighty matter.\nOverall, the patch looks fine. 
If nextXid is the first on a page, the next\nGetNewTransactionId() -> ExtendCLOG() -> ZeroCLOGPage() -> SimpleLruZeroPage()\nis responsible for updating latest_page_number.\n\n\n", "msg_date": "Sun, 21 Mar 2021 01:39:51 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: cleaning up a few CLOG-related things" } ]
[ { "msg_contents": "In SnapBuildFindSnapshot(), we have this comment:\n> \t/*\n> \t * c) transition from START to BUILDING_SNAPSHOT.\n> \t *\n> \t * In START state, and a xl_running_xacts record with running xacts is\n> \t * encountered. In that case, switch to BUILDING_SNAPSHOT state, and\n> \t * record xl_running_xacts->nextXid. Once all running xacts have finished\n> \t * (i.e. they're all >= nextXid), we have a complete catalog snapshot. It\n> \t * might look that we could use xl_running_xact's ->xids information to\n> \t * get there quicker, but that is problematic because transactions marked\n> \t * as running, might already have inserted their commit record - it's\n> \t * infeasible to change that with locking.\n> \t */\n\nThis was added in commit 955a684e040, before that we did in fact use the \nxl_running_xacts list of XIDs, but it was buggy. Commit 955a684e040 \nfixed that by waiting for *two* xl_running_xacts, such that the second \nxl_running_xact doesn't contain any of the XIDs from the first one.\n\nTo fix the case mentioned in that comment, where a transaction listed in \nxl_running_xacts is in fact already committed or aborted, wouldn't it be \nmore straightforward to check each XID, if they are in fact already \ncommitted or aborted? The CLOG is up-to-date here, I believe.\n\nI bumped into this, after I noticed that read_local_xlog_page() has a \npretty busy polling loop, with just 1 ms delay to keep it from hogging \nCPU. I tried to find the call sites where we might get into that loop, \nand found that the snapshot building code does that: the \n'delayed_startup' regression test in contrib/test_decoding exercises it. \nIn a primary server, SnapBuildWaitSnapshot() inserts a new running-xacts \nrecord, and then read_local_xlog_page() will poll until that record has \nbeen flushed. We could add an explicit XLogFlush() there to avoid the \nwait. 
However, if I'm reading the code correctly, in a standby server, \nwe can't write a new running-xacts record so we just wait for one that's \ncreated periodically by bgwriter in the primary. That can take several \nseconds. Or indefinitely, if the standby isn't connected to the primary \nat the moment. Would be nice to not poll.\n\n- Heikki\n\nP.S. There's this in SnapBuildNextPhaseAt():\n\n> \t/*\n> \t * For backward compatibility reasons this has to be stored in the wrongly\n> \t * named field. Will be fixed in next major version.\n> \t */\n> \treturn builder->was_running.was_xmax;\n\nWe could fix that now... Andres, what did you have in mind for a proper \nname?\n\nP.P.S. Two thinkos in comments in snapbuild.c: s/write for/wait for/.\n\n\n", "msg_date": "Mon, 25 Jan 2021 19:28:33 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Snapbuild woes followup" }, { "msg_contents": "Hi,\n\nThomas, CCing you because of the condvar bit below.\n\nOn 2021-01-25 19:28:33 +0200, Heikki Linnakangas wrote:\n> In SnapBuildFinalSnapshot(), we have this comment:\n> > \t/*\n> > \t * c) transition from START to BUILDING_SNAPSHOT.\n> > \t *\n> > \t * In START state, and a xl_running_xacts record with running xacts is\n> > \t * encountered. In that case, switch to BUILDING_SNAPSHOT state, and\n> > \t * record xl_running_xacts->nextXid. Once all running xacts have finished\n> > \t * (i.e. they're all >= nextXid), we have a complete catalog snapshot. It\n> > \t * might look that we could use xl_running_xact's ->xids information to\n> > \t * get there quicker, but that is problematic because transactions marked\n> > \t * as running, might already have inserted their commit record - it's\n> > \t * infeasible to change that with locking.\n> > \t */\n>\n> This was added in commit 955a684e040, before that we did in fact use the\n> xl_running_xacts list of XIDs, but it was buggy. 
Commit 955a684e040 fixed\n> that by waiting for *two* xl_running_xacts, such that the second\n> xl_running_xact doesn't contain any of the XIDs from the first one.\n\nNot really just that, but we also just don't believe ->xids to be\nconsistent for visibility purposes...\n\n\n> To fix the case mentioned in that comment, where a transaction listed in\n> xl_running_xacts is in fact already committed or aborted, wouldn't it be\n> more straightforward to check each XID, if they are in fact already\n> committed or aborted? The CLOG is up-to-date here, I believe.\n\nWell, we can't *just* go to the clog since that will contain transactions\nas committed/aborted even when not yet visible, due to still being in\nthe procarray. And I don't think it's easy to figure out how to\ncrosscheck between clog and procarray in this instance (since we don't\nhave the past procarray). This is different in the recovery path because\nthere we know that changes to the procarray / knownassignedxids\nmachinery are only done by one backend.\n\n\n> I bumped into this, after I noticed that read_local_xlog_page() has a pretty\n> busy polling loop, with just 1 ms delay to keep it from hogging CPU.\n\nHm - but why is that really related to the initial snapshot building\nlogic? Logical decoding constantly waits for WAL outside of that too,\nno?\n\nISTM that we should improve the situation substantially in a fairly easy\nway. 
Like:\n\n1) Improve ConditionVariableBroadcast() so it doesn't take the spinlock\n if there are no wakers - afaict that's pretty trivial.\n2) Replace WalSndWakeup() with ConditionVariableBroadcast().\n3) Replace places that need to wait for new WAL to be written with a\n call to function doing something like\n\n XLogRecPtr flushed_to = GetAppropriateFlushRecPtr(); // works for both normal / recovery\n\n if (flush_requirement <= flushed_to)\n break;\n\n ConditionVariablePrepareToSleep(&XLogCtl->flush_progress_cv);\n\n while (true)\n {\n flushed_to = GetAppropriateFlushRecPtr();\n\n if (flush_requirement <= flushed_to)\n break;\n\n ConditionVariableSleep(&XLogCtl->flush_progress_cv);\n }\n\nthis should end up being more efficient than the current WalSndWakeup()\nmechanism because we'll only wake up the processes that need to be woken\nup, rather than checking/setting each walsenders latch.\n\n\n> P.S. There's this in SnapBuildNextPhaseAt():\n>\n> > \t/*\n> > \t * For backward compatibility reasons this has to be stored in the wrongly\n> > \t * named field. Will be fixed in next major version.\n> > \t */\n> > \treturn builder->was_running.was_xmax;\n>\n> We could fix that now... Andres, what did you have in mind for a proper\n> name?\n\nnext_phase_at seems like it'd do the trick?\n\n\n> P.P.S. Two thinkos in comments in snapbuild.c: s/write for/wait for/.\n\nWill push a fix.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Jan 2021 12:00:08 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Snapbuild woes followup" }, { "msg_contents": "Hi,\n\nOn 2021-01-25 12:00:08 -0800, Andres Freund wrote:\n> > > \t/*\n> > > \t * For backward compatibility reasons this has to be stored in the wrongly\n> > > \t * named field. Will be fixed in next major version.\n> > > \t */\n> > > \treturn builder->was_running.was_xmax;\n> >\n> > We could fix that now... 
Andres, what did you have in mind for a proper\n> > name?\n> \n> next_phase_at seems like it'd do the trick?\n\nSee attached patch...", "msg_date": "Mon, 25 Jan 2021 12:48:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Snapbuild woes followup" }, { "msg_contents": "On Tue, Jan 26, 2021 at 2:18 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-01-25 12:00:08 -0800, Andres Freund wrote:\n> > > > /*\n> > > > * For backward compatibility reasons this has to be stored in the wrongly\n> > > > * named field. Will be fixed in next major version.\n> > > > */\n> > > > return builder->was_running.was_xmax;\n> > >\n> > > We could fix that now... Andres, what did you have in mind for a proper\n> > > name?\n> >\n> > next_phase_at seems like it'd do the trick?\n>\n\nThe new proposed name sounds good to me.\n\n> See attached patch...\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 29 Jan 2021 14:04:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Snapbuild woes followup" }, { "msg_contents": "On 2021-Jan-25, Andres Freund wrote:\n\n> See attached patch...\n\nLooks good to me.\n\nI was wondering if there would be a point in using a FullTransactionId\ninstead of an unadorned one. 
I don't know what's the true risk of\nan Xid wraparound occurring here, but it seems easier to reason about.\nBut then that's probably a larger change to make all of snapbuild use\nFullTransactionIds, so not for this patch to worry about.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Thu, 4 Feb 2021 12:23:53 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Snapbuild woes followup" }, { "msg_contents": "Hi,\n\nOn 2021-01-29 14:04:47 +0530, Amit Kapila wrote:\n> On Tue, Jan 26, 2021 at 2:18 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2021-01-25 12:00:08 -0800, Andres Freund wrote:\n> > > > > /*\n> > > > > * For backward compatibility reasons this has to be stored in the wrongly\n> > > > > * named field. Will be fixed in next major version.\n> > > > > */\n> > > > > return builder->was_running.was_xmax;\n> > > >\n> > > > We could fix that now... Andres, what did you have in mind for a proper\n> > > > name?\n> > >\n> > > next_phase_at seems like it'd do the trick?\n> >\n> \n> The new proposed name sounds good to me.\n\nAnd pushed.\n\n> > See attached patch...\n> \n> LGTM.\n\nThanks for looking over - should have added your name to reviewed-by,\nsorry...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Feb 2021 17:12:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Snapbuild woes followup" } ]
[ { "msg_contents": "Hi all,\r\n\r\nI was running tests with a GSS-enabled stack, and ran into some very\r\nlong psql timeouts after running the Kerberos test suite. It turns out\r\nthe suite pushes test credentials into the user's global cache, and\r\nthese no-longer-useful credentials persist after the suite has\r\nfinished. (You can see this in action by running the test/kerberos\r\nsuite and then running `klist`.) This leads to long hangs, I assume\r\nwhile the GSS implementation tries to contact a KDC that no longer\r\nexists.\r\nAttached is a patch that initializes a local credentials cache inside\r\ntmp_check/krb5cc, and tells psql to use it via the KRB5CCNAME envvar.\r\nThis prevents the global cache pollution. WDYT?\r\n\r\n--Jacob", "msg_date": "Mon, 25 Jan 2021 18:33:18 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Fixing cache pollution in the Kerberos test suite" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> I was running tests with a GSS-enabled stack, and ran into some very\n> long psql timeouts after running the Kerberos test suite. It turns out\n> the suite pushes test credentials into the user's global cache, and\n> these no-longer-useful credentials persist after the suite has\n> finished. (You can see this in action by running the test/kerberos\n> suite and then running `klist`.) This leads to long hangs, I assume\n> while the GSS implementation tries to contact a KDC that no longer\n> exists.\n> Attached is a patch that initializes a local credentials cache inside\n> tmp_check/krb5cc, and tells psql to use it via the KRB5CCNAME envvar.\n> This prevents the global cache pollution. 
WDYT?\n\nAh, yeah, that generally seems like a good idea.\n\nThanks,\n\nStephen", "msg_date": "Mon, 25 Jan 2021 13:36:46 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Fixing cache pollution in the Kerberos test suite" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Jacob Champion (pchampion@vmware.com) wrote:\n>> I was running tests with a GSS-enabled stack, and ran into some very\n>> long psql timeouts after running the Kerberos test suite. It turns out\n>> the suite pushes test credentials into the user's global cache, and\n>> these no-longer-useful credentials persist after the suite has\n>> finished. (You can see this in action by running the test/kerberos\n>> suite and then running `klist`.) This leads to long hangs, I assume\n>> while the GSS implementation tries to contact a KDC that no longer\n>> exists.\n>> Attached is a patch that initializes a local credentials cache inside\n>> tmp_check/krb5cc, and tells psql to use it via the KRB5CCNAME envvar.\n>> This prevents the global cache pollution. WDYT?\n\n> Ah, yeah, that generally seems like a good idea.\n\nYeah, changing global state is just awful. However, I don't\nactually see any change here (RHEL8):\n\n$ klist\nklist: Credentials cache 'KCM:1001' not found\n$ make check\n...\nResult: PASS\n$ klist\nklist: Credentials cache 'KCM:1001' not found\n\nI suppose in an environment where someone was really using Kerberos,\nthe random kinit would be more of a problem.\n\nAlso, why are you only setting the ENV variable within narrow parts\nof the test script? I'd be inclined to enforce it throughout.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Jan 2021 13:49:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fixing cache pollution in the Kerberos test suite" }, { "msg_contents": "On Mon, 2021-01-25 at 13:49 -0500, Tom Lane wrote:\r\n> Yeah, changing global state is just awful. 
However, I don't\r\n> actually see any change here (RHEL8):\r\n\r\nInteresting. I'm running Ubuntu 20.04:\r\n\r\n$ klist\r\nklist: No credentials cache found (filename: /tmp/krb5cc_1000)\r\n\r\n$ make check\r\n...\r\n\r\n$ klist\r\nTicket cache: FILE:/tmp/krb5cc_1000\r\nDefault principal: test1@EXAMPLE.COM\r\n\r\nValid starting Expires Service principal\r\n... krbtgt/EXAMPLE.COM@EXAMPLE.COM\r\n... postgres/auth-test-localhost.postgresql.example.com@\r\n... postgres/auth-test-localhost.postgresql.example.com@EXAMPLE.COM\r\n\r\nI wonder if your use of a KCM cache type rather than FILE makes the\r\ndifference?\r\n\r\n> Also, why are you only setting the ENV variable within narrow parts\r\n> of the test script? I'd be inclined to enforce it throughout.\r\n\r\nI considered it and decided I didn't want to pollute the server's\r\nenvironment with it, since the server shouldn't need the client cache.\r\nBut I think it'd be fine (and match the current situation) if it were\r\nset once for the whole script, if you prefer.\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 25 Jan 2021 19:00:41 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Fixing cache pollution in the Kerberos test suite" }, { "msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> On Mon, 2021-01-25 at 13:49 -0500, Tom Lane wrote:\n>> Yeah, changing global state is just awful. However, I don't\n>> actually see any change here (RHEL8):\n\n> Interesting. I'm running Ubuntu 20.04:\n\nHmm. I'll poke harder.\n\n>> Also, why are you only setting the ENV variable within narrow parts\n>> of the test script? 
I'd be inclined to enforce it throughout.\n\n> I considered it and decided I didn't want to pollute the server's\n> environment with it, since the server shouldn't need the client cache.\n\nTrue, but if it did try to access the cache, accessing the user's\nnormal cache would be strictly worse than accessing the test cache.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Jan 2021 14:04:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fixing cache pollution in the Kerberos test suite" }, { "msg_contents": "On Mon, 2021-01-25 at 14:04 -0500, Tom Lane wrote:\r\n> Jacob Champion <pchampion@vmware.com> writes:\r\n> > On Mon, 2021-01-25 at 13:49 -0500, Tom Lane wrote:\r\n> > > Also, why are you only setting the ENV variable within narrow parts\r\n> > > of the test script? I'd be inclined to enforce it throughout.\r\n> > I considered it and decided I didn't want to pollute the server's\r\n> > environment with it, since the server shouldn't need the client cache.\r\n> \r\n> True, but if it did try to access the cache, accessing the user's\r\n> normal cache would be strictly worse than accessing the test cache.\r\n\r\nThat's fair. Attached is a v2 that just sets KRB5CCNAME globally. Makes\r\nfor a much smaller patch :)\r\n\r\n--Jacob", "msg_date": "Mon, 25 Jan 2021 19:31:16 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Fixing cache pollution in the Kerberos test suite" }, { "msg_contents": "I wrote:\n> Jacob Champion <pchampion@vmware.com> writes:\n>> Interesting. I'm running Ubuntu 20.04:\n\n> Hmm. I'll poke harder.\n\nAh ... 
on both RHEL8 and Fedora 33, I find this:\n\n--- snip ---\n$ cat /etc/krb5.conf.d/kcm_default_ccache\n# This file should normally be installed by your distribution into a\n# directory that is included from the Kerberos configuration file (/etc/krb5.conf)\n# On Fedora/RHEL/CentOS, this is /etc/krb5.conf.d/\n#\n# To enable the KCM credential cache enable the KCM socket and the service:\n# systemctl enable sssd-secrets.socket sssd-kcm.socket\n# systemctl start sssd-kcm.socket\n#\n# To disable the KCM credential cache, comment out the following lines.\n\n[libdefaults]\n default_ccache_name = KCM:\n--- snip ---\n\nEven more interesting, that service seems to be enabled by default\n(I'm pretty darn sure I didn't ask for it...)\n\nHowever, this doesn't seem to explain why the test script isn't\ncausing a global state change. Whether the state is held in a\nfile or the sssd daemon shouldn't matter, it seems like.\n\nAlso, it looks like the test causes /tmp/krb5cc_<uid> to get\ncreated or updated despite this setting. 
If I force klist\nto look at that:\n\n$ KRB5CCNAME=/tmp/krb5cc_1001 klist\nTicket cache: FILE:/tmp/krb5cc_1001\nDefault principal: test1@EXAMPLE.COM\n\nValid starting Expires Service principal\n01/25/21 14:31:57 01/26/21 14:31:57 krbtgt/EXAMPLE.COM@EXAMPLE.COM\n01/25/21 14:31:57 01/26/21 14:31:57 postgres/auth-test-localhost.postgresql.example.com@\n Ticket server: postgres/auth-test-localhost.postgresql.example.com@EXAMPLE.COM\n\nwhere the time corresponds to my having just run the test again.\n\nSo I'm still mightily confused, but it is clear that the test's\nkinit is touching a file it shouldn't.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Jan 2021 14:36:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fixing cache pollution in the Kerberos test suite" }, { "msg_contents": "On Mon, 2021-01-25 at 14:36 -0500, Tom Lane wrote:\r\n> However, this doesn't seem to explain why the test script isn't\r\n> causing a global state change. Whether the state is held in a\r\n> file or the sssd daemon shouldn't matter, it seems like.\r\n> \r\n> Also, it looks like the test causes /tmp/krb5cc_<uid> to get\r\n> created or updated despite this setting.\r\n\r\nHuh. I wonder, if you run `klist -A` after running the tests, do you\r\nget anything more interesting? I am seeing a few bugs on Red Hat's\r\nBugzilla that center around strange KCM behavior [1]. 
But we're now\r\nwell outside my area of competence.\r\n\r\n--Jacob\r\n\r\n[1] https://bugzilla.redhat.com/show_bug.cgi?id=1712875\r\n", "msg_date": "Mon, 25 Jan 2021 19:50:47 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Fixing cache pollution in the Kerberos test suite" }, { "msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> On Mon, 2021-01-25 at 14:04 -0500, Tom Lane wrote:\n>> True, but if it did try to access the cache, accessing the user's\n>> normal cache would be strictly worse than accessing the test cache.\n\n> That's fair. Attached is a v2 that just sets KRB5CCNAME globally. Makes\n> for a much smaller patch :)\n\nI tweaked this to make it look a bit more like the rest of the script,\nand pushed it. Thanks!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Jan 2021 14:54:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fixing cache pollution in the Kerberos test suite" }, { "msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> On Mon, 2021-01-25 at 14:36 -0500, Tom Lane wrote:\n>> Also, it looks like the test causes /tmp/krb5cc_<uid> to get\n>> created or updated despite this setting.\n\n> Huh. I wonder, if you run `klist -A` after running the tests, do you\n> get anything more interesting?\n\n\"klist -A\" prints nothing.\n\n> I am seeing a few bugs on Red Hat's\n> Bugzilla that center around strange KCM behavior [1]. But we're now\n> well outside my area of competence.\n\nMine too. But I verified that the /tmp file is no longer modified\nwith the adjusted script, so one way or the other this is better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Jan 2021 14:58:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fixing cache pollution in the Kerberos test suite" } ]
[ { "msg_contents": "\nHi, hackers\n\nWhen I read the discussion in [1], I found that update subscription's publications\nis complicated.\n\nFor example, I have 5 publications in subscription.\n\n CREATE SUBSCRIPTION mysub1 CONNECTION 'host=localhost port=5432 dbname=postgres'\n PUBLICATION mypub1, mypub2, mypub3, mypub4, mypub5;\n\nIf I want to drop \"mypub4\", we should use the following command:\n\n ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5;\n\nAlso, if I want to add \"mypub7\" and \"mypub8\", it will use:\n\n ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5, mypub7, mypub8;\n\nAttached implement ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax, for the above\ntwo cases, we can use the following:\n\n ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub4;\n\n ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub7, mypub8;\n\nI think it's more convenient. Any thoughts?\n\n[1] - https://www.postgresql.org/message-id/MEYP282MB16690CD5EC5319FC35B2F78AB6BD0%40MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Tue, 26 Jan 2021 11:47:52 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... 
syntax" }, { "msg_contents": "On Tue, 26 Jan 2021 at 11:47, japin <japinli@hotmail.com> wrote:\n> Hi, hackers\n>\n> When I read the discussion in [1], I found that update subscription's publications\n> is complicated.\n>\n> For example, I have 5 publications in subscription.\n>\n> CREATE SUBSCRIPTION mysub1 CONNECTION 'host=localhost port=5432 dbname=postgres'\n> PUBLICATION mypub1, mypub2, mypub3, mypub4, mypub5;\n>\n> If I want to drop \"mypub4\", we should use the following command:\n>\n> ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5;\n>\n> Also, if I want to add \"mypub7\" and \"mypub8\", it will use:\n>\n> ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5, mypub7, mypub8;\n>\n> Attached implement ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax, for the above\n> two cases, we can use the following:\n>\n> ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub4;\n>\n> ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub7, mypub8;\n>\n> I think it's more convenient. Any thoughts?\n>\n> [1] - https://www.postgresql.org/message-id/MEYP282MB16690CD5EC5319FC35B2F78AB6BD0%40MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM\n\nSorry, I forgot to attach the patch.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Tue, 26 Jan 2021 11:55:19 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Tue, Jan 26, 2021 at 9:25 AM japin <japinli@hotmail.com> wrote:\n> > I think it's more convenient. Any thoughts?\n>\n> Sorry, I forgot to attach the patch.\n\nAs I mentioned earlier in [1], +1 from my end to have the new syntax\nfor adding/dropping publications from subscriptions i.e. ALTER\nSUBSCRIPTION ... ADD/DROP PUBLICATION. But I'm really not sure why\nthat syntax was not added earlier. 
Are we missing something here?\n\nI would like to hear opinions from other hackers.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACVGDNZDQk3wfv%3D3zYTg%3DqKUaEa5s1f%2B9_PYxN0QyAUdCw%40mail.gmail.com\n\nSome quick comments on the patch, although I have not taken a deeper look at it:\n\n1. IMO, it will be good if the patch is divided into say coding, test\ncases and documentation\n2. Looks like AlterSubscription() is already having ~200 LOC, why\ncan't we have separate functions for each ALTER_SUBSCRIPTION_XXXX case\nor at least for the new code that's getting added for this patch?\n3. The new code added for ALTER_SUBSCRIPTION_ADD_PUBLICATION and\nALTER_SUBSCRIPTION_DROP_PUBLICATION looks almost same maybe with\nlittle difference, so why can't we have single function\n(alter_subscription_add_or_drop_publication or\nhanlde_add_or_drop_publication or some better name?) and pass in a\nflag to differentiate the code that differs for both commands.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Jan 2021 10:25:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "\nHi, Bharath\n\nThanks for your reviewing.\n\nOn Tue, 26 Jan 2021 at 12:55, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Tue, Jan 26, 2021 at 9:25 AM japin <japinli@hotmail.com> wrote:\n>> > I think it's more convenient. Any thoughts?\n>>\n>> Sorry, I forgot to attach the patch.\n>\n> As I mentioned earlier in [1], +1 from my end to have the new syntax\n> for adding/dropping publications from subscriptions i.e. ALTER\n> SUBSCRIPTION ... ADD/DROP PUBLICATION. But I'm really not sure why\n> that syntax was not added earlier. Are we missing something here?\n>\n\nYeah, we should figure out why we do not support this syntax earlier. 
It seems\nALTER SUBSCRIPTION is introduced in 665d1fad99e, however the commit do not\ncontains any discussion URL.\n\n> I would like to hear opinions from other hackers.\n>\n> [1] - https://www.postgresql.org/message-id/CALj2ACVGDNZDQk3wfv%3D3zYTg%3DqKUaEa5s1f%2B9_PYxN0QyAUdCw%40mail.gmail.com\n>\n> Some quick comments on the patch, although I have not taken a deeper look at it:\n>\n> 1. IMO, it will be good if the patch is divided into say coding, test\n> cases and documentation\n\nAgreed. I will refactor it in next patch.\n\n> 2. Looks like AlterSubscription() is already having ~200 LOC, why\n> can't we have separate functions for each ALTER_SUBSCRIPTION_XXXX case\n> or at least for the new code that's getting added for this patch?\n\nI'm not sure it is necessary to provide a separate functions for each\nALTER_SUBSCRIPTION_XXX, so I just followed current style.\n\n> 3. The new code added for ALTER_SUBSCRIPTION_ADD_PUBLICATION and\n> ALTER_SUBSCRIPTION_DROP_PUBLICATION looks almost same maybe with\n> little difference, so why can't we have single function\n> (alter_subscription_add_or_drop_publication or\n> hanlde_add_or_drop_publication or some better name?) and pass in a\n> flag to differentiate the code that differs for both commands.\n\nAgreed.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Tue, 26 Jan 2021 13:46:11 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Tue, 26 Jan 2021 at 13:46, japin <japinli@hotmail.com> wrote:\n> Hi, Bharath\n>\n> Thanks for your reviewing.\n>\n> On Tue, 26 Jan 2021 at 12:55, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> On Tue, Jan 26, 2021 at 9:25 AM japin <japinli@hotmail.com> wrote:\n>>> > I think it's more convenient. 
Any thoughts?\n>>>\n>>> Sorry, I forgot to attach the patch.\n>>\n>> As I mentioned earlier in [1], +1 from my end to have the new syntax\n>> for adding/dropping publications from subscriptions i.e. ALTER\n>> SUBSCRIPTION ... ADD/DROP PUBLICATION. But I'm really not sure why\n>> that syntax was not added earlier. Are we missing something here?\n>>\n>\n> Yeah, we should figure out why we do not support this syntax earlier. It seems\n> ALTER SUBSCRIPTION is introduced in 665d1fad99e, however the commit do not\n> contains any discussion URL.\n>\n>> I would like to hear opinions from other hackers.\n>>\n>> [1] - https://www.postgresql.org/message-id/CALj2ACVGDNZDQk3wfv%3D3zYTg%3DqKUaEa5s1f%2B9_PYxN0QyAUdCw%40mail.gmail.com\n>>\n>> Some quick comments on the patch, although I have not taken a deeper look at it:\n>>\n>> 1. IMO, it will be good if the patch is divided into say coding, test\n>> cases and documentation\n>\n> Agreed. I will refactor it in next patch.\n>\n\nSplit v1 path into coding, test cases, documentation and tab-complete.\n\n>> 2. Looks like AlterSubscription() is already having ~200 LOC, why\n>> can't we have separate functions for each ALTER_SUBSCRIPTION_XXXX case\n>> or at least for the new code that's getting added for this patch?\n>\n> I'm not sure it is necessary to provide a separate functions for each\n> ALTER_SUBSCRIPTION_XXX, so I just followed current style.\n>\n>> 3. The new code added for ALTER_SUBSCRIPTION_ADD_PUBLICATION and\n>> ALTER_SUBSCRIPTION_DROP_PUBLICATION looks almost same maybe with\n>> little difference, so why can't we have single function\n>> (alter_subscription_add_or_drop_publication or\n>> hanlde_add_or_drop_publication or some better name?) and pass in a\n>> flag to differentiate the code that differs for both commands.\n>\n> Agreed.\n\nAt present, I create a new function merge_subpublications() to merge the origin\npublications and add/drop publications. 
Thoughts?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Tue, 26 Jan 2021 18:37:33 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Tue, Jan 26, 2021 at 9:18 AM japin <japinli@hotmail.com> wrote:\n>\n>\n> Hi, hackers\n>\n> When I read the discussion in [1], I found that update subscription's publications\n> is complicated.\n>\n> For example, I have 5 publications in subscription.\n>\n> CREATE SUBSCRIPTION mysub1 CONNECTION 'host=localhost port=5432 dbname=postgres'\n> PUBLICATION mypub1, mypub2, mypub3, mypub4, mypub5;\n>\n> If I want to drop \"mypub4\", we should use the following command:\n>\n> ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5;\n>\n> Also, if I want to add \"mypub7\" and \"mypub8\", it will use:\n>\n> ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5, mypub7, mypub8;\n>\n> Attached implement ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax, for the above\n> two cases, we can use the following:\n>\n> ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub4;\n>\n> ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub7, mypub8;\n>\n> I think it's more convenient. Any thoughts?\n\n+1 for the idea\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Jan 2021 13:32:43 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... 
syntax" }, { "msg_contents": "On Tue, Jan 26, 2021 at 9:18 AM japin <japinli@hotmail.com> wrote:\n>\n>\n> When I read the discussion in [1], I found that update subscription's publications\n> is complicated.\n>\n> For example, I have 5 publications in subscription.\n>\n> CREATE SUBSCRIPTION mysub1 CONNECTION 'host=localhost port=5432 dbname=postgres'\n> PUBLICATION mypub1, mypub2, mypub3, mypub4, mypub5;\n>\n> If I want to drop \"mypub4\", we should use the following command:\n>\n> ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5;\n>\n> Also, if I want to add \"mypub7\" and \"mypub8\", it will use:\n>\n> ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5, mypub7, mypub8;\n>\n> Attached implement ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax, for the above\n> two cases, we can use the following:\n>\n> ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub4;\n>\n> ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub7, mypub8;\n>\n> I think it's more convenient. Any thoughts?\n>\n\nWhile the new proposed syntax does seem to provide some ease for users\nbut it has nothing which we can't do with current syntax. Also, in the\ncurrent syntax, there is an additional provision for refreshing the\nexisting publications as well. So, if the user has to change the\nexisting subscription such that it has to (a) add new publication(s),\n(b) remove some publication(s), (c) refresh existing publication(s)\nthen all can be done in one command whereas with your new proposed\nsyntax user has to write three separate commands.\n\nHaving said that, I don't deny the appeal of having separate commands\nfor each of (a), (b), and (c).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 27 Jan 2021 14:29:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... 
syntax" }, { "msg_contents": "On Wed, Jan 27, 2021 at 2:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 26, 2021 at 9:18 AM japin <japinli@hotmail.com> wrote:\n> >\n> >\n> > When I read the discussion in [1], I found that update subscription's publications\n> > is complicated.\n> >\n> > For example, I have 5 publications in subscription.\n> >\n> > CREATE SUBSCRIPTION mysub1 CONNECTION 'host=localhost port=5432 dbname=postgres'\n> > PUBLICATION mypub1, mypub2, mypub3, mypub4, mypub5;\n> >\n> > If I want to drop \"mypub4\", we should use the following command:\n> >\n> > ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5;\n> >\n> > Also, if I want to add \"mypub7\" and \"mypub8\", it will use:\n> >\n> > ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5, mypub7, mypub8;\n> >\n> > Attached implement ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax, for the above\n> > two cases, we can use the following:\n> >\n> > ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub4;\n> >\n> > ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub7, mypub8;\n> >\n> > I think it's more convenient. Any thoughts?\n> >\n>\n> While the new proposed syntax does seem to provide some ease for users\n> but it has nothing which we can't do with current syntax. Also, in the\n> current syntax, there is an additional provision for refreshing the\n> existing publications as well. So, if the user has to change the\n> existing subscription such that it has to (a) add new publication(s),\n> (b) remove some publication(s), (c) refresh existing publication(s)\n> then all can be done in one command whereas with your new proposed\n> syntax user has to write three separate commands.\n\nIIUC the initial patch proposed here, it does allow ALTER SUBSCRIPTION\nmysub1 ADD/DROP PUBLICATION mypub4 WITH (refresh = true);. Isn't this\noption enough to achieve what we can with ALTER SUBSCRIPTION mysub1\nSET PUBLICATION mypub1, mypub2 WITH (refresh = true);? 
Am I missing\nsomething here?\n\n> Having said that, I don't deny the appeal of having separate commands\n> for each of (a), (b), and (c).\n\nfor (c) i.e. refresh existing publication do we need something like\nALTER SUBSCRIPTION mysub1 REFRESH SUBSCRIPTION or some other syntax\nthat only refreshes the subscription similar to ALTER SUBSCRIPTION\nmysub1 REFRESH PUBLICATION?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Jan 2021 14:57:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Wed, Jan 27, 2021 at 2:57 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jan 27, 2021 at 2:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jan 26, 2021 at 9:18 AM japin <japinli@hotmail.com> wrote:\n> > >\n> > >\n> > > When I read the discussion in [1], I found that update subscription's publications\n> > > is complicated.\n> > >\n> > > For example, I have 5 publications in subscription.\n> > >\n> > > CREATE SUBSCRIPTION mysub1 CONNECTION 'host=localhost port=5432 dbname=postgres'\n> > > PUBLICATION mypub1, mypub2, mypub3, mypub4, mypub5;\n> > >\n> > > If I want to drop \"mypub4\", we should use the following command:\n> > >\n> > > ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5;\n> > >\n> > > Also, if I want to add \"mypub7\" and \"mypub8\", it will use:\n> > >\n> > > ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5, mypub7, mypub8;\n> > >\n> > > Attached implement ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax, for the above\n> > > two cases, we can use the following:\n> > >\n> > > ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub4;\n> > >\n> > > ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub7, mypub8;\n> > >\n> > > I think it's more convenient. 
Any thoughts?\n> > >\n> >\n> > While the new proposed syntax does seem to provide some ease for users\n> > but it has nothing which we can't do with current syntax. Also, in the\n> > current syntax, there is an additional provision for refreshing the\n> > existing publications as well. So, if the user has to change the\n> > existing subscription such that it has to (a) add new publication(s),\n> > (b) remove some publication(s), (c) refresh existing publication(s)\n> > then all can be done in one command whereas with your new proposed\n> > syntax user has to write three separate commands.\n>\n> IIUC the initial patch proposed here, it does allow ALTER SUBSCRIPTION\n> mysub1 ADD/DROP PUBLICATION mypub4 WITH (refresh = true);. Isn't this\n> option enough to achieve what we can with ALTER SUBSCRIPTION mysub1\n> SET PUBLICATION mypub1, mypub2 WITH (refresh = true);? Am I missing\n> something here?\n>\n\nI feel the SET syntax would allow refreshing existing publications as\nwell whereas, in Add, it will be only for new Publications.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 27 Jan 2021 15:00:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Wed, Jan 27, 2021 at 3:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > While the new proposed syntax does seem to provide some ease for users\n> > > but it has nothing which we can't do with current syntax. Also, in the\n> > > current syntax, there is an additional provision for refreshing the\n> > > existing publications as well. 
So, if the user has to change the\n> > > existing subscription such that it has to (a) add new publication(s),\n> > > (b) remove some publication(s), (c) refresh existing publication(s)\n> > > then all can be done in one command whereas with your new proposed\n> > > syntax user has to write three separate commands.\n> >\n> > IIUC the initial patch proposed here, it does allow ALTER SUBSCRIPTION\n> > mysub1 ADD/DROP PUBLICATION mypub4 WITH (refresh = true);. Isn't this\n> > option enough to achieve what we can with ALTER SUBSCRIPTION mysub1\n> > SET PUBLICATION mypub1, mypub2 WITH (refresh = true);? Am I missing\n> > something here?\n> >\n>\n> I feel the SET syntax would allow refreshing existing publications as\n> well whereas, in Add, it will be only for new Publications.\n\nI think the patch v2-0001 will refresh all the publications, I mean\nexisting and newly added publications. IIUC the patch, it first\nfetches all the available publications with the subscriptions and it\nsees if that list has the given publication [1], if not, then adds it\nto the existing publications list and returns that list [2]. If the\nrefresh option is specified as true with ALTER SUBSCRIPTION ... ADD\nPUBLICATION, then it refreshes all the returned publications [3]. I\nbelieve this is also true with ALTER SUBSCRIPTION ... DROP\nPUBLICATION.\n\nSo, I think the new syntax, ALTER SUBSCRIPTION .. ADD/DROP PUBLICATION\nwill refresh the new and existing publications.\n\n[1]\n+\n+/*\n+ * merge current subpublications and user specified by add/drop publications.\n+ *\n+ * If addpub == true, we will add the list of publications into current\n+ * subpublications. Otherwise, we will delete the list of publications from\n+ * current subpublications.\n+ */\n+static List *\n+merge_subpublications(HeapTuple tuple, TupleDesc tupledesc,\n+ List *publications, bool addpub)\n\n[2]\n+ publications = merge_subpublications(tup,\nRelationGetDescr(rel),\n\n[3]\n+ /* Refresh if user asked us to. 
*/\n+ if (refresh)\n+ {\n+ if (!sub->enabled)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"ALTER SUBSCRIPTION with\nrefresh is not allowed for disabled subscriptions\"),\n+ errhint(\"Use ALTER SUBSCRIPTION ...\nSET PUBLICATION ... WITH (refresh = false).\")));\n+\n+ /* Make sure refresh sees the new list of publications. */\n+ sub->publications = publications;\n+\n+ AlterSubscription_refresh(sub, copy_data);\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Jan 2021 15:16:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "\nOn Wed, 27 Jan 2021 at 17:46, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Wed, Jan 27, 2021 at 3:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > > While the new proposed syntax does seem to provide some ease for users\n>> > > but it has nothing which we can't do with current syntax. Also, in the\n>> > > current syntax, there is an additional provision for refreshing the\n>> > > existing publications as well. So, if the user has to change the\n>> > > existing subscription such that it has to (a) add new publication(s),\n>> > > (b) remove some publication(s), (c) refresh existing publication(s)\n>> > > then all can be done in one command whereas with your new proposed\n>> > > syntax user has to write three separate commands.\n>> >\n>> > IIUC the initial patch proposed here, it does allow ALTER SUBSCRIPTION\n>> > mysub1 ADD/DROP PUBLICATION mypub4 WITH (refresh = true);. Isn't this\n>> > option enough to achieve what we can with ALTER SUBSCRIPTION mysub1\n>> > SET PUBLICATION mypub1, mypub2 WITH (refresh = true);? 
Am I missing\n>> > something here?\n>> >\n>>\n>> I feel the SET syntax would allow refreshing existing publications as\n>> well whereas, in Add, it will be only for new Publications.\n>\n> I think the patch v2-0001 will refresh all the publications, I mean\n> existing and newly added publications. IIUC the patch, it first\n> fetches all the available publications with the subscriptions and it\n> sees if that list has the given publication [1], if not, then adds it\n> to the existing publications list and returns that list [2]. If the\n> refresh option is specified as true with ALTER SUBSCRIPTION ... ADD\n> PUBLICATION, then it refreshes all the returned publications [3]. I\n> believe this is also true with ALTER SUBSCRIPTION ... DROP\n> PUBLICATION.\n>\n> So, I think the new syntax, ALTER SUBSCRIPTION .. ADD/DROP PUBLICATION\n> will refresh the new and existing publications.\n>\n\nYes! It will refresh the new and existing publications.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 27 Jan 2021 17:51:30 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... 
syntax" }, { "msg_contents": "\nOn Wed, 27 Jan 2021 at 16:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Tue, Jan 26, 2021 at 9:18 AM japin <japinli@hotmail.com> wrote:\n>>\n>>\n>> When I read the discussion in [1], I found that update subscription's publications\n>> is complicated.\n>>\n>> For example, I have 5 publications in subscription.\n>>\n>> CREATE SUBSCRIPTION mysub1 CONNECTION 'host=localhost port=5432 dbname=postgres'\n>> PUBLICATION mypub1, mypub2, mypub3, mypub4, mypub5;\n>>\n>> If I want to drop \"mypub4\", we should use the following command:\n>>\n>> ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5;\n>>\n>> Also, if I want to add \"mypub7\" and \"mypub8\", it will use:\n>>\n>> ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5, mypub7, mypub8;\n>>\n>> Attached implement ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax, for the above\n>> two cases, we can use the following:\n>>\n>> ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub4;\n>>\n>> ALTER SUBSCRIPTION mysub1 ADD PUBLICATION mypub7, mypub8;\n>>\n>> I think it's more convenient. Any thoughts?\n>>\n>\n> While the new proposed syntax does seem to provide some ease for users\n> but it has nothing which we can't do with current syntax. Also, in the\n> current syntax, there is an additional provision for refreshing the\n> existing publications as well.
So, if the user has to change the\n> existing subscription such that it has to (a) add new publication(s),\n> (b) remove some publication(s), (c) refresh existing publication(s)\n> then all can be done in one command whereas with your new proposed\n> syntax user has to write three separate commands.\n>\n\nIf we want to add and drop some publications, we can use SET PUBLICATION, it\nis more convenient than ADD and DROP PUBLICATION, however if we just want to\nadd (or drop) a publication into (or from) a subscription which has many publications,\nthen the new syntax is more convenient IMO.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 27 Jan 2021 17:56:36 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Wed, Jan 27, 2021 at 3:26 PM japin <japinli@hotmail.com> wrote:\n>\n>\n> On Wed, 27 Jan 2021 at 16:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Jan 26, 2021 at 9:18 AM japin <japinli@hotmail.com> wrote:\n> >>\n> >>\n> >> When I read the discussion in [1], I found that update subscription's publications\n> >> is complicated.\n> >>\n> >> For example, I have 5 publications in subscription.\n> >>\n> >> CREATE SUBSCRIPTION mysub1 CONNECTION 'host=localhost port=5432 dbname=postgres'\n> >> PUBLICATION mypub1, mypub2, mypub3, mypub4, mypub5;\n> >>\n> >> If I want to drop \"mypub4\", we should use the following command:\n> >>\n> >> ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5;\n> >>\n> >> Also, if I want to add \"mypub7\" and \"mypub8\", it will use:\n> >>\n> >> ALTER SUBSCRIPTION mysub1 SET PUBLICATION mypub1, mypub2, mypub3, mypub5, mypub7, mypub8;\n> >>\n> >> Attached implement ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ...
syntax, for the above\n> >> two cases, we can use the following:\n> >>\n> >> ALTER SUBSCRIPTION mysub1 DROP PUBLICATION mypub4;\n> >>\n> >> ALTER SUBSCRIPTION mysub1 ADD PUBLICATION mypub7, mypub8;\n> >>\n> >> I think it's more convenient. Any thoughts?\n> >>\n> >\n> > While the new proposed syntax does seem to provide some ease for users\n> > but it has nothing which we can't do with current syntax. Also, in the\n> > current syntax, there is an additional provision for refreshing the\n> > existing publications as well. So, if the user has to change the\n> > existing subscription such that it has to (a) add new publication(s),\n> > (b) remove some publication(s), (c) refresh existing publication(s)\n> > then all can be done in one command whereas with your new proposed\n> > syntax user has to write three separate commands.\n> >\n>\n> If we want to add and drop some publications, we can use SET PUBLICATION, it\n> is more convenient than ADD and DROP PUBLICATION, however if we just want to\n> add (or drop) a publication into (or from) a subscription which has many publications,\n> then the new syntax is more convenient IMO.\n>\n\nI agree with you that if we just want to add or remove a few\npublications in the existing subscription then providing the complete\nlist is not convenient. The new syntax is way better, although I am\nnot sure how frequently users need to add/remove publication in the\nsubscription.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Jan 2021 16:17:21 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Wed, Jan 27, 2021 at 3:16 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jan 27, 2021 at 3:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> So, I think the new syntax, ALTER SUBSCRIPTION ..
ADD/DROP PUBLICATION\n> will refresh the new and existing publications.\n>\n\nThat sounds a bit unusual to me because when the user has specifically\nasked to just ADD Publication, we might refresh some existing\nPublication along with it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 27 Jan 2021 16:42:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Wed, Jan 27, 2021 at 4:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 27, 2021 at 3:16 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Jan 27, 2021 at 3:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > So, I think the new syntax, ALTER SUBSCRIPTION .. ADD/DROP PUBLICATION\n> > will refresh the new and existing publications.\n> >\n>\n> That sounds a bit unusual to me because when the user has specifically\n> asked to just ADD Publication, we might refresh some existing\n> Publication along with it?\n\nHmm. That's correct. I also feel we should not touch the existing\npublications, only the ones that are added/dropped should be\nrefreshed. Because there will be an overhead of a SQL with more\npublications(in fetch_table_list) when AlterSubscription_refresh() is\ncalled with all the existing publications. We could just pass in the\nnewly added/dropped publications to AlterSubscription_refresh().\n\nI don't see any problem if ALTER SUBSCRIPTION ... ADD PUBLICATION with\nrefresh true refreshes only the newly added publications, because what\nwe do in AlterSubscription_refresh() is that we fetch the tables\nassociated with the publications from the publisher, compare them with\nthe previously fetched tables from that publication and add the new\ntables or remove the table that don't exist in that publication\nanymore.\n\nFor ALTER SUBSCRIPTION ... 
DROP PUBLICATION, also we can do the same\nthing i.e. refreshes only the dropped publications.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Jan 2021 17:11:38 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "\r\n\r\n> On Jan 27, 2021, at 19:41, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> \r\n> On Wed, Jan 27, 2021 at 4:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>>> On Wed, Jan 27, 2021 at 3:16 PM Bharath Rupireddy\r\n>>> <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n>>>> On Wed, Jan 27, 2021 at 3:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>>> So, I think the new syntax, ALTER SUBSCRIPTION .. ADD/DROP PUBLICATION\r\n>>> will refresh the new and existing publications.\r\n>> That sounds a bit unusual to me because when the user has specifically\r\n>> asked to just ADD Publication, we might refresh some existing\r\n>> Publication along with it?\r\n> \r\n> Hmm. That's correct. I also feel we should not touch the existing\r\n> publications, only the ones that are added/dropped should be\r\n> refreshed. Because there will be an overhead of a SQL with more\r\n> publications(in fetch_table_list) when AlterSubscription_refresh() is\r\n> called with all the existing publications. We could just pass in the\r\n> newly added/dropped publications to AlterSubscription_refresh().\r\n> \r\n> I don't see any problem if ALTER SUBSCRIPTION ... 
ADD PUBLICATION with\r\n> refresh true refreshes only the newly added publications, because what\r\n> we do in AlterSubscription_refresh() is that we fetch the tables\r\n> associated with the publications from the publisher, compare them with\r\n> the previously fetched tables from that publication and add the new\r\n> tables or remove the table that don't exist in that publication\r\n> anymore.\r\n> \r\n> For ALTER SUBSCRIPTION ... DROP PUBLICATION, also we can do the same\r\n> thing i.e. refreshes only the dropped publications.\r\n> \r\n> Thoughts?\r\n\r\nAgreed. We just only need to refresh the added/dropped publications. Furthermore, for publications that will be dropped, we do not need the “copy_data” option, right?\r\n\r\n-- \r\nRegrads,\r\nJapin Li.\r\nChengDu WenWu Information Technology Co.,Ltd.\r\n\r\n", "msg_date": "Wed, 27 Jan 2021 14:05:06 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Wed, Jan 27, 2021 at 7:35 PM Li Japin <japinli@hotmail.com> wrote:\n> > I don't see any problem if ALTER SUBSCRIPTION ... ADD PUBLICATION with\n> > refresh true refreshes only the newly added publications, because what\n> > we do in AlterSubscription_refresh() is that we fetch the tables\n> > associated with the publications from the publisher, compare them with\n> > the previously fetched tables from that publication and add the new\n> > tables or remove the table that don't exist in that publication\n> > anymore.\n> >\n> > For ALTER SUBSCRIPTION ... DROP PUBLICATION, also we can do the same\n> > thing i.e. refreshes only the dropped publications.\n> >\n> > Thoughts?\n>\n> Agreed. We just only need to refresh the added/dropped publications. Furthermore, for publications that will be dropped, we do not need the “copy_data” option, right?\n\nI think you are right. The copy_data option doesn't make sense for\nALTER SUBSCRIPTION ... 
DROP PUBLICATION, maybe we should throw an\nerror if the user specifies it. Of course, we need that option for\nALTER SUBSCRIPTION ... ADD PUBLICATION.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Jan 2021 09:52:11 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Thu, 28 Jan 2021 at 12:22, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Wed, Jan 27, 2021 at 7:35 PM Li Japin <japinli@hotmail.com> wrote:\n>> > I don't see any problem if ALTER SUBSCRIPTION ... ADD PUBLICATION with\n>> > refresh true refreshes only the newly added publications, because what\n>> > we do in AlterSubscription_refresh() is that we fetch the tables\n>> > associated with the publications from the publisher, compare them with\n>> > the previously fetched tables from that publication and add the new\n>> > tables or remove the table that don't exist in that publication\n>> > anymore.\n>> >\n>> > For ALTER SUBSCRIPTION ... DROP PUBLICATION, also we can do the same\n>> > thing i.e. refreshes only the dropped publications.\n>> >\n>> > Thoughts?\n>>\n>> Agreed. We just only need to refresh the added/dropped publications. Furthermore, for publications that will be dropped, we do not need the “copy_data” option, right?\n>\n> I think you are right. The copy_data option doesn't make sense for\n> ALTER SUBSCRIPTION ... DROP PUBLICATION, maybe we should throw an\n> error if the user specifies it. Of course, we need that option for\n> ALTER SUBSCRIPTION ... ADD PUBLICATION.\n>\n\nThanks for your confirm. 
Attached v3 patch fix it.\n\n* v3-0001\nOnly refresh the publications that will be added/dropped, also remove \"copy_data\"\noption from DROP PUBLICATION.\n\n* v3-0002\nAdd a new testcase for DROP PUBLICATION WITH (copy_data).\n\n* v3-0003\nRemove the reference of REFRESH PUBLICATION in DROP PUBLICATION.\n\n* v3-0004\nDo not change.\n\nAttaching v3 patches, please consider these for further review.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Thu, 28 Jan 2021 12:37:03 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Thu, Jan 28, 2021 at 10:07 AM japin <japinli@hotmail.com> wrote:\n> Attaching v3 patches, please consider these for further review.\n\nI think we can add a commitfest entry for this feature, so that the\npatches will be tested on cfbot. Ignore if done already.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Feb 2021 10:45:38 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "\nOn Wed, 03 Feb 2021 at 13:15, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Thu, Jan 28, 2021 at 10:07 AM japin <japinli@hotmail.com> wrote:\n>> Attaching v3 patches, please consider these for further review.\n>\n> I think we can add a commitfest entry for this feature, so that the\n> patches will be tested on cfbot. Ignore if done already.\n>\n\nAgreed. 
I made an entry in the commitfest[1].\n\n[1] - https://commitfest.postgresql.org/32/2965/\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 03 Feb 2021 16:32:10 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Wed, Feb 3, 2021 at 2:02 PM japin <japinli@hotmail.com> wrote:\n> On Wed, 03 Feb 2021 at 13:15, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Thu, Jan 28, 2021 at 10:07 AM japin <japinli@hotmail.com> wrote:\n> >> Attaching v3 patches, please consider these for further review.\n> >\n> > I think we can add a commitfest entry for this feature, so that the\n> > patches will be tested on cfbot. Ignore if done already.\n> >\n>\n> Agreed. I made an entry in the commitfest[1].\n>\n> [1] - https://commitfest.postgresql.org/32/2965/\n\nThanks. Few comments on 0001 patch:\n\n1) Are we throwing an error if the copy_data option is specified for\nDROP? If I'm reading the patch correctly, I think we should let\nparse_subscription_options tell whether the copy_data option is\nprovided irrespective of ADD or DROP, and in case of DROP we should\nthrow an error outside of parse_subscription_options?\n\n2) What's the significance of the cell == NULL else if clause? IIUC,\nwhen we don't enter + foreach(cell, publist) or if we enter and\npublist has become NULL by then, then the cell can be NULL. If my\nunderstanding is correct, we can move publist == NULL check within the\ninner for loop and remote else if (cell == NULL)? Thoughts? 
If you\nhave a strong reasong retain this error errmsg(\"publication name\n\\\"%s\\\" do not in subscription\", then there's a typo\nerrmsg(\"publication name \\\"%s\\\" does not exists in subscription\".\n\n+ else if (cell == NULL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"publication name \\\"%s\\\" do not in subscription\",\n+ name)));\n+ }\n+\n+ if (publist == NIL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"subscription must contain at least one\npublication\")));\n\n3) In merge_subpublications, instead of doing heap_deform_tuple and\npreparing the existing publist ourselves, can't we reuse\ntextarray_to_stringlist to prepare the publist? Can't we just pass\n\"tup\" and \"form\" to merge_subpublications and do like below:\n\n /* Get publications */\n datum = SysCacheGetAttr(SUBSCRIPTIONOID,\n tup,\n Anum_pg_subscription_subpublications,\n &isnull);\n Assert(!isnull);\n publist = textarray_to_stringlist(DatumGetArrayTypeP(datum));\n\nSee the code in GetSubscription\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Feb 2021 15:20:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Fri, 05 Feb 2021 at 17:50, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Wed, Feb 3, 2021 at 2:02 PM japin <japinli@hotmail.com> wrote:\n>> On Wed, 03 Feb 2021 at 13:15, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> > On Thu, Jan 28, 2021 at 10:07 AM japin <japinli@hotmail.com> wrote:\n>> >> Attaching v3 patches, please consider these for further review.\n>> >\n>> > I think we can add a commitfest entry for this feature, so that the\n>> > patches will be tested on cfbot. Ignore if done already.\n>> >\n>>\n>> Agreed. 
I made an entry in the commitfest[1].\n>>\n>> [1] - https://commitfest.postgresql.org/32/2965/\n>\n> Thanks. Few comments on 0001 patch:\n>\n\nThanks for your reviewing.\n\n> 1) Are we throwing an error if the copy_data option is specified for\n> DROP?\n\nYes, it will throw an error like:\n\nERROR: unrecognized subscription parameter: \"copy_data\"\n\n> If I'm reading the patch correctly, I think we should let\n> parse_subscription_options tell whether the copy_data option is\n> provided irrespective of ADD or DROP, and in case of DROP we should\n> throw an error outside of parse_subscription_options?\n>\n\nIIUC, the parse_subscription_options cannot tell us whether the copy_data option\nis provided or not.\n\nThe comments of parse_subscription_options says:\n\n/*\n * Common option parsing function for CREATE and ALTER SUBSCRIPTION commands.\n *\n * Since not all options can be specified in both commands, this function\n * will report an error on options if the target output pointer is NULL to\n * accommodate that.\n */\n\nSo I think we can do this for DROP.\n\n> 2) What's the significance of the cell == NULL else if clause? IIUC,\n> when we don't enter + foreach(cell, publist) or if we enter and\n> publist has become NULL by then, then the cell can be NULL. If my\n> understanding is correct, we can move publist == NULL check within the\n> inner for loop and remote else if (cell == NULL)? Thoughts?\n\nWe will get cell == NULL when we iterate all items in publist. 
I use it\nto check whether the dropped publication is in publist or not.\n\n> If you\n> have a strong reasong retain this error errmsg(\"publication name\n> \\\"%s\\\" do not in subscription\", then there's a typo\n> errmsg(\"publication name \\\"%s\\\" does not exists in subscription\".\n>\n\nFixed.\n\n> + else if (cell == NULL)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"publication name \\\"%s\\\" do not in subscription\",\n> + name)));\n> + }\n> +\n> + if (publist == NIL)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"subscription must contain at least one\n> publication\")));\n>\n> 3) In merge_subpublications, instead of doing heap_deform_tuple and\n> preparing the existing publist ourselves, can't we reuse\n> textarray_to_stringlist to prepare the publist? Can't we just pass\n> \"tup\" and \"form\" to merge_subpublications and do like below:\n>\n> /* Get publications */\n> datum = SysCacheGetAttr(SUBSCRIPTIONOID,\n> tup,\n> Anum_pg_subscription_subpublications,\n> &isnull);\n> Assert(!isnull);\n> publist = textarray_to_stringlist(DatumGetArrayTypeP(datum));\n>\n> See the code in GetSubscription\n>\n\nFixed\n\nAttaching v4 patches, please consider these for further review.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Fri, 05 Feb 2021 21:21:37 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Fri, Feb 5, 2021 at 6:51 PM japin <japinli@hotmail.com> wrote:\n> On Fri, 05 Feb 2021 at 17:50, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> We will get cell == NULL when we iterate all items in publist. 
I use it\n> to check whether the dropped publication is in publist or not.\n>\n> > If you\n> > have a strong reasong retain this error errmsg(\"publication name\n> > \\\"%s\\\" do not in subscription\", then there's a typo\n> > errmsg(\"publication name \\\"%s\\\" does not exists in subscription\".\n>\n> Fixed.\n\nI think we still have a typo in 0002, it's\n+ errmsg(\"publication name \\\"%s\\\" does not exist\nin subscription\",\ninstead of\n+ errmsg(\"publication name \\\"%s\\\" does not exists\nin subscription\",\n\nIIUC, with the current patch, the new ALTER SUBSCRIPTION ... ADD/DROP\nerrors out on the first publication that already exists/that doesn't\nexist right? What if there are multiple publications given in the\nADD/DROP list, and few of them exist/don't exist. Isn't it good if we\nloop over the subscription's publication list and show all the already\nexisting/not existing publications in the error message, instead of\njust erroring out for the first existing/not existing publication?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Feb 2021 19:19:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "Thanks for your review again.\n\nOn Wed, 10 Feb 2021 at 21:49, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Fri, Feb 5, 2021 at 6:51 PM japin <japinli@hotmail.com> wrote:\n>> On Fri, 05 Feb 2021 at 17:50, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> We will get cell == NULL when we iterate all items in publist. 
I use it\n>> to check whether the dropped publication is in publist or not.\n>>\n>> > If you\n>> > have a strong reasong retain this error errmsg(\"publication name\n>> > \\\"%s\\\" do not in subscription\", then there's a typo\n>> > errmsg(\"publication name \\\"%s\\\" does not exists in subscription\".\n>>\n>> Fixed.\n>\n> I think we still have a typo in 0002, it's\n> + errmsg(\"publication name \\\"%s\\\" does not exist\n> in subscription\",\n> instead of\n> + errmsg(\"publication name \\\"%s\\\" does not exists\n> in subscription\",\n>\n\nFixed.\n\n> IIUC, with the current patch, the new ALTER SUBSCRIPTION ... ADD/DROP\n> errors out on the first publication that already exists/that doesn't\n> exist right? What if there are multiple publications given in the\n> ADD/DROP list, and few of them exist/don't exist. Isn't it good if we\n> loop over the subscription's publication list and show all the already\n> existing/not existing publications in the error message, instead of\n> just erroring out for the first existing/not existing publication?\n>\n\nYes, you are right. Agree with you, I modified it. Please consider v5\nfor further review.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Sat, 13 Feb 2021 14:11:23 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Sat, Feb 13, 2021 at 11:41 AM japin <japinli@hotmail.com> wrote:\n> > IIUC, with the current patch, the new ALTER SUBSCRIPTION ... ADD/DROP\n> > errors out on the first publication that already exists/that doesn't\n> > exist right? What if there are multiple publications given in the\n> > ADD/DROP list, and few of them exist/don't exist. 
Isn't it good if we\n> > loop over the subscription's publication list and show all the already\n> > existing/not existing publications in the error message, instead of\n> > just erroring out for the first existing/not existing publication?\n> >\n>\n> Yes, you are right. Agree with you, I modified it. Please consider v5\n> for further review.\n\nThanks for the updated patches. I have a comment about reporting the\nexisting/not existing publications code. How about something like the\nattached delta patch on v5-0002? Sorry for attaching\n\nI also think that we could merge 0002 into 0001 and have only 4\npatches in the patch set.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 15 Feb 2021 08:13:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Mon, Feb 15, 2021 at 8:13 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, Feb 13, 2021 at 11:41 AM japin <japinli@hotmail.com> wrote:\n> > > IIUC, with the current patch, the new ALTER SUBSCRIPTION ... ADD/DROP\n> > > errors out on the first publication that already exists/that doesn't\n> > > exist right? What if there are multiple publications given in the\n> > > ADD/DROP list, and few of them exist/don't exist. Isn't it good if we\n> > > loop over the subscription's publication list and show all the already\n> > > existing/not existing publications in the error message, instead of\n> > > just erroring out for the first existing/not existing publication?\n> > >\n> >\n> > Yes, you are right. Agree with you, I modified it. Please consider v5\n> > for further review.\n>\n> Thanks for the updated patches. I have a comment about reporting the\n> existing/not existing publications code. 
How about something like the\n> attached delta patch on v5-0002?\n\nAttaching the v6 patch set so that cfbot can proceed to test the\npatches. The above delta patch was merged into 0002. Please have a\nlook.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 16 Feb 2021 07:28:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "\nOn Tue, 16 Feb 2021 at 09:58, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Mon, Feb 15, 2021 at 8:13 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Sat, Feb 13, 2021 at 11:41 AM japin <japinli@hotmail.com> wrote:\n>> > > IIUC, with the current patch, the new ALTER SUBSCRIPTION ... ADD/DROP\n>> > > errors out on the first publication that already exists/that doesn't\n>> > > exist right? What if there are multiple publications given in the\n>> > > ADD/DROP list, and few of them exist/don't exist. Isn't it good if we\n>> > > loop over the subscription's publication list and show all the already\n>> > > existing/not existing publications in the error message, instead of\n>> > > just erroring out for the first existing/not existing publication?\n>> > >\n>> >\n>> > Yes, you are right. Agree with you, I modified it. Please consider v5\n>> > for further review.\n>>\n>> Thanks for the updated patches. I have a comment about reporting the\n>> existing/not existing publications code. How about something like the\n>> attached delta patch on v5-0002?\n>\n> Attaching the v6 patch set so that cfbot can proceed to test the\n> patches. The above delta patch was merged into 0002. Please have a\n> look.\n>\n\nThanks for the updated patches. 
I'm on vacation.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 18 Feb 2021 10:31:14 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Thu, Feb 18, 2021 at 8:01 AM japin <japinli@hotmail.com> wrote:\n> On Tue, 16 Feb 2021 at 09:58, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Mon, Feb 15, 2021 at 8:13 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> On Sat, Feb 13, 2021 at 11:41 AM japin <japinli@hotmail.com> wrote:\n> >> > > IIUC, with the current patch, the new ALTER SUBSCRIPTION ... ADD/DROP\n> >> > > errors out on the first publication that already exists/that doesn't\n> >> > > exist right? What if there are multiple publications given in the\n> >> > > ADD/DROP list, and few of them exist/don't exist. Isn't it good if we\n> >> > > loop over the subscription's publication list and show all the already\n> >> > > existing/not existing publications in the error message, instead of\n> >> > > just erroring out for the first existing/not existing publication?\n> >> > >\n> >> >\n> >> > Yes, you are right. Agree with you, I modified it. Please consider v5\n> >> > for further review.\n> >>\n> >> Thanks for the updated patches. I have a comment about reporting the\n> >> existing/not existing publications code. How about something like the\n> >> attached delta patch on v5-0002?\n> >\n> > Attaching the v6 patch set so that cfbot can proceed to test the\n> > patches. The above delta patch was merged into 0002. Please have a\n> > look.\n> >\n>\n> Thanks for the updated patches. 
I'm on vacation.\n\nI'm reading through the v6 patches again, here are some comments.\n\n1) IMO, we can merge 0001 into 0002\n2) A typo, it's \"current\" not \"ccurrent\" - + * Merge ccurrent\nsubscription's publications and user specified publications\n3) In merge_subpublications, do we need to error out or do something\ninstead of Assert(!isnull); as in the release build we don't reach\nassert. So, if at all catalogue search returns a null tuple, we don't\nsurprise users.\n4) Can we have a better name for the function merge_subpublications\nsay merge_publications (because it's a local function to\nsubscriptioncmds.c we don't need \"sub\" in function name) or any other\nbetter name?\n5) Instead of doing catalogue look up for the subscriber publications\nin merge_subpublications, why can't we pass in the list from sub =\nGetSubscription(subid, false); (being called in AlterSubscription)\n---> sub->publications. Do you see any problems in doing so? If done\nthat, we can discard the 0001 patch and comments (1) and (3) becomes\nirrelevant.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 7 Mar 2021 17:13:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Sun, 07 Mar 2021 at 19:43, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> I'm reading through the v6 patches again, here are some comments.\n>\n\nThanks for your review again.\n\n> 1) IMO, we can merge 0001 into 0002\n> 2) A typo, it's \"current\" not \"ccurrent\" - + * Merge ccurrent\n> subscription's publications and user specified publications\n\nFixed.\n\n> 3) In merge_subpublications, do we need to error out or do something\n> instead of Assert(!isnull); as in the release build we don't reach\n> assert. 
So, if at all catalogue search returns a null tuple, we don't\n> surprise users.\n> 4) Can we have a better name for the function merge_subpublications\n> say merge_publications (because it's a local function to\n> subscriptioncmds.c we don't need \"sub\" in function name) or any other\n> better name?\n\nRename merge_subpublications to merge_publications as you suggested.\n\n> 5) Instead of doing catalogue look up for the subscriber publications\n> in merge_subpublications, why can't we pass in the list from sub =\n> GetSubscription(subid, false); (being called in AlterSubscription)\n> ---> sub->publications. Do you see any problems in doing so? If done\n> that, we can discard the 0001 patch and comments (1) and (3) becomes\n> irrelevant.\n\nThank you point out this. Fixed it in v7 patch set.\n\nPlease consider the v7 patch for futher review.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Sun, 07 Mar 2021 21:50:41 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Sun, Mar 7, 2021 at 7:21 PM Japin Li <japinli@hotmail.com> wrote:\n> Thank you point out this. Fixed it in v7 patch set.\n>\n> Please consider the v7 patch for futher review.\n\nThanks for the patches. I just found the following behaviour with the\nnew ADD/DROP syntax: when the specified publication list has\nduplicates, the patch is throwing \"publication is already present\"\nerror. It's adding the first instance of the duplicate into the list\nand the second instance is being checked in the added list and\nthrowing the \"already present error\". 
The error message means that the\npublication is already present in the subscription but it's not true.\nSee my testing at [1].\n\nI think we have two cases:\ncase 1: the publication/s specified in the new ADD/DROP syntax may/may\nnot have already been associated with the subscription, so the error\n\"publication is already present\"/\"publication doesn't exist\" error\nmakes sense.\ncase 2: there can be duplicate publications specified in the new\nADD/DROP syntax, in this case the error \"publication name \"mypub2\"\nused more than once\" makes more sense much like [2].\n\n[1]\npostgres=# select subpublications from pg_subscription;\n subpublications\n-----------------\n {mypub,mypub1}\n\npostgres=# alter subscription mysub add publication mypub2, mypub2;\nERROR: publication \"mypub2\" is already present in the subscription\n\npostgres=# select subpublications from pg_subscription;\n subpublications\n-----------------------\n {mypub,mypub1,mypub2}\n\npostgres=# alter subscription mysub drop publication mypub2, mypub2;\nERROR: publication \"mypub2\" doesn't exist in the subscription\n\n[2]\npostgres=# alter subscription mysub set publication mypub2, mypub2;\nERROR: publication name \"mypub2\" used more than once\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Mar 2021 08:44:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Mon, 22 Mar 2021 at 11:14, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Sun, Mar 7, 2021 at 7:21 PM Japin Li <japinli@hotmail.com> wrote:\n>> Thank you point out this. Fixed it in v7 patch set.\n>>\n>> Please consider the v7 patch for futher review.\n>\n> Thanks for the patches. 
I just found the following behaviour with the\n> new ADD/DROP syntax: when the specified publication list has\n> duplicates, the patch is throwing \"publication is already present\"\n> error. It's adding the first instance of the duplicate into the list\n> and the second instance is being checked in the added list and\n> throwing the \"already present error\". The error message means that the\n> publication is already present in the subscription but it's not true.\n> See my testing at [1].\n>\n> I think we have two cases:\n> case 1: the publication/s specified in the new ADD/DROP syntax may/may\n> not have already been associated with the subscription, so the error\n> \"publication is already present\"/\"publication doesn't exist\" error\n> makes sense.\n> case 2: there can be duplicate publications specified in the new\n> ADD/DROP syntax, in this case the error \"publication name \"mypub2\"\n> used more than once\" makes more sense much like [2].\n>\n> [1]\n> postgres=# select subpublications from pg_subscription;\n> subpublications\n> -----------------\n> {mypub,mypub1}\n>\n> postgres=# alter subscription mysub add publication mypub2, mypub2;\n> ERROR: publication \"mypub2\" is already present in the subscription\n>\n> postgres=# select subpublications from pg_subscription;\n> subpublications\n> -----------------------\n> {mypub,mypub1,mypub2}\n>\n> postgres=# alter subscription mysub drop publication mypub2, mypub2;\n> ERROR: publication \"mypub2\" doesn't exist in the subscription\n>\n> [2]\n> postgres=# alter subscription mysub set publication mypub2, mypub2;\n> ERROR: publication name \"mypub2\" used more than once\n>\n\nThanks for your review.\n\nI check the duplicates for newpublist in merge_publications(). 
The code is\ncopied from publicationListToArray().\n\nI do not check for all duplicates because it will make the code more complex.\nFor example:\n\nALTER SUBSCRIPTION mysub ADD PUBLICATION mypub2, mypub2, mypub2;\n\nIf we record the duplicate publication names in list A, when we find a\nduplication in newpublist, we should check whether the publication is\nin list A or not, to make the error message make sense (do not have\nduplicate publication names in error message).\n\nThought?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Tue, 23 Mar 2021 23:08:43 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On Tue, Mar 23, 2021 at 8:39 PM Japin Li <japinli@hotmail.com> wrote:\n> I check the duplicates for newpublist in merge_publications(). The code is\n> copied from publicationListToArray().\n\nIMO, we can have the same code inside a function, probably named\n\"check_duplicates_in_publist\" or some other better name:\nstatic void\ncheck_duplicates_in_publist(List *publist, Datum *datums)\n{\n int j = 0;\n ListCell *cell;\n\n foreach(cell, publist)\n {\n char *name = strVal(lfirst(cell));\n ListCell *pcell;\n\n /* Check for duplicates. 
*/\n foreach(pcell, publist)\n {\n char *pname = strVal(lfirst(pcell));\n\n if (pcell == cell)\n break;\n\n if (strcmp(name, pname) == 0)\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"publication name \\\"%s\\\" used more than once\",\n pname)));\n }\n\n if (datums)\n datums[j++] = CStringGetTextDatum(name);\n }\n}\n\n From publicationListToArray, call check_duplicates_in_publist(publist, datums);\n From merge_publications, call check_duplicates_in_publist(newpublist, NULL);\n\n> I do not check for all duplicates because it will make the code more complex.\n> For example:\n>\n> ALTER SUBSCRIPTION mysub ADD PUBLICATION mypub2, mypub2, mypub2;\n\nThat's fine because we anyways, error out.\n\n0002:\nThe tests added in subscription.sql look fine to me and they cover\nmost of the syntax related code. But it will be good if we can add\ntests to see if the data of the newly added/dropped publications\nwould/would not reflect on the subscriber, maybe you can consider\nadding these tests into 001_rep_changes.pl, similar to ALTER\nSUBSCRIPTION ... SET PUBLICATION test there.\n\n0003:\nI think it's not \"set_publication_option\", they are\n\"add_publication_option\" and \"drop_publication_option\" for ADD and\nDROP respectively. Please change it wherever \"set_publication_option\"\nis used instead.\n+ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable>\nADD PUBLICATION <replaceable\nclass=\"parameter\">publication_name</replaceable> [, ...] [ WITH (\n<replaceable class=\"parameter\">set_publication_option</replaceable> [=\n<replaceable class=\"parameter\">value</replaceable>] [, ... ] ) ]\n+ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable>\nDROP PUBLICATION <replaceable\nclass=\"parameter\">publication_name</replaceable> [, ...] [ WITH (\n<replaceable class=\"parameter\">set_publication_option</replaceable> [=\n<replaceable class=\"parameter\">value</replaceable>] [, ... 
] ) ]\n\n0004:\nLGTM.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Apr 2021 21:23:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On 23.03.21 16:08, Japin Li wrote:\n> I check the duplicates for newpublist in merge_publications(). The code is\n> copied from publicationListToArray().\n> \n> I do not check for all duplicates because it will make the code more complex.\n> For example:\n> \n> ALTER SUBSCRIPTION mysub ADD PUBLICATION mypub2, mypub2, mypub2;\n> \n> If we record the duplicate publication names in list A, when we find a\n> duplication in newpublist, we should check whether the publication is\n> in list A or not, to make the error message make sense (do not have\n> duplicate publication names in error message).\n\nThe code you have in merge_publications() to report all existing \npublications is pretty messy and is not properly internationalized. I \nthink what you are trying to do there is excessive. Compare this \nsimilar case:\n\ncreate table t1 (a int, b int);\nalter table t1 add column a int, add column b int;\nERROR: 42701: column \"a\" of relation \"t1\" already exists\n\nI think you can make both this and the duplicate checking much simpler \nif you just report the first conflict.\n\nI think this patch is about ready to commit, but please provide a final \nversion in good time.\n\n(Also, please combine your patches into a single patch.)\n\n\n", "msg_date": "Fri, 2 Apr 2021 21:59:13 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... 
syntax" }, { "msg_contents": "On Sat, Apr 3, 2021 at 1:29 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> The code you have in merge_publications() to report all existing\n> publications is pretty messy and is not properly internationalized. I\n> think what you are trying to do there is excessive. Compare this\n> similar case:\n>\n> create table t1 (a int, b int);\n> alter table t1 add column a int, add column b int;\n> ERROR: 42701: column \"a\" of relation \"t1\" already exists\n>\n> I think you can make both this and the duplicate checking much simpler\n> if you just report the first conflict.\n\nYes, we are erroring out on the first conflict for both duplicates and\nin merge_publications.\n\n> I think this patch is about ready to commit, but please provide a final\n> version in good time.\n\nI took the liberty to address all the review comments and provide a v9\npatch on top of Japin's v8 patch-set.\n\n> (Also, please combine your patches into a single patch.)\n\nDone.\n\nAttaching v9 patch, please review it.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 3 Apr 2021 10:50:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "\nOn Sat, 03 Apr 2021 at 13:20, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Sat, Apr 3, 2021 at 1:29 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> The code you have in merge_publications() to report all existing\n>> publications is pretty messy and is not properly internationalized. I\n>> think what you are trying to do there is excessive. 
Compare this\n>> similar case:\n>>\n>> create table t1 (a int, b int);\n>> alter table t1 add column a int, add column b int;\n>> ERROR: 42701: column \"a\" of relation \"t1\" already exists\n>>\n>> I think you can make both this and the duplicate checking much simpler\n>> if you just report the first conflict.\n>\n> Yes, we are erroring out on the first conflict for both duplicates and\n> in merge_publications.\n>\n>> I think this patch is about ready to commit, but please provide a final\n>> version in good time.\n>\n> I took the liberty to address all the review comments and provide a v9\n> patch on top of Japin's v8 patch-set.\n>\n>> (Also, please combine your patches into a single patch.)\n>\n> Done.\n>\n> Attaching v9 patch, please review it.\n>\n\nSorry for the late reply! Thanks for your updating the new patch, and it looks\ngood to me.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Tue, 06 Apr 2021 13:24:23 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "On 06.04.21 07:24, Japin Li wrote:\n>>> I think this patch is about ready to commit, but please provide a final\n>>> version in good time.\n>> I took the liberty to address all the review comments and provide a v9\n>> patch on top of Japin's v8 patch-set.\n>>\n>>> (Also, please combine your patches into a single patch.)\n>> Done.\n>>\n>> Attaching v9 patch, please review it.\n>>\n> Sorry for the late reply! Thanks for your updating the new patch, and it looks\n> good to me.\n\nCommitted.\n\nNote that you can use syntax like \"ADD|DROP|SET\" in the tab completion \ncode. 
I have simplified some of your code like that.\n\nI also deduplicated the documentation additions a bit.\n\n\n\n", "msg_date": "Tue, 6 Apr 2021 11:56:32 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" }, { "msg_contents": "\nOn Tue, 06 Apr 2021 at 17:56, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> On 06.04.21 07:24, Japin Li wrote:\n>>>> I think this patch is about ready to commit, but please provide a final\n>>>> version in good time.\n>>> I took the liberty to address all the review comments and provide a v9\n>>> patch on top of Japin's v8 patch-set.\n>>>\n>>>> (Also, please combine your patches into a single patch.)\n>>> Done.\n>>>\n>>> Attaching v9 patch, please review it.\n>>>\n>> Sorry for the late reply! Thanks for your updating the new patch, and it looks\n>> good to me.\n>\n> Committed.\n>\n> Note that you can use syntax like \"ADD|DROP|SET\" in the tab completion\n> code. I have simplified some of your code like that.\n>\n> I also deduplicated the documentation additions a bit.\n\nThanks!\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 07 Apr 2021 10:49:02 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" } ]
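The review above settles on reporting only the first conflict, both for duplicate names given in the command and for publications already present in (or absent from) the subscription. As a rough standalone illustration of the O(n^2) duplicate scan discussed for check_duplicates_in_publist — using plain C strings instead of PostgreSQL List cells, and a hypothetical function name — the check can be sketched as:

```c
#include <stddef.h>
#include <string.h>

/*
 * Sketch only: the server code walks List cells and raises
 * ERROR 'publication name "%s" used more than once' on the
 * first repeat; here we just return the first duplicated name,
 * or NULL when all names are distinct.
 */
const char *
first_duplicate(const char *names[], size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < i; j++)
            if (strcmp(names[i], names[j]) == 0)
                return names[i];        /* first conflict wins */
    return NULL;
}
```

With the input from the test session quoted earlier, mypub2 would be reported on its first repetition — the "used more than once" message the thread prefers over the misleading "already present in the subscription" error.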
[ { "msg_contents": "Hi,\n\nWhen I read the documentation of ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (...),\nit says \"set_publication_option\" only support \"refresh\" in documentation [1].\nHowever, we can also supply the \"copy_data\" option, and the code is:\n\n case ALTER_SUBSCRIPTION_PUBLICATION:\n {\n bool copy_data;\n bool refresh;\n\n parse_subscription_options(stmt->options,\n NULL, /* no \"connect\" */\n NULL, NULL, /* no \"enabled\" */\n NULL, /* no \"create_slot\" */\n NULL, NULL, /* no \"slot_name\" */\n &copy_data,\n NULL, /* no \"synchronous_commit\" */\n &refresh,\n NULL, NULL, /* no \"binary\" */\n NULL, NULL); /* no \"streaming\" */\n values[Anum_pg_subscription_subpublications - 1] =\n publicationListToArray(stmt->publication);\n replaces[Anum_pg_subscription_subpublications - 1] = true;\n\n update_tuple = true;\n\n /* Refresh if user asked us to. */\n if (refresh)\n {\n if (!sub->enabled)\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"ALTER SUBSCRIPTION with refresh is not allowed for disabled subscriptions\"),\n errhint(\"Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).\")));\n\n /* Make sure refresh sees the new list of publications. */\n sub->publications = stmt->publication;\n\n AlterSubscription_refresh(sub, copy_data);\n }\n\n break;\n }\n\nShould we fix the documentation or the code? I'd be inclined fix the documentation.\n\n[1] - https://www.postgresql.org/docs/devel/sql-altersubscription.html\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Tue, 26 Jan 2021 19:26:24 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Fix ALTER SUBSCRIPTION ... SET PUBLICATION documentation" }, { "msg_contents": "On Tue, Jan 26, 2021 at 4:56 PM japin <japinli@hotmail.com> wrote:\n>\n>\n> Hi,\n>\n> When I read the documentation of ALTER SUBSCRIPTION ... SET PUBLICATION ... 
WITH (...),\n> it says \"set_publication_option\" only support \"refresh\" in documentation [1].\n> However, we can also supply the \"copy_data\" option, and the code is:\n>\n\nI think there is a reference to the 'copy_data' option as well. There\nis a sentence saying: \"Additionally, refresh options as described\nunder REFRESH PUBLICATION may be specified.\" and then if you Refresh\noption, there we do mention about 'copy_data', isn't that sufficient?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 27 Jan 2021 16:56:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix ALTER SUBSCRIPTION ... SET PUBLICATION documentation" }, { "msg_contents": "On Wed, Jan 27, 2021 at 4:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 26, 2021 at 4:56 PM japin <japinli@hotmail.com> wrote:\n> >\n> >\n> > Hi,\n> >\n> > When I read the documentation of ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (...),\n> > it says \"set_publication_option\" only support \"refresh\" in documentation [1].\n> > However, we can also supply the \"copy_data\" option, and the code is:\n> >\n>\n> I think there is a reference to the 'copy_data' option as well. There\n> is a sentence saying: \"Additionally, refresh options as described\n> under REFRESH PUBLICATION may be specified.\" and then if you Refresh\n> option, there we do mention about 'copy_data', isn't that sufficient?\n\nRight. It looks like the copy_option is indirectly mentioned with the\nstatement \"Additionally, refresh options as described under REFRESH\nPUBLICATION may be specified.\" under \"set_publication_option\". IMHO,\nwe can keep it that way.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Jan 2021 17:17:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix ALTER SUBSCRIPTION ... 
SET PUBLICATION documentation" }, { "msg_contents": "\nOn Wed, 27 Jan 2021 at 19:47, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Wed, Jan 27, 2021 at 4:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, Jan 26, 2021 at 4:56 PM japin <japinli@hotmail.com> wrote:\n>> >\n>> >\n>> > Hi,\n>> >\n>> > When I read the documentation of ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (...),\n>> > it says \"set_publication_option\" only support \"refresh\" in documentation [1].\n>> > However, we can also supply the \"copy_data\" option, and the code is:\n>> >\n>>\n>> I think there is a reference to the 'copy_data' option as well. There\n>> is a sentence saying: \"Additionally, refresh options as described\n>> under REFRESH PUBLICATION may be specified.\" and then if you Refresh\n>> option, there we do mention about 'copy_data', isn't that sufficient?\n>\n> Right. It looks like the copy_option is indirectly mentioned with the\n> statement \"Additionally, refresh options as described under REFRESH\n> PUBLICATION may be specified.\" under \"set_publication_option\". IMHO,\n> we can keep it that way.\n>\n\nMy bad. It may be sufficient. Sorry for noise.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 27 Jan 2021 21:21:24 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix ALTER SUBSCRIPTION ... SET PUBLICATION documentation" } ]
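The C fragment quoted at the top of this thread reduces to a small rule: copy_data is consulted only when a refresh is requested, and a refresh is rejected for a disabled subscription. A minimal sketch of that decision (hypothetical names, not the server code):

```c
#include <stdbool.h>

typedef enum
{
    SETPUB_OK_NO_REFRESH,   /* publication list updated; copy_data never consulted */
    SETPUB_OK_REFRESH,      /* AlterSubscription_refresh(sub, copy_data) runs */
    SETPUB_ERR_DISABLED     /* the ereport() path: refresh on a disabled subscription */
} SetPubOutcome;

/* Sketch of the outcome of ALTER SUBSCRIPTION ... SET PUBLICATION options. */
SetPubOutcome
setpub_outcome(bool refresh, bool enabled)
{
    if (!refresh)
        return SETPUB_OK_NO_REFRESH;
    if (!enabled)
        return SETPUB_ERR_DISABLED;
    return SETPUB_OK_REFRESH;
}
```

This is why the documentation question resolves the way it does: copy_data only matters on the refresh path, so the pointer to the REFRESH PUBLICATION options already covers it.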
[ { "msg_contents": "Here are the remaining parts of the original 64bit XID patch set that I was able to apply manually, albeit with TBD's and FIXME's. I was unable to apply some small parts of the original patch set. What is contained here should 'git apply' cleanly today, but doesn't compile. It documents most of what was done with this approach in the hope that it will inform a future 64-bit table AM implementation about the many things that need to be considered.", "msg_date": "Tue, 26 Jan 2021 19:06:57 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Challenges preventing us moving to 64 bit transaction id (XID)?" } ]
[ { "msg_contents": "I've created a new page, and added some unresolved items that I've been keeping\nin my head.\n\nhttps://wiki.postgresql.org/wiki/PostgreSQL_14_Open_Items\n\n\n", "msg_date": "Tue, 26 Jan 2021 15:55:15 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "wiki:PostgreSQL_14_Open_Items" }, { "msg_contents": "On Tue, Jan 26, 2021 at 03:55:15PM -0600, Justin Pryzby wrote:\n> I've created a new page, and added some unresolved items that I've been keeping\n> in my head.\n> \n> https://wiki.postgresql.org/wiki/PostgreSQL_14_Open_Items\n\nThanks, Justin!\n--\nMichael", "msg_date": "Wed, 27 Jan 2021 10:40:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: wiki:PostgreSQL_14_Open_Items" } ]
[ { "msg_contents": "Hi Hackers.\n\nAs discovered elsewhere [ak0125] there is a potential race condition\nin the pg_replication_origin_drop API\n\nThe current code of pg_replication_origin_drop looks like:\n====\nroident = replorigin_by_name(name, false);\nAssert(OidIsValid(roident));\n\nreplorigin_drop(roident, true);\n====\n\nUsers cannot deliberately drop a non-existent origin\n(replorigin_by_name passes missing_ok = false) but there is still a\nsmall window where concurrent processes may be able to call\nreplorigin_drop for the same valid roident.\n\nLocking within replorigin_drop guards against concurrent drops so the\n1st execution will succeed, but then the 2nd execution would give\ninternal cache error: elog(ERROR, \"cache lookup failed for replication\norigin with oid %u\", roident);\n\nSome ideas to fix this include:\n1. Do nothing except write a comment about this in the code. The\ninternal ERROR is not ideal for a user API there is no great harm\ndone.\n2. Change the behavior of replorigin_drop to be like\nreplorigin_drop_IF_EXISTS, so the 2nd execution of this race would\nsilently do nothing when it finds the roident is already gone.\n3. 
Same as 2, but make the NOP behavior more explicit by introducing a\nnew \"missing_ok\" parameter for replorigin_drop.\n\nThoughts?\n\n----\n[ak0125] https://www.postgresql.org/message-id/CAA4eK1%2ByeLwBCkTvTdPM-hSk1fr6jT8KJc362CN8zrGztq_JqQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 27 Jan 2021 10:27:46 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Wed, Jan 27, 2021 at 4:58 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Hackers.\n>\n> As discovered elsewhere [ak0125] there is a potential race condition\n> in the pg_replication_origin_drop API\n>\n> The current code of pg_replication_origin_drop looks like:\n> ====\n> roident = replorigin_by_name(name, false);\n> Assert(OidIsValid(roident));\n>\n> replorigin_drop(roident, true);\n> ====\n>\n> Users cannot deliberately drop a non-existent origin\n> (replorigin_by_name passes missing_ok = false) but there is still a\n> small window where concurrent processes may be able to call\n> replorigin_drop for the same valid roident.\n>\n> Locking within replorigin_drop guards against concurrent drops so the\n> 1st execution will succeed, but then the 2nd execution would give\n> internal cache error: elog(ERROR, \"cache lookup failed for replication\n> origin with oid %u\", roident);\n>\n> Some ideas to fix this include:\n> 1. Do nothing except write a comment about this in the code. The\n> internal ERROR is not ideal for a user API there is no great harm\n> done.\n> 2. Change the behavior of replorigin_drop to be like\n> replorigin_drop_IF_EXISTS, so the 2nd execution of this race would\n> silently do nothing when it finds the roident is already gone.\n> 3. 
Same as 2, but make the NOP behavior more explicit by introducing a\n> new \"missing_ok\" parameter for replorigin_drop.\n>\n\nHow about if we call replorigin_by_name() inside replorigin_drop after\nacquiring the lock? Wouldn't that close this race condition? We are\ndoing something similar for pg_replication_origin_advance().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Feb 2021 17:47:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Wed, Feb 3, 2021 at 11:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> How about if we call replorigin_by_name() inside replorigin_drop after\n> acquiring the lock? Wouldn't that close this race condition? We are\n> doing something similar for pg_replication_origin_advance().\n>\n\nYes, that seems ok.\n\nI wonder if it is better to isolate that locked portion\n(replyorigin_by_name + replorigin_drop) so that in addition to being\ncalled from pg_replication_origin_drop, we can call it internally from\nPG code to safely drop the origins.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 4 Feb 2021 15:27:34 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Thu, Feb 4, 2021 at 9:57 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Feb 3, 2021 at 11:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> >\n> > How about if we call replorigin_by_name() inside replorigin_drop after\n> > acquiring the lock? Wouldn't that close this race condition? 
We are\n> > doing something similar for pg_replication_origin_advance().\n> >\n>\n> Yes, that seems ok.\n>\n> I wonder if it is better to isolate that locked portion\n> (replyorigin_by_name + replorigin_drop) so that in addition to being\n> called from pg_replication_origin_drop, we can call it internally from\n> PG code to safely drop the origins.\n>\n\nYeah, I think that would be really good.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 4 Feb 2021 11:13:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Thu, Feb 4, 2021 at 4:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Feb 4, 2021 at 9:57 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Wed, Feb 3, 2021 at 11:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > How about if we call replorigin_by_name() inside replorigin_drop after\n> > > acquiring the lock? Wouldn't that close this race condition? 
We are\n> > > doing something similar for pg_replication_origin_advance().\n> > >\n> >\n> > Yes, that seems ok.\n> >\n> > I wonder if it is better to isolate that locked portion\n> > (replyorigin_by_name + replorigin_drop) so that in addition to being\n> > called from pg_replication_origin_drop, we can call it internally from\n> > PG code to safely drop the origins.\n> >\n>\n> Yeah, I think that would be really good.\n\nPSA a patch which I think implements what we are talking about.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 4 Feb 2021 19:00:56 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Thu, Feb 4, 2021 at 1:31 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> PSA a patch which I think implements what we are talking about.\n>\n\nThis doesn't seem correct to me. Have you tested that the patch\nresolves the problem reported originally? Because the lockmode\n(RowExclusiveLock) you have used in the patch will allow multiple\ncallers to acquire at the same time. The other thing I don't like\nabout this is that first, it acquires lock in the function\nreplorigin_drop_by_name and then again we acquire the same lock in a\ndifferent mode in replorigin_drop.\n\nWhat I was imagining was to have a code same as replorigin_drop with\nthe first parameter as the name instead of id and additionally, it\nwill check the existence of origin by replorigin_by_name after\nacquiring the lock. So you can move all the common code from\nreplorigin_drop (starting from restart till end leaving table_close)\nto a separate function say replorigin_drop_guts and then call it from\nboth replorigin_drop and replorigin_drop_by_name.\n\nNow, I have also thought to directly change replorigin_drop but this\nis an exposed API so let's keep it as it is because some extensions\nmight be using it. 
We can anyway later drop it if required.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 4 Feb 2021 15:49:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Thu, Feb 4, 2021 at 9:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Feb 4, 2021 at 1:31 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > PSA a patch which I think implements what we are talking about.\n> >\n>\n> This doesn't seem correct to me. Have you tested that the patch\n> resolves the problem reported originally? Because the lockmode\n> (RowExclusiveLock) you have used in the patch will allow multiple\n> callers to acquire at the same time. The other thing I don't like\n> about this is that first, it acquires lock in the function\n> replorigin_drop_by_name and then again we acquire the same lock in a\n> different mode in replorigin_drop.\n>\n> What I was imagining was to have a code same as replorigin_drop with\n> the first parameter as the name instead of id and additionally, it\n> will check the existence of origin by replorigin_by_name after\n> acquiring the lock. So you can move all the common code from\n> replorigin_drop (starting from restart till end leaving table_close)\n> to a separate function say replorigin_drop_guts and then call it from\n> both replorigin_drop and replorigin_drop_by_name.\n>\n> Now, I have also thought to directly change replorigin_drop but this\n> is an exposed API so let's keep it as it is because some extensions\n> might be using it. 
We can anyway later drop it if required.\n>\n\nPSA patch updated per above suggestions.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 5 Feb 2021 15:16:12 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Fri, Feb 5, 2021 at 9:46 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> PSA patch updated per above suggestions.\n>\n\nThanks, I have tested your patch and before the patch, I was getting\nerrors like \"tuple concurrently deleted\" or \"cache lookup failed for\nreplication origin with oid 1\" and after the patch, I am getting\n\"replication origin \"origin-1\" does not exist\" which is clearly better\nand user-friendly.\n\nBefore Patch\npostgres=# select pg_replication_origin_drop('origin-1');\nERROR: tuple concurrently deleted\npostgres=# select pg_replication_origin_drop('origin-1');\nERROR: cache lookup failed for replication origin with oid 1\n\nAfter Patch\npostgres=# select pg_replication_origin_drop('origin-1');\nERROR: replication origin \"origin-1\" does not exist\n\nI wonder why you haven't changed the usage of the existing\nreplorigin_drop in the code? I have changed the same, added few\ncomments, ran pgindent, and updated the commit message in the\nattached.\n\nI am not completely sure whether we should retire replorigin_drop or just\nkeep it for backward compatibility? What do you think? Anybody else\nhas any opinion?\n\nFor others, the purpose of this patch is to \"make\npg_replication_origin_drop safe against concurrent drops.\". Currently,\nwe get the origin id from the name and then drop the origin by taking\nExclusiveLock on ReplicationOriginRelationId. 
So, two concurrent\nsessions can get the id from the name at the same time, and then when\nthey try to drop the origin, one of the sessions will get either\n\"tuple concurrently deleted\" or \"cache lookup failed for replication\norigin ..\".\n\nTo prevent this race condition we do the entire operation under lock.\nThis obviates the need for replorigin_drop() API but we have kept it\nfor backward compatibility.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 5 Feb 2021 12:31:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Fri, Feb 5, 2021 at 6:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 5, 2021 at 9:46 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > PSA patch updated per above suggestions.\n> >\n>\n> Thanks, I have tested your patch and before the patch, I was getting\n> errors like \"tuple concurrently deleted\" or \"cache lookup failed for\n> replication origin with oid 1\" and after the patch, I am getting\n> \"replication origin \"origin-1\" does not exist\" which is clearly better\n> and user-friendly.\n>\n> Before Patch\n> postgres=# select pg_replication_origin_drop('origin-1');\n> ERROR: tuple concurrently deleted\n> postgres=# select pg_replication_origin_drop('origin-1');\n> ERROR: cache lookup failed for replication origin with oid 1\n>\n> After Patch\n> postgres=# select pg_replication_origin_drop('origin-1');\n> ERROR: replication origin \"origin-1\" does not exist\n>\n> I wonder why you haven't changed the usage of the existing\n> replorigin_drop in the code? 
I have changed the same, added few\n> comments, ran pgindent, and updated the commit message in the\n> attached.\n\nYou are right.\n\nThe goal of this patch was to fix pg_replication_origin_drop, but\nwhile focussed on fixing that, I forgot the same call pattern was also\nin the DropSubscription.\n\n>\n> I am not completely sure whether we should retire replorigin_drop or just\n> keep it for backward compatibility? What do you think? Anybody else\n> has any opinion?\n\nIt is still good code, but just not being used atm.\n\nI don't know what is the PG convention for dead code - to remove it\nimmediately at first sight, or to leave it lying around if it still\nmight have future usefulness?\nPersonally, I would leave it, if only because it seems a less radical\nchange from the current HEAD code to keep the existing function\nsignature.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 5 Feb 2021 19:20:20 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Fri, Feb 5, 2021 at 1:50 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Feb 5, 2021 at 6:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Feb 5, 2021 at 9:46 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > PSA patch updated per above suggestions.\n> > >\n> >\n> > Thanks, I have tested your patch and before the patch, I was getting\n> > errors like \"tuple concurrently deleted\" or \"cache lookup failed for\n> > replication origin with oid 1\" and after the patch, I am getting\n> > \"replication origin \"origin-1\" does not exist\" which is clearly better\n> > and user-friendly.\n> >\n> > Before Patch\n> > postgres=# select pg_replication_origin_drop('origin-1');\n> > ERROR: tuple concurrently deleted\n> > postgres=# select pg_replication_origin_drop('origin-1');\n> > ERROR: cache lookup failed for replication origin with oid 1\n> >\n> > After Patch\n> > postgres=# select pg_replication_origin_drop('origin-1');\n> > ERROR: replication origin \"origin-1\" does not exist\n> >\n> > I wonder why you haven't changed the usage of the existing\n> > replorigin_drop in the code? I have changed the same, added few\n> > comments, ran pgindent, and updated the commit message in the\n> > attached.\n>\n> You are right.\n>\n> The goal of this patch was to fix pg_replication_origin_drop, but\n> while focussed on fixing that, I forgot the same call pattern was also\n> in the DropSubscription.\n>\n> >\n> > I am not completely sure whether we should retire replorigin_drop or just\n> > keep it for backward compatibility? What do you think? Anybody else\n> > has any opinion?\n>\n> It is still good code, but just not being used atm.\n>\n> I don't know what is the PG convention for dead code - to remove it\n> immediately at first sight, or to leave it lying around if it still\n> might have future usefulness?\n> Personally, I would leave it, if only because it seems a less radical\n> change from the current HEAD code to keep the existing function\n> signature.\n>\n\nI am mostly worried about the extensions outside pg-core. For example,\non a quick search, it seems there are a few such usages in\npglogical [1][2]. Then, I see a similar usage pattern (search by name\nand then drop) in one of the pglogical [3].\n\n[1] - https://github.com/2ndQuadrant/pglogical/issues/160\n[2] - https://github.com/2ndQuadrant/pglogical/issues/124\n[3] - https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_functions.c\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 5 Feb 2021 14:44:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Fri, Feb 5, 2021, at 4:01 AM, Amit Kapila wrote:\n> I am not completely sure whether we should retire replorigin_drop or just\n> keep it for backward compatibility? What do you think? 
Anybody else\n> has any opinion?\nWe could certainly keep some code for backward compatibility, however, we have\nto consider if it is (a) an exposed API and/or (b) a critical path. We break\nseveral extensions every release due to Postgres extensibility. For (a), it is\nnot an exposed function, I mean, we are not changing\n`pg_replication_origin_drop`. Hence, there is no need to keep it. In (b), we\ncould risk slowing down some critical paths that we decide to keep the old\nfunction and create a new one that contains additional features. It is not the\ncase for this function. It is rare that an extension does not have a few #ifdef\nif it supports multiple Postgres versions. IMO we should keep as little code as\npossible into the core in favor of maintainability.\n\n- replorigin_drop(roident, true);\n+ replorigin_drop_by_name(name, false /* missing_ok */ , true /* nowait */ );\n\nA modern IDE would certainly show you the function definition that allows you\nto check what each parameter value is without having to go back and forth. I\nsaw a few occurrences of this pattern in the source code and IMO it could be\nused when it is not obvious what that value means. Booleans are easier to\nfigure out, however, sometimes integer and text are not.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Fri, 05 Feb 2021 10:14:23 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Fri, Feb 5, 2021 at 6:45 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Fri, Feb 5, 2021, at 4:01 AM, Amit Kapila wrote:\n>\n> I am not completely whether we should retire replorigin_drop or just\n> keep it for backward compatibility? What do you think? Anybody else\n> has any opinion?\n>\n> We could certainly keep some code for backward compatibility, however, we have\n> to consider if it is (a) an exposed API and/or (b) a critical path. We break\n> several extensions every release due to Postgres extensibility. For (a), it is\n> not an exposed function, I mean, we are not changing\n> `pg_replication_origin_drop`. Hence, there is no need to keep it. In (b), we\n> could risk slowing down some critical paths that we decide to keep the old\n> function and create a new one that contains additional features. It is not the\n> case for this function. 
It is rare that an extension does not have a few #ifdef\n> if it supports multiple Postgres versions. IMO we should keep as little code as\n> possible into the core in favor of maintainability.\n>\n\nYeah, that makes sense. I was a bit worried about pglogical but I think they\ncan easily update it if required, so removed as per your suggestion.\nPetr, any opinion on this matter? I am planning to push this early\nnext week (by Tuesday) unless you or someone else think it is not a\ngood idea.\n\n> - replorigin_drop(roident, true);\n> + replorigin_drop_by_name(name, false /* missing_ok */ , true /* nowait */ );\n>\n> A modern IDE would certainly show you the function definition that allows you\n> to check what each parameter value is without having to go back and forth. I\n> saw a few occurrences of this pattern in the source code and IMO it could be\n> used when it is not obvious what that value means. Booleans are easier to\n> figure out, however, sometimes integer and text are not.\n>\n\nFair enough, removed in the attached patch.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 6 Feb 2021 11:59:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On 06/02/2021 07:29, Amit Kapila wrote:\n> On Fri, Feb 5, 2021 at 6:45 PM Euler Taveira <euler@eulerto.com> wrote:\n>> On Fri, Feb 5, 2021, at 4:01 AM, Amit Kapila wrote:\n>>\n>> I am not completely sure whether we should retire replorigin_drop or just\n>> keep it for backward compatibility? What do you think? Anybody else\n>> has any opinion?\n>>\n>> We could certainly keep some code for backward compatibility, however, we have\n>> to consider if it is (a) an exposed API and/or (b) a critical path. We break\n>> several extensions every release due to Postgres extensibility. For (a), it is\n>> not an exposed function, I mean, we are not changing\n>> `pg_replication_origin_drop`. 
Hence, there is no need to keep it. In (b), we\n>> could risk slowing down some critical paths that we decide to keep the old\n>> function and create a new one that contains additional features. It is not the\n>> case for this function. It is rare that an extension does not have a few #ifdef\n>> if it supports multiple Postgres versions. IMO we should keep as little code as\n>> possible into the core in favor of maintainability.\n>>\n> Yeah, that makes. I was a bit worried about pglogical but I think they\n> can easily update it if required, so removed as per your suggestion.\n> Petr, any opinion on this matter? I am planning to push this early\n> next week (by Tuesday) unless you or someone else think it is not a\n> good idea.\n>\n>> - replorigin_drop(roident, true);\n>> + replorigin_drop_by_name(name, false /* missing_ok */ , true /* nowait */ );\n>>\n>> A modern IDE would certainly show you the function definition that allows you\n>> to check what each parameter value is without having to go back and forth. I\n>> saw a few occurrences of this pattern in the source code and IMO it could be\n>> used when it is not obvious what that value means. 
Booleans are easier to\n>> figure out, however, sometimes integer and text are not.\n>>\n> Fair enough, removed in the attached patch.\n\n\nTo be fair the logical replication framework is full of these comments \nso it's pretty natural to add them to new code as well, but I agree with \nEuler that it's unnecessary with any reasonable development tooling.\n\nThe patch as posted looks good to me, as an extension author I normally \nhave origin cached by id, so the api change means I have to do name \nlookup now, but given this is just for drop, it does not really matter.\n\n-- \nPetr\n\n\n\n", "msg_date": "Sat, 6 Feb 2021 10:56:01 +0100", "msg_from": "Petr Jelinek <pjmodos@pjmodos.net>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Sat, Feb 6, 2021 at 3:26 PM Petr Jelinek <pjmodos@pjmodos.net> wrote:\n>\n> On 06/02/2021 07:29, Amit Kapila wrote:\n> > On Fri, Feb 5, 2021 at 6:45 PM Euler Taveira <euler@eulerto.com> wrote:\n> >> - replorigin_drop(roident, true);\n> >> + replorigin_drop_by_name(name, false /* missing_ok */ , true /* nowait */ );\n> >>\n> >> A modern IDE would certainly show you the function definition that allows you\n> >> to check what each parameter value is without having to go back and forth. I\n> >> saw a few occurrences of this pattern in the source code and IMO it could be\n> >> used when it is not obvious what that value means. 
Booleans are easier to\n> >> figure out, however, sometimes integer and text are not.\n> >>\n> > Fair enough, removed in the attached patch.\n>\n>\n> To be fair the logical replication framework is full of these comments\n> so it's pretty natural to add them to new code as well, but I agree with\n> Euler that it's unnecessary with any reasonable development tooling.\n>\n> The patch as posted looks good to me,\n>\n\nThanks, but today again testing this API, I observed that we can still\nget \"tuple concurrently deleted\" because we are releasing the lock on\nReplicationOriginRelationId at the end of API replorigin_drop_by_name.\nSo there is no guarantee that invalidation reaches other backend doing\nthe same operation. I think we need to keep the lock till the end of\nxact as we do in other drop operations (see DropTableSpace, dropdb).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 6 Feb 2021 17:47:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Sat, Feb 6, 2021 at 5:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Feb 6, 2021 at 3:26 PM Petr Jelinek <pjmodos@pjmodos.net> wrote:\n> >\n> > On 06/02/2021 07:29, Amit Kapila wrote:\n> > > On Fri, Feb 5, 2021 at 6:45 PM Euler Taveira <euler@eulerto.com> wrote:\n> > >> - replorigin_drop(roident, true);\n> > >> + replorigin_drop_by_name(name, false /* missing_ok */ , true /* nowait */ );\n> > >>\n> > >> A modern IDE would certainly show you the function definition that allows you\n> > >> to check what each parameter value is without having to go back and forth. I\n> > >> saw a few occurrences of this pattern in the source code and IMO it could be\n> > >> used when it is not obvious what that value means. 
Booleans are easier to\n> > >> figure out, however, sometimes integer and text are not.\n> > >>\n> > > Fair enough, removed in the attached patch.\n> >\n> >\n> > To be fair the logical replication framework is full of these comments\n> > so it's pretty natural to add them to new code as well, but I agree with\n> > Euler that it's unnecessary with any reasonable development tooling.\n> >\n> > The patch as posted looks good to me,\n> >\n>\n> Thanks, but today again testing this API, I observed that we can still\n> get \"tuple concurrently deleted\" because we are releasing the lock on\n> ReplicationOriginRelationId at the end of API replorigin_drop_by_name.\n> So there is no guarantee that invalidation reaches other backend doing\n> the same operation. I think we need to keep the lock till the end of\n> xact as we do in other drop operations (see DropTableSpace, dropdb).\n>\n\nFixed the problem as mentioned above in the attached.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 8 Feb 2021 11:53:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Mon, Feb 8, 2021, at 3:23 AM, Amit Kapila wrote:\n> Fixed the problem as mentioned above in the attached.\nThis new version looks good to me.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Mon, 08 Feb 2021 13:51:08 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "> +void\n> +replorigin_drop_by_name(char *name, bool missing_ok, bool nowait)\n> +{\n> +\tRepOriginId roident;\n> +\tRelation\trel;\n> +\n> +\tAssert(IsTransactionState());\n> +\n
+\t/*\n> +\t * To interlock against concurrent drops, we hold ExclusiveLock on\n> +\t * pg_replication_origin throughout this function.\n> +\t */\n\nThis comment is now wrong though; should s/throughout.*/till xact commit/\nto reflect the new reality.\n\nI do wonder if this is going to be painful in some way, since the lock\nis now going to be much longer-lived. My impression is that it's okay,\nsince dropping an origin is not a very frequent occurrence. It is going\nto block pg_replication_origin_advance() with *any* origin, which\nacquires RowExclusiveLock on the same relation. If this is a problem,\nthen we could use LockSharedObject() in both places (and make it last\ntill end of xact for the case of deletion), instead of holding this\ncatalog-level lock till end of transaction.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"On the other flipper, one wrong move and we're Fatal Exceptions\"\n(T.U.X.: Term Unit X - http://www.thelinuxreview.com/TUX/)", "msg_date": "Mon, 8 Feb 2021 15:00:37 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Mon, Feb 8, 2021 at 11:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > +void\n> > +replorigin_drop_by_name(char *name, bool missing_ok, bool nowait)\n> > +{\n> > + RepOriginId roident;\n> > + Relation rel;\n> > +\n> > + Assert(IsTransactionState());\n> > +\n> > + /*\n> > + * To interlock against concurrent drops, we hold ExclusiveLock on\n> > + * pg_replication_origin throughout this function.\n> > + */\n>\n> This comment is now wrong though; should s/throughout.*/till xact commit/\n> to reflect the new reality.\n>\n\nRight, I'll fix in the next version.\n\n> I do wonder if this is going to be painful in some way, since the lock\n> is now going to be much longer-lived. My impression is that it's okay,\n> since dropping an origin is not a very frequent occurrence. 
It is going\n> to block pg_replication_origin_advance() with *any* origin, which\n> acquires RowExclusiveLock on the same relation. If this is a problem,\n> then we could use LockSharedObject() in both places (and make it last\n> till end of xact for the case of deletion), instead of holding this\n> catalog-level lock till end of transaction.\n>\n\nIIUC, you are suggesting to use lock for the particular origin instead\nof locking the corresponding catalog table in functions\npg_replication_origin_advance and replorigin_drop_by_name. If so, I\ndon't see any problem with the same but please note that we do take\ncatalog-level lock in replorigin_create() which would have earlier\nprevented create and drop to run concurrently. Having said that, I\ndon't see any problem with it because I think till the drop is\ncommitted, the create will see the corresponding row as visible and we\nwon't generate the wrong origin_id. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 9 Feb 2021 09:27:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Tue, Feb 9, 2021 at 9:27 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 8, 2021 at 11:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > > +void\n> > > +replorigin_drop_by_name(char *name, bool missing_ok, bool nowait)\n> > > +{\n> > > + RepOriginId roident;\n> > > + Relation rel;\n> > > +\n> > > + Assert(IsTransactionState());\n> > > +\n> > > + /*\n> > > + * To interlock against concurrent drops, we hold ExclusiveLock on\n> > > + * pg_replication_origin throughout this function.\n> > > + */\n> >\n> > This comment is now wrong though; should s/throughout.*/till xact commit/\n> > to reflect the new reality.\n> >\n>\n> Right, I'll fix in the next version.\n>\n\nFixed in the attached.\n\n> > I do wonder if this is going to be painful in some way, 
since the lock\n> > is now going to be much longer-lived. My impression is that it's okay,\n> > since dropping an origin is not a very frequent occurrence. It is going\n> > to block pg_replication_origin_advance() with *any* origin, which\n> > acquires RowExclusiveLock on the same relation. If this is a problem,\n> > then we could use LockSharedObject() in both places (and make it last\n> > till end of xact for the case of deletion), instead of holding this\n> > catalog-level lock till end of transaction.\n> >\n>\n> IIUC, you are suggesting to use lock for the particular origin instead\n> of locking the corresponding catalog table in functions\n> pg_replication_origin_advance and replorigin_drop_by_name. If so, I\n> don't see any problem with the same\n>\n\nI think it won't be that straightforward as we don't have origin_id.\nSo what we instead need to do is first to acquire a lock on\nReplicationOriginRelationId, get the origin_id, lock the specific\norigin and then re-check if the origin still exists. I feel some\nsimilar changes might be required in pg_replication_origin_advance.\nNow, we can do this optimization if we want but I am not sure if\norigin_drop would be a frequent enough operation that we add such an\noptimization. For now, I have added a note in the comments so that if\nwe find any such use case we can implement such optimization in the\nfuture. 
What do you think?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 9 Feb 2021 10:58:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On 2021-Feb-09, Amit Kapila wrote:\n\n> > IIUC, you are suggesting to use lock for the particular origin instead\n> > of locking the corresponding catalog table in functions\n> > pg_replication_origin_advance and replorigin_drop_by_name.\n\nRight.\n\n> I think it won't be that straightforward as we don't have origin_id.\n> So what we instead need to do is first to acquire a lock on\n> ReplicationOriginRelationId, get the origin_id, lock the specific\n> origin and then re-check if the origin still exists. I feel some\n> similar changes might be required in pg_replication_origin_advance.\n\nHmm, ok.\n\n> Now, we can do this optimization if we want but I am not sure if\n> origin_drop would be a frequent enough operation that we add such an\n> optimization. For now, I have added a note in the comments so that if\n> we find any such use case we can implement such optimization in the\n> future. What do you think?\n\nBy all means let's get the bug fixed. Then, in another patch, we can\noptimize further, if there really is a problem.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Siempre hay que alimentar a los dioses, aunque la tierra está seca\" (Orual)", "msg_date": "Tue, 9 Feb 2021 07:46:37 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Tue, Feb 9, 2021 at 4:16 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Feb-09, Amit Kapila wrote:\n>\n> > Now, we can do this optimization if we want but I am not sure if\n> > origin_drop would be a frequent enough operation that we add such an\n> > optimization. 
For now, I have added a note in the comments so that if\n> > we find any such use case we can implement such optimization in the\n> > future. What do you think?\n>\n> By all means let's get the bug fixed.\n>\n\nI am planning to push this in HEAD only as there is no user reported\nproblem and this is actually more about giving correct information to\nthe user rather than some misleading message. Do you see any need to\nback-patch this change?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 9 Feb 2021 16:41:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On 2021-Feb-09, Amit Kapila wrote:\n\n> On Tue, Feb 9, 2021 at 4:16 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > By all means let's get the bug fixed.\n> \n> I am planning to push this in HEAD only as there is no user reported\n> problem and this is actually more about giving correct information to\n> the user rather than some misleading message. Do you see any need to\n> back-patch this change?\n\nmaster-only sounds OK.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Cuando mañana llegue pelearemos segun lo que mañana exija\" (Mowgli)\n\n\n", "msg_date": "Tue, 9 Feb 2021 09:23:53 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" }, { "msg_contents": "On Tue, Feb 9, 2021 at 5:53 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Feb-09, Amit Kapila wrote:\n>\n> > On Tue, Feb 9, 2021 at 4:16 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > > By all means let's get the bug fixed.\n> >\n> > I am planning to push this in HEAD only as there is no user reported\n> > problem and this is actually more about giving correct information to\n> > the user rather than some misleading message. 
Do you see any need to\n> > back-patch this change?\n>\n> master-only sounds OK.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 10 Feb 2021 08:03:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_drop API potential race condition" } ]
[ { "msg_contents": "Hi,\n\n${subject} happened while executing ${attached query} at regression\ndatabase, using 14dev (commit\nd5a83d79c9f9b660a6a5a77afafe146d3c8c6f46) and produced ${attached\nstack trace}.\n\nSadly just loading the regression database and executing this query is\nnot enough to reproduce. Not sure what else I can do to help with this\none.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL", "msg_date": "Wed, 27 Jan 2021 01:52:16 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "FailedAssertion in heap_index_delete_tuples at heapam.c:7220" }, { "msg_contents": "On Tue, Jan 26, 2021 at 10:52 PM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n> ${subject} happened while executing ${attached query} at regression\n> database, using 14dev (commit\n> d5a83d79c9f9b660a6a5a77afafe146d3c8c6f46) and produced ${attached\n> stack trace}.\n\nI see the bug: gistprunepage() calls\nindex_compute_xid_horizon_for_tuples() (which ultimately calls the\nheapam.c callback for heap_index_delete_tuples()) with an empty array,\nwhich we don't expect. The similar code within _hash_vacuum_one_page()\nalready only calls index_compute_xid_horizon_for_tuples() when\nndeletable > 0.\n\nThe fix is obvious: Bring gistprunepage() in line with\n_hash_vacuum_one_page(). 
I'll go push a fix for that now.\n\nThanks for the report!\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 26 Jan 2021 23:09:14 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion in heap_index_delete_tuples at heapam.c:7220" }, { "msg_contents": "On Wed, Jan 27, 2021 at 2:09 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Jan 26, 2021 at 10:52 PM Jaime Casanova\n> <jcasanov@systemguards.com.ec> wrote:\n> > ${subject} happened while executing ${attached query} at regresssion\n> > database, using 14dev (commit\n> > d5a83d79c9f9b660a6a5a77afafe146d3c8c6f46) and produced ${attached\n> > stack trace}.\n>\n> I see the bug: gistprunepage() calls\n> index_compute_xid_horizon_for_tuples() (which ultimately calls the\n> heapam.c callback for heap_index_delete_tuples()) with an empty array,\n> which we don't expect. The similar code within _hash_vacuum_one_page()\n> already only calls index_compute_xid_horizon_for_tuples() when\n> ndeletable > 0.\n>\n> The fix is obvious: Bring gistprunepage() in line with\n> _hash_vacuum_one_page(). I'll go push a fix for that now.\n>\n\nThanks\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Wed, 27 Jan 2021 09:10:00 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Re: FailedAssertion in heap_index_delete_tuples at heapam.c:7220" } ]
[ { "msg_contents": "While reading pg_rewind code I found two things that could speed up pg_rewind.\r\nAttached are the patches.\r\n\r\nFirst one: pg_rewind would fsync the whole pgdata directory on the target by default,\r\nbut that is a waste since usually just part of the files/directories on\r\nthe target are modified. Other files on the target should have been flushed\r\nsince pg_rewind requires a clean shutdown before doing the real work. This\r\nwould help the scenario that the target postgres instance includes millions of\r\nfiles, which has been seen in a real environment.\r\n\r\nThere are several things that may need further discussions:\r\n\r\n1. PG_FLUSH_DATA_WORKS was introduced as \"Define PG_FLUSH_DATA_WORKS if we have an implementation for pg_flush_data\",\r\n but now the code guarded by it is just pre_sync_fname() relevant so we might want\r\n to rename it as HAVE_PRE_SYNC kind of name?\r\n\r\n2. Pre_sync_fname() implementation\r\n\r\n The code looks like this:\r\n #if defined(HAVE_SYNC_FILE_RANGE)\r\n (void) sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WRITE);\r\n #elif defined(USE_POSIX_FADVISE) && defined(POSIX_FADV_DONTNEED)\r\n (void) posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);\r\n\r\n I’m a bit suspicious about calling posix_fadvise() with POSIX_FADV_DONTNEED.\r\n I did not check the Linux Kernel code but according to the man\r\n page I suspect that this option might cause the kernel to evict the related kernel\r\n pages from the page cache, which might not be something we expect. This is\r\n not a big issue since sync_file_range() should exist on many widely used Linux.\r\n\r\n Also I’m not sure how much we could benefit from the pre_sync code. Also note if the\r\n directory has a lot of files or the IO is fast, pre_sync_fname() might slow down\r\n the process instead. 
The reasons are: If there are a lot of files it is possible that we need\r\n to read the already-synced-and-evicted inode from disk (by open()-ing) after rewinding since\r\n the inode cache in Linux Kernel is limited; also if the IO is fast and the kernel does background\r\n dirty page flushes quickly, pre_sync_fname() might just waste cpu cycles.\r\n\r\n A better solution might be to launch a separate pthread and do fsync one by one\r\n when pg_rewind finishes handling one file. pg_basebackup could use the solution also.\r\n\r\n Anyway this is independent of this patch.\r\n\r\nSecond one is use copy_file_range() for the local rewind case to replace read()+write().\r\nThis introduces copy_file_range() check and HAVE_COPY_FILE_RANGE so other\r\ncode could use copy_file_range() if needed. copy_file_range() was introduced\r\nin high-version Linux Kernel, in low-version Linux or other Unix-like OS mmap()\r\nmight be better than read()+write() but copy_file_range() is more interesting\r\ngiven that it could skip the data copying in some file systems - this could benefit more\r\non Linux fs on network-based block storage.\r\n\r\nRegards,\r\nPaul", "msg_date": "Wed, 27 Jan 2021 09:18:48 +0000", "msg_from": "Paul Guo <guopa@vmware.com>", "msg_from_op": true, "msg_subject": "Two patches to speed up pg_rewind." }, { "msg_contents": "On Wed, Jan 27, 2021 at 09:18:48AM +0000, Paul Guo wrote:\n> Second one is use copy_file_range() for the local rewind case to replace read()+write().\n> This introduces copy_file_range() check and HAVE_COPY_FILE_RANGE so other\n> code could use copy_file_range() if needed. 
copy_file_range() was introduced\n> In high-version Linux Kernel, in low-version Linux or other Unix-like OS mmap()\n> might be better than read()+write() but copy_file_range() is more interesting\n> given that it could skip the data copying in some file systems - this could benefit more\n> on Linux fs on network-based block storage.\n\nHave you done some measurements?\n--\nMichael", "msg_date": "Thu, 28 Jan 2021 16:31:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Two patches to speed up pg_rewind." }, { "msg_contents": "On Jan 28, 2021, at 3:31 PM, Michael Paquier <michael@paquier.xyz> wrote:\n\nOn Wed, Jan 27, 2021 at 09:18:48AM +0000, Paul Guo wrote:\nSecond one is use copy_file_range() for the local rewind case to replace read()+write().\nThis introduces copy_file_range() check and HAVE_COPY_FILE_RANGE so other\ncode could use copy_file_range() if needed. copy_file_range() was introduced\nIn high-version Linux Kernel, in low-version Linux or other Unix-like OS mmap()\nmight be better than read()+write() but copy_file_range() is more interesting\ngiven that it could skip the data copying in some file systems - this could benefit more\non Linux fs on network-based block storage.\n\nHave you done some measurements?\n\nI did not test pg_rewind but for patch 2, I tested copy_file_range() vs read()+write()\non XFS in Ubuntu 20.04.1 when working on the patches,\n\nHere is the test time of 1G file (fully populated with random data) copy. 
The test is a simple C program.\n\ncopy_file_range() loop (actually it finished after one call) + fsync()\n0m0.048s\n\nFor read()+write() loop with read/write buffer size 32K + fsync()\n0m5.004s\n\nFor patch 1, it skips syncing less files so it surely benefits the performance.", "msg_date": "Tue, 2 Feb 2021 09:55:43 +0000", "msg_from": "Paul Guo <guopa@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Two patches to speed up pg_rewind." }, { "msg_contents": "Refactored the code a bit along with fixes. Manually tested them on centos\r\n& Ubuntu (the later has copy_file_range())\r\n\r\nFor the first patch, actually I have some concerns. My assumption is that\r\nthe target pg_data directory should be fsync-ed already. 
This should be\r\ncorrect normally but there is one scenario: a cleanly-shutdown database’s\r\npgdata directory was copied to another directory, in this case the new pgdata\r\nis not fsync-ed - I’m not sure if that exists in real production environment or not,\r\nbut even considering this we could still use the optimization for the case that\r\ncalls ensureCleanShutdown() since this ensures a pgdata fsync on the target.", "msg_date": "Fri, 19 Feb 2021 02:33:13 +0000", "msg_from": "Paul Guo <guopa@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Two patches to speed up pg_rewind." }, { "msg_contents": "> On 2021/2/19, 10:33 AM, \"Paul Guo\" <guopa@vmware.com> wrote:\n\n> Refactored the code a bit along with fixes. Manually tested them on centos\n> & Ubuntu (the later has copy_file_range())\n\n> For the first patch, actually I have some concerns. My assumption is that\n> the target pg_data directory should be fsync-ed already. This should be\n> correct normally but there is one scenario: a cleanly-shutdown database’s\n> pgdata directory was copied to another directory, in this case the new pgdata\n> is not fsync-ed - I’m not sure if that exists in real production environment or not,\n> but even considering this we could still use the optimization for the case that\n> calls ensureCleanShutdown() since this ensures a pgdata fsync on the target.\nDid some small modification and rebased the code. See attached for the new version.", "msg_date": "Fri, 28 May 2021 05:30:51 +0000", "msg_from": "Paul Guo <guopa@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Two patches to speed up pg_rewind." }, { "msg_contents": "On Fri, May 28, 2021 at 05:30:51AM +0000, Paul Guo wrote:\n> Did some small modification and rebased the code. See attached for the new version.\n\nRegarding patch 0002, I find the inter-dependencies between\nwrite_target_range() and copy_target_range() a bit confusing. 
There\nis also a bit of duplication for dry_run, fetch_done and the progress\nreporting. Perhaps it would be cleaner to have a fallback\nimplementation of copy_file_range() in src/port/ and reduce the\nfootprint of the patch in pg_rewind?\n\nNote: FreeBSD 13~ has support for copy_file_range(), nice.. Adding\nThomas in CC in case I am missing something.\n--\nMichael", "msg_date": "Wed, 2 Jun 2021 14:20:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Two patches to speed up pg_rewind." }, { "msg_contents": "On Wed, Jun 2, 2021 at 5:20 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Note: FreeBSD 13~ has support for copy_file_range(), nice.. Adding\n> Thomas in CC in case I am missing something.\n\nYeah, so at least in theory Linux and FreeBSD can now both do tricks\nlike pushing copies down to network filesystems, COW file systems, and\n(I believe not actually done by anyone yet, could be wrong) SCSI and\nNVMe devices (they have commands like XCOPY that can copy block ranges\ndirectly). I read a few things about all that, and I had a trivial\npatch to try to use it in the places in the backend where we copy\nfiles (like cloning a database with CREATE DATABASE and moving files\nwith ALTER TABLE SET TABLESPACE), but I hadn't got as far as actually\ntrying it on any interesting filesystems or figuring out any really\ngood uses for it. FWIW, here it is:\n\nhttps://github.com/postgres/postgres/compare/master...macdice:copy_file_range\n\nThe main thing I noticed was that Linux < 5.3 can fail with EXDEV if\nyou cross a filesystem boundary, is that something we need to worry\nabout there?\n\n\n", "msg_date": "Wed, 2 Jun 2021 18:20:30 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two patches to speed up pg_rewind." 
}, { "msg_contents": "On Wed, Jun 02, 2021 at 06:20:30PM +1200, Thomas Munro wrote:\n> The main thing I noticed was that Linux < 5.3 can fail with EXDEV if\n> you cross a filesystem boundary, is that something we need to worry\n> about there?\n\nHmm. Good point. That may justify having a switch to control that.\n--\nMichael", "msg_date": "Wed, 2 Jun 2021 17:02:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Two patches to speed up pg_rewind." }, { "msg_contents": "On Wed, Jun 02, 2021 at 05:02:10PM +0900, Michael Paquier wrote:\n> On Wed, Jun 02, 2021 at 06:20:30PM +1200, Thomas Munro wrote:\n> > The main thing I noticed was that Linux < 5.3 can fail with EXDEV if\n> > you cross a filesystem boundary, is that something we need to worry\n> > about there?\n> \n> Hmm. Good point. That may justify having a switch to control that.\n\nPaul, the patch set still needs some work, so I am switching it as\nwaiting on author. I am pretty sure that we had better have a\nfallback implementation of copy_file_range() in src/port/, and that we\nare going to need an extra switch in pg_rewind to allow users to\nbypass copy_file_range()/EXDEV if they do a local rewind operation\nacross different FSes with a kernel < 5.3.\n--\nMichael", "msg_date": "Thu, 17 Jun 2021 16:18:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Two patches to speed up pg_rewind." }, { "msg_contents": "No worry I’m work on this.\n\nOn 2021/6/17, 3:18 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\nOn Wed, Jun 02, 2021 at 05:02:10PM +0900, Michael Paquier wrote:\n> On Wed, Jun 02, 2021 at 06:20:30PM +1200, Thomas Munro wrote:\n> > The main thing I noticed was that Linux < 5.3 can fail with EXDEV if\n> > you cross a filesystem boundary, is that something we need to worry\n> > about there?\n>\n> Hmm. Good point. 
That may justify having a switch to control that.\n\nPaul, the patch set still needs some work, so I am switching it as\nwaiting on author. I am pretty sure that we had better have a\nfallback implementation of copy_file_range() in src/port/, and that we\nare going to need an extra switch in pg_rewind to allow users to\nbypass copy_file_range()/EXDEV if they do a local rewind operation\nacross different FSes with a kernel < 5.3.\n--\nMichael", "msg_date": "Thu, 17 Jun 2021 07:42:03 +0000", "msg_from": "Paul Guo <guopa@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Two patches to speed up pg_rewind." }, { "msg_contents": "On Thu, Jun 17, 2021 at 3:19 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jun 02, 2021 at 05:02:10PM +0900, Michael Paquier wrote:\n> > On Wed, Jun 02, 2021 at 06:20:30PM +1200, Thomas Munro wrote:\n> > > The main thing I noticed was that Linux < 5.3 can fail with EXDEV if\n> > > you cross a filesystem boundary, is that something we need to worry\n> > > about there?\n> >\n> > Hmm. 
Good point. That may justify having a switch to control that.\n>\n> Paul, the patch set still needs some work, so I am switching it as\n> waiting on author. I am pretty sure that we had better have a\n> fallback implementation of copy_file_range() in src/port/, and that we\n> are going to need an extra switch in pg_rewind to allow users to\n> bypass copy_file_range()/EXDEV if they do a local rewind operation\n> across different FSes with a kernel < 5.3.\n> --\n\nI did modification on the copy_file_range() patch yesterday by simply falling\nback to read()+write() but I think it could be improved further.\n\nWe may add a function to determine two file/path are copy_file_range()\ncapable or not (using POSIX standard statvfs():f_fsid?) - that could be used\nby other copy_file_range() users although in the long run the function\nis not needed.\nAnd even having this we may still need the fallback code if needed.\n\n- For pg_rewind, we may just determine that ability once on src/dst pgdata, but\n since there might be soft link (tablespace/wal) in pgdata so we should still\n allow fallback for those non copy_file_range() capable file copying.\n- Also it seems that sometimes copy_file_range() could return ENOTSUP/EOPNOTSUP\n (the file system does not support that and the kernel does not fall\nback to simple copying?)\n although this is not documented and it seems not usual?\n\nAny idea?\n\n\n", "msg_date": "Tue, 22 Jun 2021 11:08:07 +0800", "msg_from": "Paul Guo <paulguo@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two patches to speed up pg_rewind." 
}, { "msg_contents": "On Tue, Jun 22, 2021 at 11:08 AM Paul Guo <paulguo@gmail.com> wrote:\n>\n> On Thu, Jun 17, 2021 at 3:19 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Jun 02, 2021 at 05:02:10PM +0900, Michael Paquier wrote:\n> > > On Wed, Jun 02, 2021 at 06:20:30PM +1200, Thomas Munro wrote:\n> > > > The main thing I noticed was that Linux < 5.3 can fail with EXDEV if\n> > > > you cross a filesystem boundary, is that something we need to worry\n> > > > about there?\n> > >\n> > > Hmm. Good point. That may justify having a switch to control that.\n> >\n> > Paul, the patch set still needs some work, so I am switching it as\n> > waiting on author. I am pretty sure that we had better have a\n> > fallback implementation of copy_file_range() in src/port/, and that we\n> > are going to need an extra switch in pg_rewind to allow users to\n> > bypass copy_file_range()/EXDEV if they do a local rewind operation\n> > across different FSes with a kernel < 5.3.\n> > --\n>\n> I did modification on the copy_file_range() patch yesterday by simply falling\n> back to read()+write() but I think it could be improved further.\n>\n> We may add a function to determine two file/path are copy_file_range()\n> capable or not (using POSIX standard statvfs():f_fsid?) 
- that could be used\n> by other copy_file_range() users although in the long run the function\n> is not needed.\n> And even having this we may still need the fallback code if needed.\n>\n> - For pg_rewind, we may just determine that ability once on src/dst pgdata, but\n> since there might be soft link (tablespace/wal) in pgdata so we should still\n> allow fallback for those non copy_fie_range() capable file copying.\n> - Also it seems that sometimes copy_file_range() could return ENOTSUP/EOPNOTSUP\n> (the file system does not support that and the kernel does not fall\n> back to simple copying?)\n> although this is not documented and it seems not usual?\n>\n> Any idea?\n\nI modified the copy_file_range() patch using the below logic:\n\nIf the first call of copy_file_range() fails with errno EXDEV or\nENOTSUP, pg_rewind\nwould not use copy_file_range() in rest code, and if copy_file_range() fails we\nfallback to use the previous read()+write() code logic for the file.", "msg_date": "Thu, 5 Aug 2021 18:18:03 +0800", "msg_from": "Paul Guo <paulguo@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two patches to speed up pg_rewind." }, { "msg_contents": "On Thu, Aug 05, 2021 at 06:18:03PM +0800, Paul Guo wrote:\n> I modified the copy_file_range() patch using the below logic:\n> \n> If the first call of copy_file_range() fails with errno EXDEV or\n> ENOTSUP, pg_rewind\n> would not use copy_file_range() in rest code, and if copy_file_range() fails we\n> fallback to use the previous read()+write() code logic for the file.\n\nI have looked at 0001, and I don't really like it. One argument\nagainst this approach is that if pg_rewind fails in the middle of its\noperation then we would have done a set of fsync() for nothing, with\nthe data folder still unusable. 
I would be curious to see some\nnumbers to see how much it matters with many physical files (say cases\nwith thousands of small relations?).\n\n+/* Define PG_FLUSH_DATA_WORKS if we have an implementation for pg_flush_data */\n+#if defined(HAVE_SYNC_FILE_RANGE)\n+#define PG_FLUSH_DATA_WORKS 1\n+#elif !defined(WIN32) && defined(MS_ASYNC)\n+#define PG_FLUSH_DATA_WORKS 1\n+#elif defined(USE_POSIX_FADVISE) && defined(POSIX_FADV_DONTNEED)\n+#define PG_FLUSH_DATA_WORKS 1\n\nThis is wrong for the code frontend on platforms that may finish by\nusing MS_ASYNC, no? There is no such implementation in file_utils.c\nbut there is one in fd.c.\n\n+ fsync_fname(\"global/pg_control\", false);\n+ fsync_fname(\"backup_label\", false);\n+ if (access(\"recovery.conf\", F_OK) == 0)\n+ fsync_fname(\"recovery.conf\", false);\n+ if (access(\"postgresql.auto.conf\", F_OK) == 0)\n+ fsync_fname(\"postgresql.auto.conf\", false);\n\nThis list is wrong on various aspects, no? This would miss custom\nconfiguration files, or included files.\n\n- if (showprogress)\n- pg_log_info(\"syncing target data directory\");\n- sync_target_dir();\n-\n /* Also update the standby configuration, if requested. */\n if (writerecoveryconf && !dry_run)\n\tWriteRecoveryConfig(conn, datadir_target,\n\t\t GenerateRecoveryConfig(conn, NULL));\n\n+ if (showprogress)\n+ pg_log_info(\"syncing target data directory\");\n+ perform_sync(filemap);\n\nWhy inverting the order here?\n\n+ * Pre Linux 5.3 does not allow cross-fs copy_file_range() call\n+ * (return EXDEV). Some fs do not support copy_file_range() (return\n+ * ENOTSUP). Here we explicitly disable copy_file_range() for the\n+ * two scenarios. For other failures we still allow subsequent\n+ * copy_file_range() try.\n+ */\n+ if (errno == ENOTSUP || errno == EXDEV)\n+ copy_file_range_support = false;\nAre you sure that it is right to always cancel the support of\ncopy_file_range() after it does not work once? 
Couldn't it be\npossible that things work properly depending on the tablespace being\nworked on by pg_rewind?\n\nHaving the facility for copy_file_range() in pg_rewind is not nice at\nthe end, and we are going to need a run-time check to fallback\ndynamically to an equivalent implementation on errno={EXDEV,ENOTSUP}.\nHmm. What about locating all that in file_utils.c instead, with a\nbrand new routine name (pg_copy_file_range would be one choice)? We\nstill need the ./configure check, except that the conditions to use\nthe fallback implementation is in this routine, aka fallback on EXDEV,\nENOTSUP or !HAVE_COPY_FILE_RANGE. The backend may need to solve this\nproblem at some point, but logging and fd handling will likely just\nlocate that in fd.c, so having one routine for the purpose of all\nfrontends looks like a step in the right direction.\n--\nMichael", "msg_date": "Tue, 17 Aug 2021 16:47:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Two patches to speed up pg_rewind." }, { "msg_contents": "On Tue, Aug 17, 2021 at 04:47:44PM +0900, Michael Paquier wrote:\n> One argument\n> against this approach is that if pg_rewind fails in the middle of its\n> operation then we would have done a set of fsync() for nothing, with\n> the data folder still unusable.\n\nI was skimming through the patch this morning, and that argument does\nnot hold much water as the flushes happen in the same place. Seems\nlike I got confused, sorry about that.\n\n> I would be curious to see some\n> numbers to see how much it matters with many physical files (say cases\n> with thousands of small relations?).\n\nFor this one, one simple idea would be to create a lot of fake\nrelation files with a pre-determined size and check how things\nchange.\n--\nMichael", "msg_date": "Wed, 18 Aug 2021 09:43:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Two patches to speed up pg_rewind." 
}, { "msg_contents": "Thanks for reviewing, please see the replies below.\n\nOn Tue, Aug 17, 2021 at 3:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Aug 05, 2021 at 06:18:03PM +0800, Paul Guo wrote:\n> > I modified the copy_file_range() patch using the below logic:\n> >\n> > If the first call of copy_file_range() fails with errno EXDEV or\n> > ENOTSUP, pg_rewind\n> > would not use copy_file_range() in rest code, and if copy_file_range() fails we\n> > fallback to use the previous read()+write() code logic for the file.\n>\n> I have looked at 0001, and I don't really like it. One argument\n> against this approach is that if pg_rewind fails in the middle of its\n> operation then we would have done a set of fsync() for nothing, with\n> the data folder still unusable. I would be curious to see some\n> numbers to see how much it matters with many physical files (say cases\n> with thousands of small relations?).\n> +/* Define PG_FLUSH_DATA_WORKS if we have an implementation for pg_flush_data */\n> +#if defined(HAVE_SYNC_FILE_RANGE)\n> +#define PG_FLUSH_DATA_WORKS 1\n> +#elif !defined(WIN32) && defined(MS_ASYNC)\n> +#define PG_FLUSH_DATA_WORKS 1\n> +#elif defined(USE_POSIX_FADVISE) && defined(POSIX_FADV_DONTNEED)\n> +#define PG_FLUSH_DATA_WORKS 1\n>\n> This is wrong for the code frontend on platforms that may finish by\n> using MS_ASYNC, no? There is no such implementation in file_utils.c\n> but there is one in fd.c.\n\nYes, it seems that we need to add the MS_ASYNC code (refer that in fd.c) in\nsrc/common/file_utils.c:pre_sync_fname().\n\n> + fsync_fname(\"global/pg_control\", false);\n> + fsync_fname(\"backup_label\", false);\n> + if (access(\"recovery.conf\", F_OK) == 0)\n> + fsync_fname(\"recovery.conf\", false);\n> + if (access(\"postgresql.auto.conf\", F_OK) == 0)\n> + fsync_fname(\"postgresql.auto.conf\", false);\n>\n> This list is wrong on various aspects, no? 
This would miss custom\n> configuration files, or included files.\n\nI did not understand this. Can you please clarify? Anyway let me\nexplain, here we fsync\nthese files additionally because pg_rewind (possibly) modified these\nfiles after rewinding.\nThese files may not be handled/logged in filemap\n\npg_control action is FILE_ACTION_NONE\nbackup_label is excluded\nrecovery.conf is not logged in filemap\npostgresql.auto.conf may be logged but let's fsync this file for safety.\n\n>\n> - if (showprogress)\n> - pg_log_info(\"syncing target data directory\");\n> - sync_target_dir();\n> -\n> /* Also update the standby configuration, if requested. */\n> if (writerecoveryconf && !dry_run)\n> WriteRecoveryConfig(conn, datadir_target,\n> GenerateRecoveryConfig(conn, NULL));\n>\n> + if (showprogress)\n> + pg_log_info(\"syncing target data directory\");\n> + perform_sync(filemap);\n>\n> Why inverting the order here?\n\nWe need to synchronize the recoveryconf change finally in perform_sync().\n\n>\n> + * Pre Linux 5.3 does not allow cross-fs copy_file_range() call\n> + * (return EXDEV). Some fs do not support copy_file_range() (return\n> + * ENOTSUP). Here we explicitly disable copy_file_range() for the\n> + * two scenarios. For other failures we still allow subsequent\n> + * copy_file_range() try.\n> + */\n> + if (errno == ENOTSUP || errno == EXDEV)\n> + copy_file_range_support = false;\n> Are you sure that it is right to always cancel the support of\n> copy_file_range() after it does not work once? Couldn't it be\n> possible that things work properly depending on the tablespace being\n> worked on by pg_rewind?\n\nIdeally we should retry when first running into a symlink (e.g.\ntablespace, wal),\nbut it seems not easy to do gracefully.\n\n> Having the facility for copy_file_range() in pg_rewind is not nice at\n> the end, and we are going to need a run-time check to fallback\n> dynamically to an equivalent implementation on errno={EXDEV,ENOTSUP}.\n> Hmm. 
What about locating all that in file_utils.c instead, with a\n> brand new routine name (pg_copy_file_range would be one choice)? We\n> still need the ./configure check, except that the conditions to use\n> the fallback implementation is in this routine, aka fallback on EXDEV,\n> ENOTSUP or !HAVE_COPY_FILE_RANGE. The backend may need to solve this\n> problem at some point, but logging and fd handling will likely just\n> locate that in fd.c, so having one routine for the purpose of all\n> frontends looks like a step in the right direction.\n\nYes, seems better to make it generic.\n\n\n", "msg_date": "Fri, 20 Aug 2021 11:33:33 +0800", "msg_from": "Paul Guo <paulguo@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two patches to speed up pg_rewind." }, { "msg_contents": "On Fri, Aug 20, 2021 at 11:33:33AM +0800, Paul Guo wrote:\n> On Tue, Aug 17, 2021 at 3:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > + fsync_fname(\"global/pg_control\", false);\n> > + fsync_fname(\"backup_label\", false);\n> > + if (access(\"recovery.conf\", F_OK) == 0)\n> > + fsync_fname(\"recovery.conf\", false);\n> > + if (access(\"postgresql.auto.conf\", F_OK) == 0)\n> > + fsync_fname(\"postgresql.auto.conf\", false);\n> >\n> > This list is wrong on various aspects, no? This would miss custom\n> > configuration files, or included files.\n> \n> I did not understand this. Can you please clarify? Anyway let me\n> explain, here we fsync\n> these files additionally because pg_rewind (possibly) modified these\n> files after rewinding.\n> These files may not be handled/logged in filemap\n> \n> pg_control action is FILE_ACTION_NONE\n> backup_label is excluded\n> recovery.conf is not logged in filemap\n> postgresql.auto.conf may be logged but let's fsync this file for safety.\n\nI am referring to new files copied from the origin cluster to the\ntarget, as pg_rewind copies everything. 
postgresql.conf could for\nexample include a foo.conf, which would be ignored here.\n\nAnyway, it seems to me that the copy_file_range() bits of the patch\nwith the copy optimizations are more interesting that the flush\noptimizations, so I would tend to focus on that first.\n\n>> + * Pre Linux 5.3 does not allow cross-fs copy_file_range() call\n>> + * (return EXDEV). Some fs do not support copy_file_range() (return\n>> + * ENOTSUP). Here we explicitly disable copy_file_range() for the\n>> + * two scenarios. For other failures we still allow subsequent\n>> + * copy_file_range() try.\n>> + */\n>> + if (errno == ENOTSUP || errno == EXDEV)\n>> + copy_file_range_support = false;\n>> Are you sure that it is right to always cancel the support of\n>> copy_file_range() after it does not work once? Couldn't it be\n>> possible that things work properly depending on the tablespace being\n>> worked on by pg_rewind?\n> \n> Ideally we should retry when first running into a symlink (e.g.\n> tablespace, wal),\n> but it seems not easy to do gracefully.\n\nMy guess here is that we should just remove this flag, and attempt\ncopy_range_file() for each file. That brings an extra point that\nrequires benchmarking, actually.\n\n>> Having the facility for copy_file_range() in pg_rewind is not nice at\n>> the end, and we are going to need a run-time check to fallback\n>> dynamically to an equivalent implementation on errno={EXDEV,ENOTSUP}.\n>> Hmm. What about locating all that in file_utils.c instead, with a\n>> brand new routine name (pg_copy_file_range would be one choice)? We\n>> still need the ./configure check, except that the conditions to use\n>> the fallback implementation is in this routine, aka fallback on EXDEV,\n>> ENOTSUP or !HAVE_COPY_FILE_RANGE. 
The backend may need to solve this\n>> problem at some point, but logging and fd handling will likely just\n>> locate that in fd.c, so having one routine for the purpose of all\n>> frontends looks like a step in the right direction.\n> \n> Yes, seems better to make it generic.\n\nOne disadvantage of having a fallback implementation in file_utils.c,\nnow that I look closely, is that we would make the progress reporting\nless verbose as we now call write_target_range() every 8kB for each\nblock. So on this point, your approach keeps the code simpler, while\nmy suggestion makes this logic more complicated.\n\nAnyway, I think that it would be good to do more benchmarking for this\npatch first, and the patch you are proposing is enough for that.\nThere are two scenarios I can think of as useful to look at, to emulate\ncases where pg_rewind has to copy a set of files:\n- A small number of large files (say 3~5 files of 600MB~1GB).\n- Many small files (say 8MB~16MB with 200~ files).\nThose numbers can be tweaked up or down, as long as a difference can\nbe measured while avoiding noise in runtimes.\n\nI only have at hand now a system with ext4 on a 5.10 kernel, that does\nnot have any acceleration techniques as far as I know, so I have just\nmeasured that this introduces no regressions. But it would be good to\nalso see how much performance we'd gain with something that can take\nadvantage of copy_file_range() in full. One good case would be XFS\nwith reflink=1 and see how fast we go with the two cases from above\nwith and without the patch. Anything I have read on the matter suggests\nthat the copy will be faster, but it would be good to have numbers to\nconfirm.\n\ncopy_file_range_support in the patch is also something I am worrying\nabout, as it could lead to incorrect decisions depending on the order\nof the paths processed depending on the mount points of the origin\nand/or the target. 
Removing it means that it would be important to\nmeasure the case where we use copy_file_range(), but fail all (or some\nof!) its calls on EXDEV. That would imply a test to compare the\nperformance with and without the patch where the origin and target \nfolders are on different mount points. I looked at some kernel code\nfrom 5.2 and anything looks cheap enough with a lookup at the inodes\nof the source and targets to check the mount points involved.\n\nSo, what do you think?\n\nBy the way, the progress reporting is wrong in this new code path:\n\n+bool\n+copy_target_range(int srcfd, off_t begin, size_t size)\n+{\n+ ssize_t copylen;\n+\n+ /* update progress report */\n+ fetch_done += size;\n+ progress_report(false);\n\nIf copy_file_range() returns false, say because of EXDEV, we would\nfinish by counting the same size twice, with write_target_range()\nreporting this amount of size a second time for each block of 8kB.\n--\nMichael", "msg_date": "Fri, 20 Aug 2021 14:23:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Two patches to speed up pg_rewind." } ]
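The per-call fallback suggested in this thread — drop the global copy_file_range_support flag and fall back to a plain read/write loop whenever a single call fails with EXDEV or ENOTSUP — can be sketched as follows. This is an illustrative sketch only, not the pg_rewind C code: it uses Python's os.copy_file_range (Linux, Python 3.8+) as a stand-in for the syscall, and the name pg_copy_file_range simply mirrors the routine name proposed in the thread.

```python
import errno
import os


def pg_copy_file_range(src_fd, dst_fd, nbytes, chunk=8192):
    """Copy nbytes from src_fd to dst_fd at the current file offsets.

    Try the kernel fast path first; if it fails with EXDEV (pre-5.3
    cross-filesystem copy), ENOTSUP (no filesystem support) or ENOSYS
    (old kernel), fall back to a plain read/write loop for the
    remaining bytes.  The decision is made per call, not cached in a
    global flag, so later files can still try the fast path.
    """
    copied = 0
    fast_path = getattr(os, "copy_file_range", None)  # absent on non-Linux
    while fast_path is not None and copied < nbytes:
        try:
            n = fast_path(src_fd, dst_fd, nbytes - copied)
        except OSError as e:
            if e.errno in (errno.EXDEV, errno.ENOTSUP, errno.ENOSYS):
                break  # fall back below; file offsets are still consistent
            raise
        if n == 0:
            return copied  # EOF on the source file
        copied += n
    # Portable fallback: read/write in 8kB blocks by default.
    while copied < nbytes:
        buf = os.read(src_fd, min(chunk, nbytes - copied))
        if not buf:
            break
        off = 0
        while off < len(buf):  # handle partial writes
            off += os.write(dst_fd, buf[off:])
        copied += len(buf)
    return copied
```

Because copy_file_range() advances both file offsets as it copies, a mid-stream failure simply lets the fallback loop continue from where the fast path stopped, which also avoids the double progress accounting noted at the end of the thread.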
[ { "msg_contents": "Here is the simple patch,\r\n\r\ndiff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c\r\nindex dce882012e..0391699423 100644\r\n--- a/src/backend/commands/createas.c\r\n+++ b/src/backend/commands/createas.c\r\n@@ -552,7 +552,7 @@ intorel_startup(DestReceiver *self, int operation, TupleDesc typeinfo)\r\n myState->rel = intoRelationDesc;\r\n myState->reladdr = intoRelationAddr;\r\n myState->output_cid = GetCurrentCommandId(true);\r\n- myState->ti_options = TABLE_INSERT_SKIP_FSM;\r\n+ myState->ti_options = TABLE_INSERT_SKIP_FSM | TABLE_INSERT_FROZEN;\r\n\r\nMatView code already does this and COPY does this if specified. I’m not sure how\r\nthe community thinks about this. Actually personally I expect more about the\r\nall-visible setting due to TABLE_INSERT_FROZEN since I could more easily use index only scan\r\nif we create an index on a table created with CTAS, else people have to use index only scan\r\nafter vacuum. If people do not expect freeze could we at least introduce an option to\r\nspecify the visibility during inserting?\r\n\r\nRegards,\r\nPaul", "msg_date": "Wed, 27 Jan 2021 09:28:48 +0000", "msg_from": "Paul Guo <guopa@vmware.com>", "msg_from_op": true, "msg_subject": "Freeze the inserted tuples during CTAS?" }, { "msg_contents": "Hi,\n\nI confirm that my analytic workflows often do the CTAS and VACUUM of the\nrelation right after, before the index creation, to mark stuff as\nall-visible for IOS to work. 
Freezing and marking as visible will help.\n\nOn Wed, Jan 27, 2021 at 12:29 PM Paul Guo <guopa@vmware.com> wrote:\n\n> Here is the simple patch,\n>\n> diff --git a/src/backend/commands/createas.c\n> b/src/backend/commands/createas.c\n> index dce882012e..0391699423 100644\n> --- a/src/backend/commands/createas.c\n> +++ b/src/backend/commands/createas.c\n> @@ -552,7 +552,7 @@ intorel_startup(DestReceiver *self, int operation,\n> TupleDesc typeinfo)\n>    myState->rel = intoRelationDesc;\n>    myState->reladdr = intoRelationAddr;\n>    myState->output_cid = GetCurrentCommandId(true);\n> -   myState->ti_options = TABLE_INSERT_SKIP_FSM;\n> +   myState->ti_options = TABLE_INSERT_SKIP_FSM | TABLE_INSERT_FROZEN;\n>\n> MatView code already does this and COPY does this if specified. I’m not\n> sure how\n> does the community think about this. Actually personally I expect more\n> about the\n> all-visible setting due to TABLE_INSERT_FROZEN since I could easier use\n> index only scan\n> if we create an index and table use CTAS, else people have to use index\n> only scan\n> after vacuum. If people do not expect freeze could we at least introduce a\n> option to\n> specify the visibility during inserting?\n>\n> Regards,\n> Paul\n\n\n\n-- \nDarafei \"Komяpa\" Praliaskouski\nOSM BY Team - http://openstreetmap.by/\n", "msg_date": "Wed, 27 Jan 2021 12:33:29 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": false, "msg_subject": "Re: Freeze the inserted tuples during CTAS?" }, { "msg_contents": "On Jan 27, 2021, at 5:33 PM, Darafei Komяpa Praliaskouski <me@komzpa.net<mailto:me@komzpa.net>> wrote:\r\n\r\nHi,\r\n\r\nI confirm that my analytic workflows often do the CTAS and VACUUM of the relation right after, before the index creation, to mark stuff as all-visible for IOS to work. Freezing and marking as visible will help.\r\n\r\nThanks for letting me know there is such a real case in production environment.\r\nI attached the short patch. 
If there are no other concerns, I will log the patch on the commitfest.\r\n\r\n\r\n-Paul\r\n\r\n\r\nOn Wed, Jan 27, 2021 at 12:29 PM Paul Guo <guopa@vmware.com<mailto:guopa@vmware.com>> wrote:\r\nHere is the simple patch,\r\n\r\ndiff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c\r\nindex dce882012e..0391699423 100644\r\n--- a/src/backend/commands/createas.c\r\n+++ b/src/backend/commands/createas.c\r\n@@ -552,7 +552,7 @@ intorel_startup(DestReceiver *self, int operation, TupleDesc typeinfo)\r\n myState->rel = intoRelationDesc;\r\n myState->reladdr = intoRelationAddr;\r\n myState->output_cid = GetCurrentCommandId(true);\r\n- myState->ti_options = TABLE_INSERT_SKIP_FSM;\r\n+ myState->ti_options = TABLE_INSERT_SKIP_FSM | TABLE_INSERT_FROZEN;\r\n\r\nMatView code already does this and COPY does this if specified. I’m not sure how\r\ndoes the community think about this. Actually personally I expect more about the\r\nall-visible setting due to TABLE_INSERT_FROZEN since I could easier use index only scan\r\nif we create an index and table use CTAS, else people have to use index only scan\r\nafter vacuum. If people do not expect freeze could we at least introduce a option to\r\nspecify the visibility during inserting?\r\n\r\nRegards,\r\nPaul", "msg_date": "Fri, 19 Feb 2021 02:39:50 +0000", "msg_from": "Paul Guo <guopa@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Freeze the inserted tuples during CTAS?" }, { "msg_contents": "Attached is the v2 version that fixes a test failure due to plan change (bitmap index scan -> index only scan).", "msg_date": "Sun, 21 Feb 2021 07:45:43 +0000", "msg_from": "Paul Guo <guopa@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Freeze the inserted tuples during CTAS?" 
}, { "msg_contents": "On Sun, Feb 21, 2021 at 4:46 PM Paul Guo <guopa@vmware.com> wrote:\n>\n> Attached is the v2 version that fixes a test failure due to plan change (bitmap index scan -> index only scan).\n\nI think this is a good idea.\n\nBTW, how much does this patch affect the CTAS performance? I expect\nit's negligible but if there is much performance degradation due to\npopulating the visibility map, it might be better to provide a way to\ndisable it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 3 Mar 2021 14:35:28 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Freeze the inserted tuples during CTAS?" }, { "msg_contents": "> On Mar 3, 2021, at 1:35 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n>> On Sun, Feb 21, 2021 at 4:46 PM Paul Guo <guopa@vmware.com> wrote:\r\n>> Attached is the v2 version that fixes a test failure due to plan change (bitmap index scan -> index only scan).\r\n\r\n> I think this is a good idea.\r\n\r\n> BTW, how much does this patch affect the CTAS performance? I expect\r\n> it's negligible but if there is much performance degradation due to\r\n> populating the visibility map, it might be better to provide a way to\r\n> disable it.\r\n\r\nYes, this is a good suggestion. I did a quick test yesterday.\r\n\r\nConfiguration: shared_buffers = 1280M and the test system memory is 7G.\r\n\r\nTest queries:\r\n checkpoint;\r\n \\timing\r\n create table t1 (a, b, c, d) as select i,i,i,i from generate_series(1,20000000) i;\r\n \\timing\r\n select pg_size_pretty(pg_relation_size('t1'));\r\n\r\nHere are the running times:\r\n\r\nHEAD : Time: 10299.268 ms (00:10.299) + 1537.876 ms (00:01.538) \r\nPatch : Time: 12257.044 ms (00:12.257) + 14.247 ms \r\n\r\nThe table size is 800+MB so the table should be all in the buffer. 
I was surprised\r\nto see the patch increases the CTAS time by 19.x%, and also it is not better than\r\n\"CTAS+VACUUM\" on the HEAD version. In theory the visibility map buffer change should\r\nnot affect that much. I looked at related code again (heap_insert()). I believe\r\nthe overhead could decrease along with some discussed CTAS optimization\r\nsolutions (multi-insert, or raw-insert, etc).\r\n\r\nI tested 'copy' also. COPY FREEZE does not involve much more overhead than COPY\r\naccording to the experiment results as below. COPY uses multi-insert. Seems there is\r\nno other difference from CTAS when writing a new table.\r\n\r\nCOPY TO + VACUUM\r\n\tTime: 8826.995 ms (00:08.827) + 1599.260 ms (00:01.599)\r\nCOPY TO FREEZE + VACUUM\r\n\tTime: 8836.107 ms (00:08.836) + 13.581 ms\r\n\r\nSo maybe think about doing freeze in CTAS after optimizing the CTAS performance\r\nlater?\r\n\r\nBy the way, ‘REFRESH MatView’ does freeze by default. Matview is quite similar to CTAS.\r\nI did test it also and the conclusion is similar to that of CTAS. Not sure why FREEZE was\r\nenabled though, maybe I missed something?\r\n\r\n", "msg_date": "Wed, 10 Mar 2021 06:57:49 +0000", "msg_from": "Paul Guo <guopa@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Freeze the inserted tuples during CTAS?" }, { "msg_contents": "On Wed, Mar 10, 2021 at 3:57 PM Paul Guo <guopa@vmware.com> wrote:\n>\n> > On Mar 3, 2021, at 1:35 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >> On Sun, Feb 21, 2021 at 4:46 PM Paul Guo <guopa@vmware.com> wrote:\n> >> Attached is the v2 version that fixes a test failure due to plan change (bitmap index scan -> index only scan).\n>\n> > I think this is a good idea.\n>\n> > BTW, how much does this patch affect the CTAS performance? I expect\n> > it's negligible but If there is much performance degradation due to\n> > populating visibility map, it might be better to provide a way to\n> > disable it.\n>\n> Yes, this is a good suggestion. 
I did a quick test yesterday.\n>\n> Configuration: shared_buffers = 1280M and the test system memory is 7G.\n>\n> Test queries:\n> checkpoint;\n> \\timing\n> create table t1 (a, b, c, d) as select i,i,i,i from generate_series(1,20000000) i;\n> \\timing\n> select pg_size_pretty(pg_relation_size('t1'));\n>\n> Here are the running time:\n>\n> HEAD : Time: 10299.268 ms (00:10.299) + 1537.876 ms (00:01.538)\n> Patch : Time: 12257.044 ms (00:12.257) + 14.247 ms\n>\n> The table size is 800+MB so the table should be all in the buffer. I was surprised\n> to see the patch increases the CTAS time by 19.x%, and also it is not better than\n> \"CTAS+VACUUM\" on HEAD version. In theory the visibility map buffer change should\n> not affect that much. I looked at related code again (heap_insert()). I believe\n> the overhead could decrease along with some discussed CTAS optimization\n> solutions (multi-insert, or raw-insert, etc).\n>\n> I tested 'copy' also. The COPY FREEZE does not involve much overhead than COPY\n> according to the experiement results as below. COPY uses multi-insert. Seems there is\n> no other difference than CTAS when writing a new table.\n>\n> COPY TO + VACUUM\n> Time: 8826.995 ms (00:08.827) + 1599.260 ms (00:01.599)\n> COPY TO FREEZE + VACUUM\n> Time: 8836.107 ms (00:08.836) + 13.581 ms\n>\n> So maybe think about doing freeze in CTAS after optimizing the CTAS performance\n> later?\n\nThank you for testing. That's interesting.\n\nI've also done some benchmarks for CTAS (2GB table creation) and got\nsimilar results:\n\nPatched : 44 sec\nHEAD : 34 sec\n\nSince CREATE MATERIALIZED VIEW is also internally treated as CTAS, I\ngot similar results even for CREATE MATVIEW.\n\nAfter investigation, it seems to me that the cause of performance\ndegradation is that heap_insert() set PD_ALL_VISIBLE when inserting a\ntuple for the first time on the page (L2133 in heapam.c). 
This\nrequires every subsequent heap_insert() to pin a visibility map buffer\n(see RelationGetBufferForTuple()). This problem doesn't exist in\nheap_multi_insert() since it reads the vm buffer once to fill the heap\npage with tuples.\n\nGiven such relatively big performance degradation, it seems to me that\nwe should do some optimization for heap_insert() first. Otherwise, we\nwill end up imposing those costs on all users.\n\n> By the way, ‘REFRESH MatView’ does freeze by default. Matview is quite similar to CTAS.\n> I did test it also and the conclusion is similar to that of CTAS. Not sure why FREEZE was\n> enabled though, maybe I missed something?\n\nI’m not sure why setting the visibility map and PD_ALL_VISIBLE\nduring REFRESH MATVIEW is enabled by default. By commit 7db0cd2145 and\n39b66a91b, heap_insert() and heap_multi_insert() set visibility map\nbits if HEAP_INSERT_FROZEN is specified. Looking at the commit\nmessages, those changes seem to be intended for COPY FREEZE but they\nindeed affect REFRESH MATVIEW as well. But I could not find any discussion\nor mention of REFRESH MATVIEW in those threads or commit\nmessages.\n\nOne reason that might justify such behavior would be that materialized\nviews are read-only. Since visibility map bits are never cleared, even\nif there is some cost to set the visibility map during the refresh,\nsetting VM bits and PD_ALL_VISIBLE at creation might win. On the other\nhand, a table created by CTAS is read-write. The user might not want\nto pay a cost when creating a table if the table is updated\nfrequently after creation. Not sure. That being said, I think this\nperformance degradation of REFRESH MATVIEW could be a problem. There\nis no way to avoid the degradation, and we can also rely on autovacuum\nto set visibility map bits on materialized views. 
I'll start a new\nthread to discuss that.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 11 Mar 2021 15:55:11 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Freeze the inserted tuples during CTAS?" }, { "msg_contents": "> to set visibility map bits on materialized views. I'll start a new\r\n> thread to discuss that.\r\n\r\nThanks. Also I withdrew the patch.\r\n\r\n", "msg_date": "Mon, 15 Mar 2021 14:33:24 +0000", "msg_from": "Paul Guo <guopa@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Freeze the inserted tuples during CTAS?" } ]
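The diagnosis in this thread can be put into a toy cost model: once heap_insert() marks a page PD_ALL_VISIBLE, every later single-tuple insert on that page has to pin the visibility map buffer again, while heap_multi_insert() touches the VM roughly once per filled page. The sketch below is a hypothetical illustration only (the tuples-per-page figure is an arbitrary assumption), not PostgreSQL code:

```python
def vm_pin_counts(n_tuples, tuples_per_page=100):
    """Rough model of visibility-map buffer pins under HEAP_INSERT_FROZEN.

    Returns (per_tuple_insert_pins, multi_insert_pins):
    - per-tuple path (CTAS): about one VM pin per heap_insert() call,
      because PD_ALL_VISIBLE is already set on the target page;
    - multi-insert path (COPY): about one VM access per heap page.
    """
    n_pages = -(-n_tuples // tuples_per_page)  # ceiling division
    return n_tuples, n_pages
```

With 20 million rows and about 100 tuples per page this model predicts on the order of 20,000,000 VM pins for the single-insert path versus 200,000 for the multi-insert path, which is consistent with COPY FREEZE showing almost no overhead while the CTAS patch regressed by roughly 19%.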
[ { "msg_contents": "\r\n\r\n> On Jan 27, 2021, at 19:41, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> \r\n> On Wed, Jan 27, 2021 at 4:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>>> On Wed, Jan 27, 2021 at 3:16 PM Bharath Rupireddy\r\n>>> <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n>>>> On Wed, Jan 27, 2021 at 3:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>>> So, I think the new syntax, ALTER SUBSCRIPTION .. ADD/DROP PUBLICATION\r\n>>> will refresh the new and existing publications.\r\n>> That sounds a bit unusual to me because when the user has specifically\r\n>> asked to just ADD Publication, we might refresh some existing\r\n>> Publication along with it?\r\n> \r\n> Hmm. That's correct. I also feel we should not touch the existing\r\n> publications, only the ones that are added/dropped should be\r\n> refreshed. Because there will be an overhead of a SQL with more\r\n> publications(in fetch_table_list) when AlterSubscription_refresh() is\r\n> called with all the existing publications. We could just pass in the\r\n> newly added/dropped publications to AlterSubscription_refresh().\r\n> \r\n> I don't see any problem if ALTER SUBSCRIPTION ... ADD PUBLICATION with\r\n> refresh true refreshes only the newly added publications, because what\r\n> we do in AlterSubscription_refresh() is that we fetch the tables\r\n> associated with the publications from the publisher, compare them with\r\n> the previously fetched tables from that publication and add the new\r\n> tables or remove the table that don't exist in that publication\r\n> anymore.\r\n> \r\n> For ALTER SUBSCRIPTION ... DROP PUBLICATION, also we can do the same\r\n> thing i.e. refreshes only the dropped publications.\r\n> \r\n> Thoughts?\r\n\r\nAgreed. We only need to refresh the added/dropped publications. 
Furthermore, for dropped publications we do not need the “copy_data” option, right?\r\n\r\n> With Regards,\r\n> Bharath Rupireddy.\r\n> EnterpriseDB: http://www.enterprisedb.com\r\n", "msg_date": "Wed, 27 Jan 2021 14:05:59 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Support ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION ... syntax" } ]
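The behavior agreed on in this thread — refresh only the added or dropped publications, leaving the pre-existing ones untouched — reduces to set arithmetic on the subscribed table lists. A hypothetical sketch follows (plain Python sets standing in for the result of fetch_table_list(); this is not the actual AlterSubscription_refresh() logic, and it ignores tables that remain published by other publications):

```python
def refresh_delta(subscribed_tables, changed_pub_tables, dropping):
    """Table-list delta when refreshing only the changed publications.

    subscribed_tables  -- tables the subscription currently tracks
    changed_pub_tables -- tables published by the added/dropped publications
    dropping           -- True for DROP PUBLICATION, False for ADD

    Returns (tables_to_start_syncing, tables_to_stop_syncing).  For the
    DROP case nothing new is added, which is why no copy_data option is
    needed for dropped publications, as noted above.
    """
    if dropping:
        return set(), subscribed_tables & changed_pub_tables
    # ADD: start syncing only tables of the new publications we lack.
    return changed_pub_tables - subscribed_tables, set()
```

Refreshing only the changed publications this way also avoids the overhead of running fetch_table_list() against the publisher with the full publication list.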
[ { "msg_contents": "I noticed that some of the newer compilers in the buildfarm\n(e.g., caiman, with gcc 11.0) whine about the definitions of\nrjulmdy() and rmdyjul() not quite matching their external\ndeclarations:\n\ninformix.c:516:23: warning: argument 2 of type `short int[3]' with mismatched bound [-Warray-parameter=]\n 516 | rjulmdy(date d, short mdy[3])\n | ~~~~~~^~~~~~\nIn file included from informix.c:10:\n../include/ecpg_informix.h:38:31: note: previously declared as `short int *'\n 38 | extern int rjulmdy(date, short *);\n | ^~~~~~~\ninformix.c:567:15: warning: argument 1 of type `short int[3]' with mismatched bound [-Warray-parameter=]\n 567 | rmdyjul(short mdy[3], date * d)\n | ~~~~~~^~~~~~\nIn file included from informix.c:10:\n../include/ecpg_informix.h:41:25: note: previously declared as `short int *'\n 41 | extern int rmdyjul(short *, date *);\n | ^~~~~~~\n\nThis isn't a bug really, since per the C spec these declarations\nare equivalent. But it'd be good to silence the warning before\nit gets any more common.\n\nThe most conservative thing to do would be to take the user-visible\nextern declarations as being authoritative, and change informix.c\nto match. I'm slightly tempted to do the opposite though, on the\ngrounds that showing the expected lengths of the arrays is useful.\nBut I wonder if anyone's compatibility checker tools would\n(mistakenly) classify that as an ABI break.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 27 Jan 2021 11:22:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Inconsistent function definitions in ECPG's informix.c" } ]
[ { "msg_contents": "Original thread is:\nhttps://www.postgresql.org/message-id/flat/196f1e1a-5464-ed07-ab3c-0c9920564af7%40postgrespro.ru\n\nFollowing Yugo's advice, I have split this patch into two:\n1. Extending the auto_explain extension to generate extended statistics in \ncase of bad selectivity estimation.\n2. Taking extended statistics into account when computing join selectivity.\n\n\nNow this thread will contain only patches for join selectivity estimation.\n\n> However,\n> IIUC, the clausesel patch uses only functional dependencies statistics for\n> improving join, so my question was about possibility to consider MCV in the\n> clausesel patch.\nSorry, I have no idea right now how to use MCV for better estimation \nof join selectivity.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 27 Jan 2021 19:51:32 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Improve join selectivity estimation using extended statistics" }, { "msg_contents": "Hi Konstantin,\n\nThanks for working on this! Using extended statistics to improve join\ncardinality estimates was definitely on my radar, and this patch seems\nlike a good start.\n\nI had two basic ideas about how we might improve join estimates:\n\n(a) use per-table extended statistics to estimate join conditions\n\n(b) invent multi-table extended statistics (requires inventing how to\nsample the tables in a coordinated way, etc.)\n\nThis patch aims to do (a) which is perfectly reasonable - I think we can\nachieve significant improvements this way. I have some ideas about (b),\nbut it seems harder and for a separate thread/patch.\n\n\nThe patch includes some *very* interesting ideas, but I think it does\nthem too late and at the wrong level of abstraction. 
I mean that:\n\n1) I don't think the code in clausesel.c should deal with extended\nstatistics directly - it requires far too much knowledge about different\ntypes of extended stats, what clauses are supported by them, etc.\nAllowing stats on expressions will make this even worse.\n\nBetter do that in extended_stats.c, like statext_clauselist_selectivity.\n\n2) in clauselist_selectivity_ext, functional dependencies are applied in\nthe part that processes remaining clauses, not estimated using extended\nstatistics. That seems a bit confusing, and I suspect it may lead to\nissues - for example, it only processes the clauses incrementally, in a\nparticular order. That probably affects the result, because it affects\nwhich functional dependencies we can apply.\n\nIn the example query that's not an issue, because it only has two Vars,\nso it either can't apply anything (with one Var) or it can apply\neverything (with two Vars). But with 3 or more Vars the order would\ncertainly matter, so it's problematic.\n\n\nMoreover, it seems a bit strange that this considers dependencies only\non the inner relation. Can't that lead to issues with different join\norders producing different cardinality estimates?\n\n\nI think a better approach would be to either modify the existing block\ndealing with extended stats for a single relation to also handle join\nconditions. Or perhaps we should invent a separate block, dealing with\n*pairs* of relations? 
And it should deal with *all* join clauses for\nthat pair of relations at once, not one by one.\n\nAs for the exact implementation, I'd imagine we call overall logic to be\nsomething like (for clauses on two joined relations):\n\n- pick a subset of clauses with the same type of extended statistics on\nboth sides (MCV, ndistinct, ...), repeat until we can't apply more\nstatistics\n\n- estimate remaining clauses either using functional dependencies or in\nthe regular (old) way\n\n\nAs for how to use other types of extended statistics, I think eqjoinsel\ncould serve as an inspiration. We should look for an MCV list and\nndistinct stats on both sides of the join (possibly on some subset of\nclauses), and then do the same thing eqjoinsel does, just with multiple\ncolumns.\n\nNote: I'm not sure what to do when we find the stats only on one side.\nPerhaps we should assume the other side does not have correlations and\nuse per-column statistics (seems reasonable), or maybe just not apply\nanything (seems a bit too harsh).\n\nAnyway, if there are some non-estimated clauses, we could try applying\nfunctional dependencies similarly to what this patch does. It's also\nconsistent with statext_clauselist_selectivity - that also tries to\napply MCV lists first, and only then we try functional dependencies.\n\n\nBTW, should this still rely on oprrest (e.g. F_EQSEL). That's the\nselectivity function for restriction (non-join) clauses, so maybe we\nshould be looking at oprjoin when dealing with joins? Not sure.\n\n\nOne bit that I find *very* interesting is the calc_joinrel_size_estimate\npart, with this comment:\n\n /*\n * Try to take in account functional dependencies between attributes\n * of clauses pushed-down to joined relations and retstrictlist\n * clause. 
Right now we consider only case of restrictlist consists of\n * one clause.\n */\n\nIf I understand the comment and the code after it, it essentially tries\nto apply extended statistics from both the join clauses and filters at\nthe relation level. That is, with a query like\n\n SELECT * FROM t1 JOIN t2 ON (t1.a = t2.a) WHERE t1.b = 10\n\nwe would be looking at statistics on t1(a,b), because we're interested\nin estimating conditional probability distribution\n\n P(t1.a = a? | t1.b = 10)\n\nI think that's extremely interesting and powerful, because it allows us\nto \"restrict\" the multi-column MCV lists, we could probably estimate\nnumber of distinct \"a\" values in rows with \"b=10\" like:\n\n ndistinct(a,b) / ndistinct(b)\n\nand do various interesting stuff like this.\n\nThat will require some improvements to the extended statistics code (to\nallow passing a list of conditions), but that's quite doable. I think\nthe code actually did something like that originally ;-)\n\n\nObviously, none of this is achievable for PG14, as we're in the middle\nof the last CF. But if you're interested in working on this for PG15,\nI'd love to cooperate on that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 11 Mar 2021 01:47:14 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Improve join selectivity estimation using extended statistics" }, { "msg_contents": "\n\nOn 11.03.2021 03:47, Tomas Vondra wrote:\n> Hi Konstantin,\n>\n> Thanks for working on this! 
Using extended statistics to improve join\n> cardinality estimates was definitely on my radar, and this patch seems\n> like a good start.\n>\n> I had two basic ideas about how we might improve join estimates:\n>\n> (a) use per-table extended statistics to estimate join conditions\n>\n> (b) invent multi-table extended statistics (requires inventing how to\n> sample the tables in a coordinated way, etc.)\n>\n> This patch aims to do (a) which is perfectly reasonable - I think we can\n> achieve significant improvements this way. I have some ideas about (b),\n> but it seems harder and for a separate thread/patch.\n>\n>\n> The patch includes some *very* interesting ideas, but I think it's does\n> them too late and at the wrong level of abstraction. I mean that:\n>\n> 1) I don't think the code in clausesel.c should deal with extended\n> statistics directly - it requires far too much knowledge about different\n> types of extended stats, what clauses are supported by them, etc.\n> Allowing stats on expressions will make this even worse.\n>\n> Better do that in extended_stats.c, like statext_clauselist_selectivity.\n>\n> 2) in clauselist_selectivity_ext, functional dependencies are applied in\n> the part that processes remaining clauses, not estimated using extended\n> statistics. That seems a bit confusing, and I suspect it may lead to\n> issues - for example, it only processes the clauses incrementally, in a\n> particular order. That probably affects the result, because it affects\n> which functional dependencies we can apply.\n>\n> In the example query that's not an issue, because it only has two Vars,\n> so it either can't apply anything (with one Var) or it can apply\n> everything (with two Vars). But with 3 or more Vars the order would\n> certainly matter, so it's problematic.\n>\n>\n> Moreover, it seems a bit strange that this considers dependencies only\n> on the inner relation. 
Can't that lead to issues with different join\n> orders producing different cardinality estimates?\n>\n>\n> I think a better approach would be to either modify the existing block\n> dealing with extended stats for a single relation to also handle join\n> conditions. Or perhaps we should invent a separate block, dealing with\n> *pairs* of relations? And it should deal with *all* join clauses for\n> that pair of relations at once, not one by one.\n>\n> As for the exact implementation, I'd imagine we call overall logic to be\n> something like (for clauses on two joined relations):\n>\n> - pick a subset of clauses with the same type of extended statistics on\n> both sides (MCV, ndistinct, ...), repeat until we can't apply more\n> statistics\n>\n> - estimate remaining clauses either using functional dependencies or in\n> the regular (old) way\n>\n>\n> As for how to use other types of extended statistics, I think eqjoinsel\n> could serve as an inspiration. We should look for an MCV list and\n> ndistinct stats on both sides of the join (possibly on some subset of\n> clauses), and then do the same thing eqjoinsel does, just with multiple\n> columns.\n>\n> Note: I'm not sure what to do when we find the stats only on one side.\n> Perhaps we should assume the other side does not have correlations and\n> use per-column statistics (seems reasonable), or maybe just not apply\n> anything (seems a bit too harsh).\n>\n> Anyway, if there are some non-estimated clauses, we could try applying\n> functional dependencies similarly to what this patch does. It's also\n> consistent with statext_clauselist_selectivity - that also tries to\n> apply MCV lists first, and only then we try functional dependencies.\n>\n>\n> BTW, should this still rely on oprrest (e.g. F_EQSEL). That's the\n> selectivity function for restriction (non-join) clauses, so maybe we\n> should be looking at oprjoin when dealing with joins? 
Not sure.\n>\n>\n> One bit that I find *very* interesting is the calc_joinrel_size_estimate\n> part, with this comment:\n>\n> /*\n> * Try to take in account functional dependencies between attributes\n> * of clauses pushed-down to joined relations and retstrictlist\n> * clause. Right now we consider only case of restrictlist consists of\n> * one clause.\n> */\n>\n> If I understand the comment and the code after it, it essentially tries\n> to apply extended statistics from both the join clauses and filters at\n> the relation level. That is, with a query like\n>\n> SELECT * FROM t1 JOIN t2 ON (t1.a = t2.a) WHERE t1.b = 10\n>\n> we would be looking at statistics on t1(a,b), because we're interested\n> in estimating conditional probability distribution\n>\n> P(t1.a = a? | t1.b = 10)\n>\n> I think that's extremely interesting and powerful, because it allows us\n> to \"restrict\" the multi-column MCV lists, we could probably estimate\n> number of distinct \"a\" values in rows with \"b=10\" like:\n>\n> ndistinct(a,b) / ndistinct(b)\n>\n> and do various interesting stuff like this.\n>\n> That will require some improvements to the extended statistics code (to\n> allow passing a list of conditions), but that's quite doable. I think\n> the code actually did something like that originally ;-)\n>\n>\n> Obviously, none of this is achievable for PG14, as we're in the middle\n> of the last CF. 
But if you're interested in working on this for PG15,\n> I'd love to cooperate on that.\n>\n>\n> regards\n>\nHi Tomas,\nThank you for review of my patch.\nMy primary attention was to implement some kid of adaptive query \noptimization based online_analyze extension and building extended \nstatistic on demand.\nI have change clausesel.c because right now extended statistic is not \nused for join selectivity estimation and manual or automatic adding such \nstatistic can help to\nchoose more efficient plan for queries with joins.\nI agree wit you that it can be done in better way, handling more use cases.\nI will be glad to cooperate with you in improving join selectivity \nestimation using extended statistic.\n\n\n\n", "msg_date": "Mon, 15 Mar 2021 18:41:49 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Improve join selectivity estimation using extended statistics" }, { "msg_contents": "On Mon, Mar 15, 2021 at 8:42 PM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n>\n>\n> On 11.03.2021 03:47, Tomas Vondra wrote:\n> > Hi Konstantin,\n> >\n> > Thanks for working on this! Using extended statistics to improve join\n> > cardinality estimates was definitely on my radar, and this patch seems\n> > like a good start.\n> >\n> > I had two basic ideas about how we might improve join estimates:\n> >\n> > (a) use per-table extended statistics to estimate join conditions\n> >\n> > (b) invent multi-table extended statistics (requires inventing how to\n> > sample the tables in a coordinated way, etc.)\n> >\n> > This patch aims to do (a) which is perfectly reasonable - I think we can\n> > achieve significant improvements this way. I have some ideas about (b),\n> > but it seems harder and for a separate thread/patch.\n> >\n> >\n> > The patch includes some *very* interesting ideas, but I think it's does\n> > them too late and at the wrong level of abstraction. 
I mean that:\n> >\n> > 1) I don't think the code in clausesel.c should deal with extended\n> > statistics directly - it requires far too much knowledge about different\n> > types of extended stats, what clauses are supported by them, etc.\n> > Allowing stats on expressions will make this even worse.\n> >\n> > Better do that in extended_stats.c, like statext_clauselist_selectivity.\n> >\n> > 2) in clauselist_selectivity_ext, functional dependencies are applied in\n> > the part that processes remaining clauses, not estimated using extended\n> > statistics. That seems a bit confusing, and I suspect it may lead to\n> > issues - for example, it only processes the clauses incrementally, in a\n> > particular order. That probably affects the result, because it affects\n> > which functional dependencies we can apply.\n> >\n> > In the example query that's not an issue, because it only has two Vars,\n> > so it either can't apply anything (with one Var) or it can apply\n> > everything (with two Vars). But with 3 or more Vars the order would\n> > certainly matter, so it's problematic.\n> >\n> >\n> > Moreover, it seems a bit strange that this considers dependencies only\n> > on the inner relation. Can't that lead to issues with different join\n> > orders producing different cardinality estimates?\n> >\n> >\n> > I think a better approach would be to either modify the existing block\n> > dealing with extended stats for a single relation to also handle join\n> > conditions. Or perhaps we should invent a separate block, dealing with\n> > *pairs* of relations? 
And it should deal with *all* join clauses for\n> > that pair of relations at once, not one by one.\n> >\n> > As for the exact implementation, I'd imagine we call overall logic to be\n> > something like (for clauses on two joined relations):\n> >\n> > - pick a subset of clauses with the same type of extended statistics on\n> > both sides (MCV, ndistinct, ...), repeat until we can't apply more\n> > statistics\n> >\n> > - estimate remaining clauses either using functional dependencies or in\n> > the regular (old) way\n> >\n> >\n> > As for how to use other types of extended statistics, I think eqjoinsel\n> > could serve as an inspiration. We should look for an MCV list and\n> > ndistinct stats on both sides of the join (possibly on some subset of\n> > clauses), and then do the same thing eqjoinsel does, just with multiple\n> > columns.\n> >\n> > Note: I'm not sure what to do when we find the stats only on one side.\n> > Perhaps we should assume the other side does not have correlations and\n> > use per-column statistics (seems reasonable), or maybe just not apply\n> > anything (seems a bit too harsh).\n> >\n> > Anyway, if there are some non-estimated clauses, we could try applying\n> > functional dependencies similarly to what this patch does. It's also\n> > consistent with statext_clauselist_selectivity - that also tries to\n> > apply MCV lists first, and only then we try functional dependencies.\n> >\n> >\n> > BTW, should this still rely on oprrest (e.g. F_EQSEL). That's the\n> > selectivity function for restriction (non-join) clauses, so maybe we\n> > should be looking at oprjoin when dealing with joins? Not sure.\n> >\n> >\n> > One bit that I find *very* interesting is the calc_joinrel_size_estimate\n> > part, with this comment:\n> >\n> > /*\n> > * Try to take in account functional dependencies between attributes\n> > * of clauses pushed-down to joined relations and retstrictlist\n> > * clause. 
Right now we consider only case of restrictlist consists of\n> > * one clause.\n> > */\n> >\n> > If I understand the comment and the code after it, it essentially tries\n> > to apply extended statistics from both the join clauses and filters at\n> > the relation level. That is, with a query like\n> >\n> > SELECT * FROM t1 JOIN t2 ON (t1.a = t2.a) WHERE t1.b = 10\n> >\n> > we would be looking at statistics on t1(a,b), because we're interested\n> > in estimating conditional probability distribution\n> >\n> > P(t1.a = a? | t1.b = 10)\n> >\n> > I think that's extremely interesting and powerful, because it allows us\n> > to \"restrict\" the multi-column MCV lists, we could probably estimate\n> > number of distinct \"a\" values in rows with \"b=10\" like:\n> >\n> > ndistinct(a,b) / ndistinct(b)\n> >\n> > and do various interesting stuff like this.\n> >\n> > That will require some improvements to the extended statistics code (to\n> > allow passing a list of conditions), but that's quite doable. I think\n> > the code actually did something like that originally ;-)\n> >\n> >\n> > Obviously, none of this is achievable for PG14, as we're in the middle\n> > of the last CF. 
But if you're interested in working on this for PG15,\n> > I'd love to cooperate on that.\n> >\n> >\n> > regards\n> >\n> Hi Tomas,\n> Thank you for review of my patch.\n> My primary attention was to implement some kid of adaptive query\n> optimization based online_analyze extension and building extended\n> statistic on demand.\n> I have change clausesel.c because right now extended statistic is not\n> used for join selectivity estimation and manual or automatic adding such\n> statistic can help to\n> choose more efficient plan for queries with joins.\n> I agree wit you that it can be done in better way, handling more use cases.\n> I will be glad to cooperate with you in improving join selectivity\n> estimation using extended statistic.\n>\n>\n>\n> The patch does not compile, and needs your attention.\n\nhttps://cirrus-ci.com/task/6397726985289728\n\nclausesel.c:74:28: error: too few arguments to function\n‘choose_best_statistics’\nStatisticExtInfo *stat = choose_best_statistics(rel->statlist,\nSTATS_EXT_DEPENDENCIES,\n^~~~~~~~~~~~~~~~~~~~~~\nIn file included from clausesel.c:24:\n../../../../src/include/statistics/statistics.h:123:26: note: declared here\nexter\n\n\nI am changing the status to \"Waiting on Author\".\n\n\n-- \nIbrar Ahmed\n\nOn Mon, Mar 15, 2021 at 8:42 PM Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\nOn 11.03.2021 03:47, Tomas Vondra wrote:\n> Hi Konstantin,\n>\n> Thanks for working on this! Using extended statistics to improve join\n> cardinality estimates was definitely on my radar, and this patch seems\n> like a good start.\n>\n> I had two basic ideas about how we might improve join estimates:\n>\n> (a) use per-table extended statistics to estimate join conditions\n>\n> (b) invent multi-table extended statistics (requires inventing how to\n> sample the tables in a coordinated way, etc.)\n>\n> This patch aims to do (a) which is perfectly reasonable - I think we can\n> achieve significant improvements this way. 
I have some ideas about (b),\n> but it seems harder and for a separate thread/patch.\n>\n>\n> The patch includes some *very* interesting ideas, but I think it's does\n> them too late and at the wrong level of abstraction. I mean that:\n>\n> 1) I don't think the code in clausesel.c should deal with extended\n> statistics directly - it requires far too much knowledge about different\n> types of extended stats, what clauses are supported by them, etc.\n> Allowing stats on expressions will make this even worse.\n>\n> Better do that in extended_stats.c, like statext_clauselist_selectivity.\n>\n> 2) in clauselist_selectivity_ext, functional dependencies are applied in\n> the part that processes remaining clauses, not estimated using extended\n> statistics. That seems a bit confusing, and I suspect it may lead to\n> issues - for example, it only processes the clauses incrementally, in a\n> particular order. That probably affects the  result, because it affects\n> which functional dependencies we can apply.\n>\n> In the example query that's not an issue, because it only has two Vars,\n> so it either can't apply anything (with one Var) or it can apply\n> everything (with two Vars). But with 3 or more Vars the order would\n> certainly matter, so it's problematic.\n>\n>\n> Moreover, it seems a bit strange that this considers dependencies only\n> on the inner relation. Can't that lead to issues with different join\n> orders producing different cardinality estimates?\n>\n>\n> I think a better approach would be to either modify the existing block\n> dealing with extended stats for a single relation to also handle join\n> conditions. Or perhaps we should invent a separate block, dealing with\n> *pairs* of relations? 
And it should deal with *all* join clauses for\n> that pair of relations at once, not one by one.\n>\n> As for the exact implementation, I'd imagine we call overall logic to be\n> something like (for clauses on two joined relations):\n>\n> - pick a subset of clauses with the same type of extended statistics on\n> both sides (MCV, ndistinct, ...), repeat until we can't apply more\n> statistics\n>\n> - estimate remaining clauses either using functional dependencies or in\n> the regular (old) way\n>\n>\n> As for how to use other types of extended statistics, I think eqjoinsel\n> could serve as an inspiration. We should look for an MCV list and\n> ndistinct stats on both sides of the join (possibly on some subset of\n> clauses), and then do the same thing eqjoinsel does, just with multiple\n> columns.\n>\n> Note: I'm not sure what to do when we find the stats only on one side.\n> Perhaps we should assume the other side does not have correlations and\n> use per-column statistics (seems reasonable), or maybe just not apply\n> anything (seems a bit too harsh).\n>\n> Anyway, if there are some non-estimated clauses, we could try applying\n> functional dependencies similarly to what this patch does. It's also\n> consistent with statext_clauselist_selectivity - that also tries to\n> apply MCV lists first, and only then we try functional dependencies.\n>\n>\n> BTW, should this still rely on oprrest (e.g. F_EQSEL). That's the\n> selectivity function for restriction (non-join) clauses, so maybe we\n> should be looking at oprjoin when dealing with joins? Not sure.\n>\n>\n> One bit that I find *very* interesting is the calc_joinrel_size_estimate\n> part, with this comment:\n>\n>    /*\n>     * Try to take in account functional dependencies between attributes\n>     * of clauses pushed-down to joined relations and retstrictlist\n>     * clause. 
Right now we consider only case of restrictlist consists of\n>     * one clause.\n>     */\n>\n> If I understand the comment and the code after it, it essentially tries\n> to apply extended statistics from both the join clauses and filters at\n> the relation level. That is, with a query like\n>\n>      SELECT * FROM t1 JOIN t2 ON (t1.a = t2.a) WHERE t1.b = 10\n>\n> we would be looking at statistics on t1(a,b), because we're interested\n> in estimating conditional probability distribution\n>\n>     P(t1.a = a? | t1.b = 10)\n>\n> I think that's extremely interesting and powerful, because it allows us\n> to \"restrict\" the multi-column MCV lists, we could probably estimate\n> number of distinct \"a\" values in rows with \"b=10\" like:\n>\n>      ndistinct(a,b) / ndistinct(b)\n>\n> and do various interesting stuff like this.\n>\n> That will require some improvements to the extended statistics code (to\n> allow passing a list of conditions), but that's quite doable. I think\n> the code actually did something like that originally ;-)\n>\n>\n> Obviously, none of this is achievable for PG14, as we're in the middle\n> of the last CF. 
But if you're interested in working on this for PG15,\n> I'd love to cooperate on that.\n>\n>\n> regards\n>\nHi Tomas,\nThank you for review of my patch.\nMy primary attention was to implement some kid of adaptive query \noptimization based online_analyze extension and building extended \nstatistic on demand.\nI have change clausesel.c because right now extended statistic is not \nused for join selectivity estimation and manual or automatic adding such \nstatistic can help to\nchoose more efficient plan for queries with joins.\nI agree wit you that it can be done in better way, handling more use cases.\nI will be glad to cooperate with you in improving join selectivity \nestimation using extended statistic.\n\n\n\nThe patch does not compile, and needs your attention.https://cirrus-ci.com/task/6397726985289728clausesel.c:74:28: error: too few arguments to function ‘choose_best_statistics’StatisticExtInfo *stat = choose_best_statistics(rel->statlist, STATS_EXT_DEPENDENCIES,^~~~~~~~~~~~~~~~~~~~~~In file included from clausesel.c:24:../../../../src/include/statistics/statistics.h:123:26: note: declared hereexterI am changing the status to \"Waiting on Author\".-- Ibrar Ahmed", "msg_date": "Mon, 19 Jul 2021 15:52:47 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve join selectivity estimation using extended statistics" }, { "msg_contents": "> On 19 Jul 2021, at 12:52, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n> The patch does not compile, and needs your attention.\n> \n> https://cirrus-ci.com/task/6397726985289728 <https://cirrus-ci.com/task/6397726985289728>\n> \n> clausesel.c:74:28: error: too few arguments to function ‘choose_best_statistics’\n> StatisticExtInfo *stat = choose_best_statistics(rel->statlist, STATS_EXT_DEPENDENCIES,\n> ^~~~~~~~~~~~~~~~~~~~~~\n> In file included from clausesel.c:24:\n> ../../../../src/include/statistics/statistics.h:123:26: note: declared here\n> exter\n> \n> I am changing 
the status to \"Waiting on Author\".\n\nAnd since this is still the case one CF later, I'm marking this as RwF.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 1 Oct 2021 12:13:53 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Improve join selectivity estimation using extended statistics" } ]
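The conditional-distinct idea from Tomas's review above — estimating the number of distinct "a" values among rows with a fixed "b" as ndistinct(a,b) / ndistinct(b) — can be illustrated with a small toy calculation. This is only a sketch of the arithmetic under the usual uniformity assumption; the column names and sample data are invented, and real extended statistics are gathered by ANALYZE inside the server, not computed like this:

```python
# Toy model of the conditional-distinct estimate discussed in the review:
# number of distinct "a" values among rows with a given "b" value,
# estimated as ndistinct(a, b) / ndistinct(b).

def ndistinct(rows, cols):
    """Count distinct combinations of the given columns."""
    return len({tuple(row[c] for c in cols) for row in rows})

def estimate_distinct_a_given_b(rows):
    # Assumes the distinct (a, b) pairs are spread evenly across the
    # distinct b values -- the same kind of uniformity assumption the
    # planner makes elsewhere.
    return ndistinct(rows, ("a", "b")) / ndistinct(rows, ("b",))

rows = [{"a": i % 4, "b": i % 2} for i in range(100)]
# 4 distinct (a, b) pairs and 2 distinct b values, so the estimate is
# 2 distinct "a" values per b value -- which matches this data exactly.
print(estimate_distinct_a_given_b(rows))
```

On skewed data the uniformity assumption can of course be off, which is exactly why combining this with MCV lists, as the review suggests, is attractive.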
[ { "msg_contents": "Hi,\n\nAttached is a small patch for ${subject}\n\n--\nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL", "msg_date": "Wed, 27 Jan 2021 14:53:08 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "protect pg_stat_statements_info() for being used without the library\n loaded" }, { "msg_contents": "On Thu, Jan 28, 2021 at 3:53 AM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> Hi,\n>\n> Attached is a small patch for ${subject}\n\nGood catch, and patch looks good to me.\n\n\n", "msg_date": "Thu, 28 Jan 2021 08:49:54 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: protect pg_stat_statements_info() for being used without the\n library loaded" }, { "msg_contents": "On Thu, Jan 28, 2021 at 08:49:54AM +0800, Julien Rouhaud wrote:\n> Good catch, and patch looks good to me.\n\nThis crashes the server. Looking at all the other modules in\nthe tree, I am not seeing any other hole. This is new as of 9fbc3f3,\nand I will apply it on HEAD.\n--\nMichael", "msg_date": "Thu, 28 Jan 2021 15:42:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: protect pg_stat_statements_info() for being used without the\n library loaded" }, { "msg_contents": "\n\nOn 2021/01/28 15:42, Michael Paquier wrote:\n> On Thu, Jan 28, 2021 at 08:49:54AM +0800, Julien Rouhaud wrote:\n>> Good catch, and patch looks good to me.\n> \n> This crashes the server. Looking at all the other modules in\n> the tree, I am not seeing any other hole. 
This is new as of 9fbc3f3,\n> and I will apply it on HEAD.\n\nThanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 28 Jan 2021 15:53:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: protect pg_stat_statements_info() for being used without the\n library loaded" }, { "msg_contents": "On Thu, Jan 28, 2021 at 03:53:54PM +0900, Fujii Masao wrote:\n> Thanks!\n\nNo problem. Applied as of bca96dd.\n--\nMichael", "msg_date": "Thu, 28 Jan 2021 16:26:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: protect pg_stat_statements_info() for being used without the\n library loaded" } ]
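For context, the fix applied as bca96dd follows the defensive pattern the other pg_stat_statements entry points already use: error out when the extension's shared state was never initialized because the library was not loaded via shared_preload_libraries. A rough sketch of that guard — written from memory of the usual pattern, not a verbatim excerpt of the commit:

```c
Datum
pg_stat_statements_info(PG_FUNCTION_ARGS)
{
	/*
	 * If the module was not loaded via shared_preload_libraries, the
	 * shared-memory pointers are NULL; report an error instead of
	 * dereferencing them and crashing.
	 */
	if (!pgss || !pgss_hash)
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
				 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));

	/* ... build and return the info tuple as before ... */
}
```

This is extension code and only compiles inside the pg_stat_statements module, so it is shown here purely as an illustration of the guard.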
[ { "msg_contents": "Hi!\n\nWe're currently having issues with serializable contention at our shop, and\nafter tracking it down very carefully, we found that there are two main\nreasons for one such conflict:\n1. Page-level predicate locks on primary key indexes, whose associated\ncolumn gets their Id from a sequence.\n2. An empty table which gets inserted to but has those inserted rows\ndeleted before committing.\n\n\nWe're confident they are the only remaining impediments to allowing\ntransactions not to conflict with each other, because we have changed the\ncode just in the right places to make sure that no conflicts arise when we\ndo both of:\n- In the first case, the sequence's nextval and increment are set so that\nthe first transaction gets an Id that is on a different index page than the\nId the second transaction will get.\n- Not writing to the table that once got inserted to and emptied. Before\nthis, we also tried setting enable_seqscan to off and inspecting the query\nplans and SIReadLocks carefully before committing to make sure sequential\nscans were avoided, but it wasn't sufficient.\n\nI believe in the first case the problem is one of granularity which has\nbeen mentioned before at\nhttps://www.postgresql.org/message-id/flat/20110503064807.GB85173%40csail.mit.edu#836599e3c18caf54052114d46f929cbb\n).\nIn the second case, I believe part of the problem could be due to how empty\ntables are predicately locked - according to\nhttps://dba.stackexchange.com/questions/246179/postgresql-serialisation-failure-on-different-ids\n.\n\nIn our case, we use empty tables to keep complex invariants checked at the\nDB level by inserting into them with triggers and making sure deferrable\nconstraints will fail if the rows are still there (thus forcing the\ncommitter to run a \"consistency-enforcing\" job before committing).\nI'm not sure if our use-case is too particular, but we have found in\ngeneral that having little data - which some of our tables do, and 
will\nstill have for the foreseeable future - is sometimes worse than having lots\nof it due to index locking granularity being at least at page-level.\n\nSo I have a few questions:\n- Would index-key / index-gap locking avoid creating serialization\nanomalies for inserts of consecutive Ids that currently fall in the same\nindex page? Is it in the roadmap?\n- A colleague made a suggestion which I found no mention of anywhere: would\nit be possible not to predicate-lock on indices for insertion into\nGENERATED AS IDENTITY columns, unless of course in the case of UPDATE,\nINSERT INTO .. OVERRIDING, ALTER TABLE .. RESTART WITH or other similarly\nconflicting statements?\n- Is there something that can be done for the problem with empty tables?\n\nWe currently use Postgres 11, and anything that could help us change how we\napproach the problem on our side is very much welcome too!\n\nThanks in advance,\nMarcelo.", "msg_date": "Wed, 27 Jan 2021 19:01:43 -0300", "msg_from": "Marcelo Zabani <mzabani@gmail.com>", "msg_from_op": true, "msg_subject": "Index predicate locking and serializability contention" } ]
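Marcelo's first workaround — spacing sequence values so that concurrent transactions insert onto different index pages — can be sketched with a toy model of page-granular predicate locking. This is only an illustration: the ids-per-page figure is invented, and real btree leaf-page capacity depends on tuple width and fillfactor:

```python
IDS_PER_PAGE = 200  # invented capacity of one btree leaf page

def index_page(id_):
    """With page-level predicate locks, two ids conflict iff same page."""
    return id_ // IDS_PER_PAGE

def conflicts(id1, id2):
    return index_page(id1) == index_page(id2)

# Consecutive sequence values (increment 1) land on the same leaf page,
# so the two serializable transactions appear to conflict...
print(conflicts(1001, 1002))

# ...while an increment at least as large as the page capacity keeps
# each transaction's predicate lock on its own page.
print(conflicts(1000, 1000 + IDS_PER_PAGE))
```

This is why key- or gap-level predicate locking, as asked about above, would remove the false conflict without having to tune the sequence at all.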
[ { "msg_contents": "Hey, all,\n\nWhen creating a logical replication connection that isn't allowed by the\ncurrent pg_hba.conf, the error message states that a \"replication\nconnection\" is not allowed.\n\nThis error message is confusing because although the user is trying to\ncreate a replication connection and specified \"replication=database\" in\nthe connection string, the special \"replication\" pg_hba.conf keyword\ndoes not apply. I believe the error message should just refer to a\nregular connection and specify the database the user is trying to\nconnect to.\n\nWhen connecting using \"replication\" in a connection string, the variable\nam_walsender is set to true. When \"replication=database\" is specified,\nthe variable am_db_walsender is also set to true [1].\n\nWhen checking whether a pg_hba.conf rule matches in libpq/hba.c, we only\ncheck for the \"replication\" keyword when am_walsender && !am_db_walsender [2].\n\nBut then when reporting error messages in libpq/auth.c, only\nam_walsender is used in the condition that chooses whether to specify\n\"replication connection\" or \"connection\" to a specific database in the\nerror message [3] [4].\n\nIn this patch I have modified the conditions in libpq/auth.c to check\nam_walsender && !am_db_walsender, as in hba.c. 
I have also added a\nclarification in the documentation for pg_hba.conf.\n\n> The value `replication` specifies that the record matches if a\n> physical replication connection is requested (note that replication\n> - connections do not specify any particular database).\n> + connections do not specify any particular database), but it does not\n> + match logical replication connections that specify\n> + `replication=database` and a `dbname` in their connection string.\n\nThanks,\nPaul\n\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/postmaster/postmaster.c;h=7de27ee4e0171863faca2f24d62488b773a7636e;hb=HEAD#l2154\n\n[2]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/libpq/hba.c;h=371dccb852fd5c0775c7ebd82b67de3f20dc70af;hb=HEAD#l640\n\n[3]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/libpq/auth.c;h=545635f41a916c740aacd6a8b68672d10378b7ab;hb=HEAD#l420\n\n[4]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/libpq/auth.c;h=545635f41a916c740aacd6a8b68672d10378b7ab;hb=HEAD#l487", "msg_date": "Wed, 27 Jan 2021 18:58:40 -0800", "msg_from": "Paul Martinez <paulmtz@google.com>", "msg_from_op": true, "msg_subject": "[PATCH] pg_hba.conf error messages for logical replication\n connections" }, { "msg_contents": "On Thu, Jan 28, 2021 at 1:51 PM Paul Martinez <paulmtz@google.com> wrote:\n>\n> Hey, all,\n>\n> When creating a logical replication connection that isn't allowed by the\n> current pg_hba.conf, the error message states that a \"replication\n> connection\" is not allowed.\n>\n> This error message is confusing because although the user is trying to\n> create a replication connection and specified \"replication=database\" in\n> the connection string, the special \"replication\" pg_hba.conf keyword\n> does not apply.\n>\n\nRight.\n\n> I believe the error message should just refer to a\n> regular connection and specify the database the user is trying to\n> connect 
to.\n>\n\nWhat exactly are you bothered about here? Is the database name not\npresent in the message your concern or the message uses 'replication'\nbut actually it doesn't relate to 'replication' specified in\npg_hba.conf your concern? I think with the current scheme one might\nsay that replication word in error message helps them to distinguish\nlogical replication connection error from a regular connection error.\nI am not telling what you are proposing is wrong but just that the\ncurrent scheme of things might be helpful to some users. If you can\nexplain a bit how the current message mislead you and the proposed one\nsolves that confusion that would be better?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 29 Jan 2021 09:47:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_hba.conf error messages for logical replication\n connections" }, { "msg_contents": "On Thu, Jan 28, 2021 at 8:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> What exactly are you bothered about here? Is the database name not\n> present in the message your concern or the message uses 'replication'\n> but actually it doesn't relate to 'replication' specified in\n> pg_hba.conf your concern? I think with the current scheme one might\n> say that replication word in error message helps them to distinguish\n> logical replication connection error from a regular connection error.\n> I am not telling what you are proposing is wrong but just that the\n> current scheme of things might be helpful to some users. 
If you can\n> explain a bit how the current message misled you and the proposed one\n> solves that confusion that would be better?\n>\n\nMy main confusion arose from conflating the word \"replication\" in the\nerror message with the \"replication\" keyword in pg_hba.conf.\n\nIn my case, I was actually confused because I was creating logical\nreplication connections that weren't getting rejected, despite a lack\nof any \"replication\" rules in my pg_hba.conf. I had the faulty\nassumption that replication connection requires \"replication\" keyword,\nand my change to the documentation makes it explicit that logical\nreplication connections do not match the \"replication\" keyword.\n\nI was digging through the code trying to understand why it was working,\nand also making manual connections before I stumbled upon these error\nmessages.\n\nThe fact that the error message doesn't include the database name\ndefinitely contributed to my confusion. It only mentions the word\n\"replication\", and not a database name, and that reinforces the idea\nthat the \"replication\" keyword rule should apply, because it seems\n\"replication\" has replaced the database name.\n\nBut overall, I would agree that the current messages aren't wrong,\nand my fix could still cause confusion because now logical replication\nconnections won't be described as \"replication\" connections.\n\nHow about explicitly specifying physical vs. 
logical replication in the\nerror message, and also adding hints for clarifying the use of\nthe \"replication\" keyword in pg_hba.conf?\n\nif physical replication\n Error \"pg_hba.conf rejects physical replication connection ...\"\n Hint \"Physical replication connections only match pg_hba.conf rules\nusing the \"replication\" keyword\"\nelse if logical replication\n Error \"pg_hba.conf rejects logical replication connection to database %s ...\"\n // Maybe add this?\n Hint \"Logical replication connections do not match pg_hba.conf rules\nusing the \"replication\" keyword\"\nelse\n Error \"pg_hba.conf rejects connection to database %s ...\"\n\nIf I did go with this approach, would it be better to have three\nseparate branches, or to just introduce another variable for the\nconnection type? I'm not sure what is optimal for translation. (If both\ntypes of replication connections get hints then probably three branches\nis better.)\n\nconst char *connection_type;\n\nconnection_type =\n am_db_walsender ? _(\"logical replication connection\") :\n am_walsender ? _(\"physical replication connection\") :\n _(\"connection\")\n\n\n- Paul\n\n\n", "msg_date": "Fri, 29 Jan 2021 10:53:28 -0800", "msg_from": "Paul Martinez <paulmtz@google.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_hba.conf error messages for logical replication\n connections" }, { "msg_contents": "On Sat, Jan 30, 2021 at 12:24 AM Paul Martinez <paulmtz@google.com> wrote:\n>\n> On Thu, Jan 28, 2021 at 8:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > What exactly are you bothered about here? Is the database name not\n> > present in the message your concern or the message uses 'replication'\n> > but actually it doesn't relate to 'replication' specified in\n> > pg_hba.conf your concern? 
I think with the current scheme one might\n> > say that replication word in error message helps them to distinguish\n> > logical replication connection error from a regular connection error.\n> > I am not telling what you are proposing is wrong but just that the\n> > current scheme of things might be helpful to some users. If you can\n> > explain a bit how the current message misled you and the proposed one\n> > solves that confusion that would be better?\n> >\n>\n> My main confusion arose from conflating the word \"replication\" in the\n> error message with the \"replication\" keyword in pg_hba.conf.\n>\n> In my case, I was actually confused because I was creating logical\n> replication connections that weren't getting rejected, despite a lack\n> of any \"replication\" rules in my pg_hba.conf. I had the faulty\n> assumption that replication connection requires \"replication\" keyword,\n> and my change to the documentation makes it explicit that logical\n> replication connections do not match the \"replication\" keyword.\n>\n\nI think it is good to be more explicit in the documentation but we\nalready mention \"physical replication connection\" in the sentence. So\nit might be better that we add a separate sentence related to logical\nreplication.\n\n> I was digging through the code trying to understand why it was working,\n> and also making manual connections before I stumbled upon these error\n> messages.\n>\n> The fact that the error message doesn't include the database name\n> definitely contributed to my confusion. 
It only mentions the word\n> \"replication\", and not a database name, and that reinforces the idea\n> that the \"replication\" keyword rule should apply, because it seems\n> \"replication\" has replaced the database name.\n>\n> But overall, I would agree that the current messages aren't wrong,\n> and my fix could still cause confusion because now logical replication\n> connections won't be described as \"replication\" connections.\n>\n> How about explicitly specifying physical vs. logical replication in the\n> error message, and also adding hints for clarifying the use of\n> the \"replication\" keyword in pg_hba.conf?\n>\n\nYeah, hints or more details might improve the situation but I am not\nsure we want to add more branching here. Can we write something\nsimilar to HOSTNAME_LOOKUP_DETAIL for hints? Also, I think what you\nare proposing to write is more of a errdetail kind of message. See\nmore error routines in the docs [1].\n\n[1] - https://www.postgresql.org/docs/devel/error-message-reporting.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 30 Jan 2021 10:10:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_hba.conf error messages for logical replication\n connections" }, { "msg_contents": "On Fri, Jan 29, 2021 at 8:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Yeah, hints or more details might improve the situation but I am not\n> sure we want to add more branching here. Can we write something\n> similar to HOSTNAME_LOOKUP_DETAIL for hints? Also, I think what you\n> are proposing to write is more of a errdetail kind of message. See\n> more error routines in the docs [1].\n>\n\nAlright, I've updated both sets of error messages to use something like\nHOSTNAME_LOOKUP_DETAIL, both for the error message itself, and for the\nextra detail message about the replication keyword. 
Since now we specify\nboth an errdetail (sent to the client) and an errdetail_log (sent to the\nlog), I renamed HOSTNAME_LOOKUP_DETAIL to HOSTNAME_LOOKUP_DETAIL_LOG.\n\n- Paul", "msg_date": "Mon, 1 Feb 2021 12:12:43 -0800", "msg_from": "Paul Martinez <paulmtz@google.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_hba.conf error messages for logical replication\n connections" }, { "msg_contents": "On Tue, Feb 2, 2021 at 1:43 AM Paul Martinez <paulmtz@google.com> wrote:\n>\n> On Fri, Jan 29, 2021 at 8:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Yeah, hints or more details might improve the situation but I am not\n> > sure we want to add more branching here. Can we write something\n> > similar to HOSTNAME_LOOKUP_DETAIL for hints? Also, I think what you\n> > are proposing to write is more of a errdetail kind of message. See\n> > more error routines in the docs [1].\n> >\n>\n> Alright, I've updated both sets of error messages to use something like\n> HOSTNAME_LOOKUP_DETAIL, both for the error message itself, and for the\n> extra detail message about the replication keyword. Since now we specify\n> both an errdetail (sent to the client) and an errdetail_log (sent to the\n> log), I renamed HOSTNAME_LOOKUP_DETAIL to HOSTNAME_LOOKUP_DETAIL_LOG.\n>\n\nI don't think we need to update the error messages, it makes the code\na bit difficult to parse without much benefit. How about just adding\nerrdetail? See attached and let me know what you think?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 16 Feb 2021 15:52:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_hba.conf error messages for logical replication\n connections" }, { "msg_contents": "On Tue, Feb 16, 2021 at 2:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I don't think we need to update the error messages, it makes the code\n> a bit difficult to parse without much benefit. 
How about just adding\n> errdetail? See attached and let me know what you think?\n>\n\nYeah, I think that looks good. Thanks!\n\n- Paul\n\n\n", "msg_date": "Tue, 16 Feb 2021 09:10:17 -0800", "msg_from": "Paul Martinez <paulmtz@google.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_hba.conf error messages for logical replication\n connections" }, { "msg_contents": "On Tue, Feb 16, 2021 at 10:40 PM Paul Martinez <paulmtz@google.com> wrote:\n>\n> On Tue, Feb 16, 2021 at 2:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I don't think we need to update the error messages, it makes the code\n> > a bit difficult to parse without much benefit. How about just adding\n> > errdetail? See attached and let me know what you think?\n> >\n>\n> Yeah, I think that looks good. Thanks!\n>\n\nOkay, I think normally it might not be a good idea to expose\nadditional information about authentication failure especially about\npg_hba so as to reduce the risk of exposing information to potential\nattackers but in this case, it appears to me that it would be helpful\nfor users. 
Just in case someone else has any opinion, for logical\nreplication connection failures, the messages before and after fix\nwould be:\n\nBefore fix\nERROR: could not connect to the publisher: connection to server at\n\"localhost\" (::1), port 5432 failed: FATAL: pg_hba.conf rejects\nreplication connection for host \"::1\", user \"KapilaAm\", no encryption\n\nAfter fix error:\nERROR: could not connect to the publisher: connection to server at\n\"localhost\" (::1), port 5432 failed: FATAL: pg_hba.conf rejects\nconnection for host \"::1\", user \"KapilaAm\", database \"postgres\", no\nencryption\nDETAIL: Logical replication connections do not match pg_hba.conf\nrules using the \"replication\" keyword.\n\nDoes anyone see a problem with the DETAIL message or the change of\nerror message (database name appears in the new message) in this case?\n\nAttached patch with the updated commit message.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 17 Feb 2021 16:31:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_hba.conf error messages for logical replication\n connections" }, { "msg_contents": "On Wed, Feb 17, 2021, at 8:01 AM, Amit Kapila wrote:\n> Before fix\n> ERROR: could not connect to the publisher: connection to server at\n> \"localhost\" (::1), port 5432 failed: FATAL: pg_hba.conf rejects\n> replication connection for host \"::1\", user \"KapilaAm\", no encryption\n> \n> After fix error:\n> ERROR: could not connect to the publisher: connection to server at\n> \"localhost\" (::1), port 5432 failed: FATAL: pg_hba.conf rejects\n> connection for host \"::1\", user \"KapilaAm\", database \"postgres\", no\n> encryption\n> DETAIL: Logical replication connections do not match pg_hba.conf\n> rules using the \"replication\" keyword.\nThe new message is certainly an improvement because it provides an additional \ncomponent (database name) that could be used to figure out what's wrong with \nthe logical 
replication connection. However, I wouldn't like to add a DETAIL \nmessage for something that could be easily inspected in the pg_hba.conf. The \nold message leaves a doubt about which rule was used (absence of database name)\nbut the new message makes this very clear. IMO with this new message, we don't \nneed a DETAIL message. If in doubt, user can always read that documentation \n(the new sentence clarifies the \"replication\" usage for logical replication \nconnections).\n\nRegarding the documentation, I think the new sentence a bit confusing. The \nmodified sentence is providing detailed information about \"replication\" in the \ndatabase field then you start mentioned \"replication=database\". Even though it \nis related to the connection string, it could confuse the reader for a second. \nI would say \"it does not match logical replication connections\". It seems \nsufficient to inform the reader that he/she cannot use records with \n\"replication\" to match logical replication connections.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Wed, 17 Feb 2021 21:27:56 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_pg=5Fhba.conf_error_messages_for_logical_replicati?=\n =?UTF-8?Q?on_connections?=" }, { "msg_contents": "On Thu, Feb 18, 2021 at 5:59 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Wed, Feb 17, 2021, at 8:01 AM, Amit Kapila wrote:\n>\n> Before fix\n> ERROR: could not connect to the publisher: connection to server at\n> \"localhost\" (::1), port 5432 failed: FATAL: pg_hba.conf rejects\n> replication connection for host \"::1\", user \"KapilaAm\", no encryption\n>\n> After fix error:\n> ERROR: could not connect to the publisher: connection to server at\n> \"localhost\" (::1), port 5432 failed: FATAL: pg_hba.conf rejects\n> connection for host \"::1\", user \"KapilaAm\", database \"postgres\", no\n> encryption\n> DETAIL: Logical replication connections do not match pg_hba.conf\n> rules using the \"replication\" keyword.\n>\n> The new message is certainly an improvement because 
it provides an additional\n> component (database name) that could be used to figure out what's wrong with\n> the logical replication connection. However, I wouldn't like to add a DETAIL\n> message for something that could be easily inspected in the pg_hba.conf. The\n> old message leaves a doubt about which rule was used (absence of database name)\n> but the new message makes this very clear. IMO with this new message, we don't\n> need a DETAIL message.\n>\n\nYou have a point. Paul, do you have any thoughts on this?\n\n> If in doubt, user can always read that documentation\n> (the new sentence clarifies the \"replication\" usage for logical replication\n> connections).\n>\n> Regarding the documentation, I think the new sentence a bit confusing. The\n> modified sentence is providing detailed information about \"replication\" in the\n> database field then you start mentioned \"replication=database\". Even though it\n> is related to the connection string, it could confuse the reader for a second.\n> I would say \"it does not match logical replication connections\". 
It seems\n> sufficient to inform the reader that he/she cannot use records with\n> \"replication\" to match logical replication connections.\n>\n\nFair point.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 18 Feb 2021 14:42:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_hba.conf error messages for logical replication\n connections" }, { "msg_contents": "On Thu, Feb 18, 2021 at 2:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Feb 18, 2021 at 5:59 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Wed, Feb 17, 2021, at 8:01 AM, Amit Kapila wrote:\n> >\n> > Before fix\n> > ERROR: could not connect to the publisher: connection to server at\n> > \"localhost\" (::1), port 5432 failed: FATAL: pg_hba.conf rejects\n> > replication connection for host \"::1\", user \"KapilaAm\", no encryption\n> >\n> > After fix error:\n> > ERROR: could not connect to the publisher: connection to server at\n> > \"localhost\" (::1), port 5432 failed: FATAL: pg_hba.conf rejects\n> > connection for host \"::1\", user \"KapilaAm\", database \"postgres\", no\n> > encryption\n> > DETAIL: Logical replication connections do not match pg_hba.conf\n> > rules using the \"replication\" keyword.\n> >\n> > The new message is certainly an improvement because it provides an additional\n> > component (database name) that could be used to figure out what's wrong with\n> > the logical replication connection. However, I wouldn't like to add a DETAIL\n> > message for something that could be easily inspected in the pg_hba.conf. The\n> > old message leaves a doubt about which rule was used (absence of database name)\n> > but the new message makes this very clear. IMO with this new message, we don't\n> > need a DETAIL message.\n> >\n>\n> You have a point. 
Paul, do you have any thoughts on this?\n>\n\nChanged as per suggestion.\n\n> > If in doubt, user can always read that documentation\n> > (the new sentence clarifies the \"replication\" usage for logical replication\n> > connections).\n> >\n> > Regarding the documentation, I think the new sentence a bit confusing. The\n> > modified sentence is providing detailed information about \"replication\" in the\n> > database field then you start mentioned \"replication=database\". Even though it\n> > is related to the connection string, it could confuse the reader for a second.\n> > I would say \"it does not match logical replication connections\". It seems\n> > sufficient to inform the reader that he/she cannot use records with\n> > \"replication\" to match logical replication connections.\n> >\n>\n> Fair point.\n>\n\nI have used a bit of different wording here to make things clear.\n\nLet me know what you think of the attached?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 20 Feb 2021 16:03:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_hba.conf error messages for logical replication\n connections" }, { "msg_contents": "On Sat, Feb 20, 2021, at 7:33 AM, Amit Kapila wrote:\n> I have used a bit of different wording here to make things clear.\n> \n> Let me know what you think of the attached?\nWFM.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Mon, 22 Feb 2021 09:38:36 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_pg=5Fhba.conf_error_messages_for_logical_replicati?=\n =?UTF-8?Q?on_connections?=" }, { "msg_contents": "On Mon, Feb 22, 2021 at 6:08 PM Euler Taveira 
<euler@eulerto.com> wrote:\n>\n> On Sat, Feb 20, 2021, at 7:33 AM, Amit Kapila wrote:\n>\n> I have used a bit of different wording here to make things clear.\n>\n> Let me know what you think of the attached?\n>\n> WFM.\n>\n\nThanks, Pushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 23 Feb 2021 13:36:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_hba.conf error messages for logical replication\n connections" } ]
[ { "msg_contents": "Hi all,\n\nWe're about 3 days from the end of this Commitfest. The current status is:\n\nNeeds review: 150 (+1)\nWaiting on Author: 24 (-5)\nReady for Committer: 24 (+0)\nCommitted: 52 (+2)\nWithdrawn: 8 (+0)\nMoved to next CF: 2 (+2)\n\nThis weekend, I'm planning to look through Waiting-on-Author patches\nand set them to \"Returned with Feedback\" (excluding bug fixe patches)\nif these have not changed for a very long time. Currently, we have 24\nWoA patches. The following patches have not updated for more than 1\nmonth and seem inactive.\n\n* Index Skip Scan\n * WoA since 2020-12-01\n * Latest patch on 2020-10-24\n * https://commitfest.postgresql.org/31/1741/\n\n* pgbench - add a synchronization barrier when starting\n * WoA since 2020-12-01\n * Latest patch on 2020-11-14\n * https://commitfest.postgresql.org/31/2557/\n\n* remove deprecated v8.2 containment operators\n * WoA since 2020-12-01\n * Latest patch on 2020-10-27\n * https://commitfest.postgresql.org/31/2798/\n\n* bitmap cost should account for correlated indexes\n * WoA since 2020-12-03\n * Latest patch on 2020-11-06\n * https://commitfest.postgresql.org/31/2310/\n\n* CREATE INDEX CONCURRENTLY on partitioned table\n * WoA since 2020-12-27\n * Latest patch on 2020-11-29\n * https://commitfest.postgresql.org/31/2815/\n\n* VACUUM (FAST_FREEZE )\n * WoA since 2021-01-04 (review comments are already sent on 2020-12-01)\n * Latest patch: 2020-11-29\n * https://commitfest.postgresql.org/31/2908/\n\nI'm going to ask the current status of these patches and set them to\nRwF if appropriate.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 28 Jan 2021 20:37:46 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Commitfest 2021-01 ends in 3 days" } ]
[ { "msg_contents": "Hello all,\r\n\r\nFirst, the context: recently I've been digging into the use of third-\r\nparty authentication systems with Postgres. One sticking point is the\r\nneed to have a Postgres role corresponding to the third-party user\r\nidentity, which becomes less manageable at scale. I've been trying to\r\ncome up with ways to make that less painful, and to start peeling off\r\nsmaller feature requests.\r\n\r\n= Problem =\r\n\r\nFor auth methods that allow pg_ident mapping, there's a way around the\r\none-role-per-user problem, which is to have all users that match some\r\npattern map to a single role. For Kerberos, you might specify that all\r\nuser principals under @EXAMPLE.COM are allowed to connect as some\r\ngeneric user role, and that everyone matching */admin@EXAMPLE.COM is\r\nadditionally allowed to connect as an admin role.\r\n\r\nUnfortunately, once you've been assigned a role, Postgres either makes\r\nthe original identity difficult to retrieve, or forgets who you were\r\nentirely:\r\n\r\n- for GSS, the original principal is saved in the Port struct, and you\r\nneed to either pull it out of pg_stat_gssapi, or enable log_connections\r\nand piece the log line together with later log entries;\r\n- for LDAP, the bind DN is discarded entirely;\r\n- for TLS client certs, the DN has to be pulled from pg_stat_ssl or the\r\nsslinfo extension (and it's truncated to 64 characters, so good luck if\r\nyou have a particularly verbose PKI tree);\r\n- for peer auth, the username of the peereid is discarded;\r\n- etc.\r\n\r\n= Proposal =\r\n\r\nI propose that every auth method should store the string it uses to\r\nidentify a user -- what I'll call an \"authenticated identity\" -- into\r\none central location in Port, after authentication succeeds but before\r\nany pg_ident authorization occurs. This field can then be exposed in\r\nlog_line_prefix. 
(It could additionally be exposed through a catalog\r\ntable or SQL function, if that were deemed useful.) This would let a\r\nDBA more easily audit user activity when using more complicated\r\npg_ident setups.\r\n\r\nAttached is a proof of concept that implements this for a handful of\r\nauth methods:\r\n\r\n- ldap uses the final bind DN as its authenticated identity\r\n- gss uses the user principal\r\n- cert uses the client's Subject DN\r\n- scram-sha-256 just uses the Postgres username\r\n\r\nWith this patch, the authenticated identity can be inserted into\r\nlog_line_prefix using the placeholder %Z.\r\n\r\n= Implementation Notes =\r\n\r\n- Client certificates can be combined with other authentication methods\r\nusing the clientcert option, but that doesn't provide an authenticated\r\nidentity in my proposal. *Only* the cert auth method populates the\r\nauthenticated identity from a client certificate. This keeps the patch\r\nfrom having to deal with two simultaneous identity sources.\r\n\r\n- The trust auth method has an authenticated identity of NULL, logged\r\nas [unknown]. I kept this property even when clientcert=verify-full is\r\nin use (which would otherwise be identical to the cert auth method), to\r\nhammer home that 1) trust is not an authentication method and 2) the\r\nclientcert option does not provide an authenticated identity. Whether\r\nthis is a useful property, or just overly pedantic, is probably\r\nsomething that could be debated.\r\n\r\n- The cert method's Subject DN string formatting needs the same\r\nconsiderations that are currently under discussion in Andrew's DN patch\r\n[1].\r\n\r\n- I'm not crazy about the testing method -- it leads to a lot of log\r\nfile proliferation in the tests -- but I wanted to make sure that we\r\nhad test coverage for the log lines themselves. 
The ability to\r\ncorrectly audit user behavior depends on us logging the correct\r\nidentity after authentication, but not a moment before.\r\n\r\nWould this be generally useful for those of you using pg_ident in\r\nproduction? Have I missed something that already provides this\r\nfunctionality?\r\n\r\nThanks,\r\n--Jacob\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/flat/92e70110-9273-d93c-5913-0bccb6562740@dunslane.net", "msg_date": "Thu, 28 Jan 2021 18:22:07 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> First, the context: recently I've been digging into the use of third-\n> party authentication systems with Postgres. One sticking point is the\n> need to have a Postgres role corresponding to the third-party user\n> identity, which becomes less manageable at scale. I've been trying to\n> come up with ways to make that less painful, and to start peeling off\n> smaller feature requests.\n\nYeah, it'd be nice to improve things in this area.\n\n> = Problem =\n> \n> For auth methods that allow pg_ident mapping, there's a way around the\n> one-role-per-user problem, which is to have all users that match some\n> pattern map to a single role. 
For Kerberos, you might specify that all\n> user principals under @EXAMPLE.COM are allowed to connect as some\n> generic user role, and that everyone matching */admin@EXAMPLE.COM is\n> additionally allowed to connect as an admin role.\n> \n> Unfortunately, once you've been assigned a role, Postgres either makes\n> the original identity difficult to retrieve, or forgets who you were\n> entirely:\n> \n> - for GSS, the original principal is saved in the Port struct, and you\n> need to either pull it out of pg_stat_gssapi, or enable log_connections\n> and piece the log line together with later log entries;\n\nThis has been improved on of late, but it's been done piece-meal.\n\n> - for LDAP, the bind DN is discarded entirely;\n\nWe don't support pg_ident.conf-style entries for LDAP, meaning that the\nuser provided has to match what we check, so I'm not sure what would be\nimproved with this change..? I'm also just generally not thrilled with\nputting much effort into LDAP as it's a demonstrably insecure\nauthentication mechanism.\n\n> - for TLS client certs, the DN has to be pulled from pg_stat_ssl or the\n> sslinfo extension (and it's truncated to 64 characters, so good luck if\n> you have a particularly verbose PKI tree);\n\nYeah, it'd be nice to improve on this.\n\n> - for peer auth, the username of the peereid is discarded;\n\nWould be good to improve this too.\n\n> = Proposal =\n> \n> I propose that every auth method should store the string it uses to\n> identify a user -- what I'll call an \"authenticated identity\" -- into\n> one central location in Port, after authentication succeeds but before\n> any pg_ident authorization occurs. This field can then be exposed in\n> log_line_prefix. (It could additionally be exposed through a catalog\n> table or SQL function, if that were deemed useful.) 
This would let a\n> DBA more easily audit user activity when using more complicated\n> pg_ident setups.\n\nThis seems like it would be good to include the CSV format log files\nalso.\n\n> Would this be generally useful for those of you using pg_ident in\n> production? Have I missed something that already provides this\n> functionality?\n\nFor some auth methods, eg: GSS, we've recently added information into\nthe authentication method which logs what the authenticated identity\nwas. The advantage with that approach is that it avoids bloating the\nlog by only logging that information once upon connection rather than\non every log line... I wonder if we should be focusing on a similar\napproach for other pg_ident.conf use-cases instead of having it via\nlog_line_prefix, as the latter means we'd be logging the same value over\nand over again on every log line.\n\nThanks,\n\nStephen", "msg_date": "Fri, 29 Jan 2021 17:01:01 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Jacob Champion (pchampion@vmware.com) wrote:\n>> I propose that every auth method should store the string it uses to\n>> identify a user -- what I'll call an \"authenticated identity\" -- into\n>> one central location in Port, after authentication succeeds but before\n>> any pg_ident authorization occurs. This field can then be exposed in\n>> log_line_prefix. (It could additionally be exposed through a catalog\n>> table or SQL function, if that were deemed useful.) 
This would let a\n>> DBA more easily audit user activity when using more complicated\n>> pg_ident setups.\n\n> This seems like it would be good to include the CSV format log files\n> also.\n\nWhat happens if ALTER USER RENAME is done while the session is still\nalive?\n\nMore generally, exposing this in log_line_prefix seems like an awfully\nnarrow-minded view of what people will want it for. I'd personally\nthink pg_stat_activity a better place to look, for example.\n\n> on every log line... I wonder if we should be focusing on a similar\n> approach for other pg_ident.conf use-cases instead of having it via\n> log_line_prefix, as the latter means we'd be logging the same value over\n> and over again on every log line.\n\nYeah, this seems like about the most expensive way that we could possibly\nchoose to make the info available.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Jan 2021 17:30:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, 2021-01-29 at 17:01 -0500, Stephen Frost wrote:\r\n> > - for LDAP, the bind DN is discarded entirely;\r\n> \r\n> We don't support pg_ident.conf-style entries for LDAP, meaning that the\r\n> user provided has to match what we check, so I'm not sure what would be\r\n> improved with this change..?\r\n\r\nFor simple binds, this gives you almost nothing. For bind+search,\r\nlogging the actual bind DN is still important, in my opinion, since the\r\nmechanism for determining it is more opaque (and may change over time).\r\n\r\nBut as Tom noted -- for both cases, if the role name changes, this\r\nmechanism can still help you audit who the user _actually_ bound as,\r\nnot who you think they should have bound as based on their current role\r\nname.\r\n\r\n(There's also the fact that I think pg_ident mapping for LDAP would be\r\njust as useful as it is for GSS or certs. 
That's for a different\r\nconversation.)\r\n\r\n> I'm also just generally not thrilled with\r\n> putting much effort into LDAP as it's a demonstrably insecure\r\n> authentication mechanism.\r\n\r\nBecause Postgres has to proxy the password? Or is there something else?\r\n\r\n> > I propose that every auth method should store the string it uses to\r\n> > identify a user -- what I'll call an \"authenticated identity\" -- into\r\n> > one central location in Port, after authentication succeeds but before\r\n> > any pg_ident authorization occurs. This field can then be exposed in\r\n> > log_line_prefix. (It could additionally be exposed through a catalog\r\n> > table or SQL function, if that were deemed useful.) This would let a\r\n> > DBA more easily audit user activity when using more complicated\r\n> > pg_ident setups.\r\n> \r\n> This seems like it would be good to include the CSV format log files\r\n> also.\r\n\r\nAgreed in principle... Is the CSV format configurable? Forcing it into\r\nCSV logs by default seems like it'd be a hard sell, especially for\r\npeople not using pg_ident.\r\n\r\n> For some auth methods, eg: GSS, we've recently added information into\r\n> the authentication method which logs what the authenticated identity\r\n> was. The advantage with that approach is that it avoids bloating the\r\n> log by only logging that information once upon connection rather than\r\n> on every log line... 
I wonder if we should be focusing on a similar\r\n> approach for other pg_ident.conf use-cases instead of having it via\r\n> log_line_prefix, as the latter means we'd be logging the same value over\r\n> and over again on every log line.\r\n\r\nAs long as the identity can be easily logged and reviewed by DBAs, I'm\r\nhappy.\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 29 Jan 2021 23:21:36 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, 2021-01-29 at 17:30 -0500, Tom Lane wrote:\r\n> What happens if ALTER USER RENAME is done while the session is still\r\n> alive?\r\n\r\nIMO the authenticated identity should be write-once. Especially since\r\none of my goals is to have greater auditability into events as they've\r\nactually happened. So ALTER USER RENAME should have no effect.\r\n\r\nThis also doesn't really affect third-party auth methods. If I'm bound\r\nas pchampion@EXAMPLE.COM and a superuser changes my username to tlane,\r\nyou _definitely_ don't want to see my authenticated identity change to \r\ntlane@EXAMPLE.COM. That's not who I am.\r\n\r\nSo the potential confusion would come into play with first-party authn.\r\nFrom an audit perspective, I think it's worth it. I did authenticate as\r\npchampion, not tlane.\r\n\r\n> More generally, exposing this in log_line_prefix seems like an awfully\r\n> narrow-minded view of what people will want it for. I'd personally\r\n> think pg_stat_activity a better place to look, for example.\r\n> [...]\r\n> Yeah, this seems like about the most expensive way that we could possibly\r\n> choose to make the info available.\r\n\r\nI'm happy as long as it's _somewhere_. 
:D It's relatively easy to\r\nexpose a single location through multiple avenues, but currently there\r\nis no single location.\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 29 Jan 2021 23:33:02 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> On Fri, 2021-01-29 at 17:30 -0500, Tom Lane wrote:\n>> What happens if ALTER USER RENAME is done while the session is still\n>> alive?\n\n> IMO the authenticated identity should be write-once. Especially since\n> one of my goals is to have greater auditability into events as they've\n> actually happened. So ALTER USER RENAME should have no effect.\n\n> This also doesn't really affect third-party auth methods. If I'm bound\n> as pchampion@EXAMPLE.COM and a superuser changes my username to tlane,\n> you _definitely_ don't want to see my authenticated identity change to \n> tlane@EXAMPLE.COM. That's not who I am.\n\nAh. 
So basically, this comes into play when you consider that some\noutside-the-database entity is your \"real\" authenticated identity.\nThat seems reasonable when using Kerberos or the like, though it's\nnot real meaningful for traditional password-type authentication.\nI'd misunderstood your point before.\n\nSo, if we store this \"real\" identity, is there any security issue\ninvolved in exposing it to other users (via pg_stat_activity or\nwhatever)?\n\nI remain concerned about the cost and inconvenience of exposing\nit via log_line_prefix, but at least that shouldn't be visible\nto anyone who's not entitled to know who's logged in ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Jan 2021 18:40:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, 2021-01-29 at 18:40 -0500, Tom Lane wrote:\r\n> Ah. So basically, this comes into play when you consider that some\r\n> outside-the-database entity is your \"real\" authenticated identity.\r\n> That seems reasonable when using Kerberos or the like, though it's\r\n> not real meaningful for traditional password-type authentication.\r\n\r\nRight.\r\n\r\n> So, if we store this \"real\" identity, is there any security issue\r\n> involved in exposing it to other users (via pg_stat_activity or\r\n> whatever)?\r\n\r\nI think that could be a concern for some, yeah. 
Besides being able to\r\nget information on other logged-in users, the ability to connect an\r\nauthenticated identity to a username also gives you some insight into\r\nthe pg_hba configuration.\r\n\r\n--Jacob\r\n", "msg_date": "Sat, 30 Jan 2021 00:10:59 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Sat, Jan 30, 2021 at 12:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Jacob Champion <pchampion@vmware.com> writes:\n> > On Fri, 2021-01-29 at 17:30 -0500, Tom Lane wrote:\n> >> What happens if ALTER USER RENAME is done while the session is still\n> >> alive?\n>\n> > IMO the authenticated identity should be write-once. Especially since\n> > one of my goals is to have greater auditability into events as they've\n> > actually happened. So ALTER USER RENAME should have no effect.\n>\n> > This also doesn't really affect third-party auth methods. If I'm bound\n> > as pchampion@EXAMPLE.COM and a superuser changes my username to tlane,\n> > you _definitely_ don't want to see my authenticated identity change to\n> > tlane@EXAMPLE.COM. That's not who I am.\n>\n> Ah. So basically, this comes into play when you consider that some\n> outside-the-database entity is your \"real\" authenticated identity.\n> That seems reasonable when using Kerberos or the like, though it's\n> not real meaningful for traditional password-type authentication.\n\nI think the usecases where it's relevant is a relatively close match\nto the usecases where we support user mapping in pg_ident.conf. 
There\nis a small exception in the ldap search+bind since it's a two-step\noperation and the interesting part would be in the mid-step, but I'm\nnot sure there is any other case than those where it adds a lot of\nvalue.\n\n\n> I'd misunderstood your point before.\n>\n> So, if we store this \"real\" identity, is there any security issue\n> involved in exposing it to other users (via pg_stat_activity or\n> whatever)?\n\nI'd say it might. It might for example reveal where in a hierarchical\nauthentication setup your \"real identity\" lives. I think it'd at least\nhave to be limited to superusers.\n\n\n> I remain concerned about the cost and inconvenience of exposing\n> it via log_line_prefix, but at least that shouldn't be visible\n> to anyone who's not entitled to know who's logged in ...\n\nWhat if we logged it as part of log_connections=on, but only there and\nonly once? It could still be traced through the rest of that session's\nlogging using the fields identifying the session, and we'd only end up\nlogging it once.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sun, 31 Jan 2021 12:15:52 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Sat, Jan 30, 2021 at 12:21 AM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Fri, 2021-01-29 at 17:01 -0500, Stephen Frost wrote:\n> > > - for LDAP, the bind DN is discarded entirely;\n> >\n> > We don't support pg_ident.conf-style entries for LDAP, meaning that the\n> > user provided has to match what we check, so I'm not sure what would be\n> > improved with this change..?\n>\n> For simple binds, this gives you almost nothing. 
For bind+search,\n> logging the actual bind DN is still important, in my opinion, since the\n> mechanism for determining it is more opaque (and may change over time).\n\nYeah, that's definitely a piece of information that can be hard to get at today.\n\n\n> (There's also the fact that I think pg_ident mapping for LDAP would be\n> just as useful as it is for GSS or certs. That's for a different\n> conversation.)\n\nSpecifically for search+bind, I would assume?\n\n\n> > I'm also just generally not thrilled with\n> > putting much effort into LDAP as it's a demonstrably insecure\n> > authentication mechanism.\n>\n> Because Postgres has to proxy the password? Or is there something else?\n\nStephen is on a bit of a crusade against ldap :) Mostly for good\nreasons of course. A large amount of those who choose ldap also have a\nkerberos system (because, say, active directory) and they pick ldap\nonly because they think it's good, not because it is...\n\nBut yes, I think the enforced cleartext password proxying is at the\ncore of the problem. LDAP also encourages the idea of centralized\npassword-reuse, which is not exactly a great thing for security.\n\nThat said, I don't think either of those are reasons not to improve on\nLDAP. It can certainly be a reason for somebody not to want to spend\ntheir own time on it, but there's no reason it should prevent\nimprovements.\n\n\n> > > I propose that every auth method should store the string it uses to\n> > > identify a user -- what I'll call an \"authenticated identity\" -- into\n> > > one central location in Port, after authentication succeeds but before\n> > > any pg_ident authorization occurs. This field can then be exposed in\n> > > log_line_prefix. (It could additionally be exposed through a catalog\n> > > table or SQL function, if that were deemed useful.) 
This would let a\n> > > DBA more easily audit user activity when using more complicated\n> > > pg_ident setups.\n> >\n> > This seems like it would be good to include the CSV format log files\n> > also.\n>\n> Agreed in principle... Is the CSV format configurable? Forcing it into\n> CSV logs by default seems like it'd be a hard sell, especially for\n> people not using pg_ident.\n\nFor CSV, all columns are always included, and that's a feature -- it\nmakes it predictable.\n\nTo make it optional it would have to be a configuration parameter that\nturns the field into an empty one, but it should still be there.\n\n\n> > For some auth methods, eg: GSS, we've recently added information into\n> > the authentication method which logs what the authenticated identity\n> > was. The advantage with that approach is that it avoids bloating the\n> > log by only logging that information once upon connection rather than\n> > on every log line... I wonder if we should be focusing on a similar\n> > approach for other pg_ident.conf use-cases instead of having it via\n> > log_line_prefix, as the latter means we'd be logging the same value over\n> > and over again on every log line.\n> >\n> > As long as the identity can be easily logged and reviewed by DBAs, I'm\n> > happy.\n\nYeah, per my previous mail, I think this is a better way - make it\npart of log_connections. 
But it would be good to find a way that we\ncan log it the same way for all of them -- rather than slightly\ndifferent ways depending on authentication method.\n\nWith that I think it would also be useful to have it available in the\nsystem as well -- either as a column in pg_stat_activity or maybe just\nas a function like pg_get_authenticated_identity() since it might be\nsomething that's interesting to a smallish subset of users (but very\ninteresting to those).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sun, 31 Jan 2021 12:27:47 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, 29 Jan 2021 at 18:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ah. So basically, this comes into play when you consider that some\n> outside-the-database entity is your \"real\" authenticated identity.\n> That seems reasonable when using Kerberos or the like, though it's\n> not real meaningful for traditional password-type authentication.\n> I'd misunderstood your point before.\n\nI wonder if there isn't room to handle this the other way around. To\nconfigure Postgres to not need a CREATE ROLE for every role but\ndelegate the user management to the external authentication service.\n\nSo Postgres would consider the actual role to be the one kerberos said\nit was even if that role didn't exist in pg_role. 
Presumably you would\nwant to delegate to a corresponding authorization system as well so if\nthe role was absent from pg_role (or more likely fit some pattern)\nPostgres would ignore pg_role and consult the authorization system\nconfigured like AD or whatever people use with Kerberos these days.\n\n\n-- \ngreg\n\n\n", "msg_date": "Sun, 31 Jan 2021 10:17:33 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Sat, Jan 30, 2021 at 12:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I remain concerned about the cost and inconvenience of exposing\n>> it via log_line_prefix, but at least that shouldn't be visible\n>> to anyone who's not entitled to know who's logged in ...\n\n> What if we logged it as part of log_connection=on, but only there and\n> only once? It could still be traced through the rest of that sessions\n> logging using the fields identifying the session, and we'd only end up\n> logging it once.\n\nI'm certainly fine with including this info in the log_connection output.\nPerhaps it'd also be good to have a superuser-only column in\npg_stat_activity, or some other restricted way to get the info from an\nexisting session. I doubt we really want a log_line_prefix option.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 31 Jan 2021 10:49:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n> I wonder if there isn't room to handle this the other way around. 
To\n> configure Postgres to not need a CREATE ROLE for every role but\n> delegate the user management to the external authentication service.\n\n> So Postgres would consider the actual role to be the one kerberos said\n> it was even if that role didn't exist in pg_role. Presumably you would\n> want to delegate to a corresponding authorization system as well so if\n> the role was absent from pg_role (or more likely fit some pattern)\n> Postgres would ignore pg_role and consult the authorization system\n> configured like AD or whatever people use with Kerberos these days.\n\nThis doesn't sound particularly workable: how would you manage\ninside-the-database permissions? Kerberos isn't going to know\nwhat \"view foo\" is, let alone know whether you should be allowed\nto read or write it. So ISTM there has to be a role to hold\nthose permissions. Certainly, you could allow multiple external\nidentities to share a role ... but that works today.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 31 Jan 2021 10:53:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Sat, Jan 30, 2021 at 12:21 AM Jacob Champion <pchampion@vmware.com> wrote:\n> > > I'm also just generally not thrilled with\n> > > putting much effort into LDAP as it's a demonstrably insecure\n> > > authentication mechanism.\n> >\n> > Because Postgres has to proxy the password? Or is there something else?\n\nYes.\n\n> Stephen is on a bit of a crusade against ldap :) Mostly for good\n> reasons of course. 
A large amount of those who choose ldap also have a\n> kerberos system (because, say, active directory) and they pick ldap\n> only because they think it's good, not because it is...\n\nThis is certainly one area of frustration, but even if Kerberos isn't\navailable, it doesn't make it a good idea to use LDAP.\n\n> But yes, I think the enforced cleartext password proxying is at the\n> core of the problem. LDAP also encourages the idea of centralized\n> password-reuse, which is not exactly a great thing for security.\n\nRight- passing around a user's password in the clear (or even through an\nencrypted tunnel) has been strongly discouraged for a very long time,\nfor very good reason. LDAP does double-down on that by being a\ncentralized password, meaning that someone's entire identity (for all\nthe services that share that LDAP system, at least) are compromised if\nany one system in the environment is.\n\nIdeally, we'd have a 'PasswordAuthentication' option which would\ndisallow cleartext passwords, as has been discussed elsewhere, which\nwould make things like ldap and pam auth methods disallowed.\n\n> That said, I don't think either of those are reasons not to improve on\n> LDAP. It can certainly be a reason for somebody not to want to spend\n> their own time on it, but there's no reason it should prevent\n> improvements.\n\nI realize that this isn't a popular opinion, but I'd much rather we\nactively move in the direction of deprecating auth methods which use\ncleartext passwords. 
The one auth method we have that works that way\nand isn't terrible is radius, though it also isn't great since the pin\ndoesn't change and would be compromised, not to mention that it likely\ndepends on the specific system as to if an attacker might be able to use\nthe exact same code provided to log into other systems if done fast\nenough.\n\n> > > > I propose that every auth method should store the string it uses to\n> > > > identify a user -- what I'll call an \"authenticated identity\" -- into\n> > > > one central location in Port, after authentication succeeds but before\n> > > > any pg_ident authorization occurs. This field can then be exposed in\n> > > > log_line_prefix. (It could additionally be exposed through a catalog\n> > > > table or SQL function, if that were deemed useful.) This would let a\n> > > > DBA more easily audit user activity when using more complicated\n> > > > pg_ident setups.\n> > >\n> > > This seems like it would be good to include the CSV format log files\n> > > also.\n> >\n> > Agreed in principle... Is the CSV format configurable? Forcing it into\n> > CSV logs by default seems like it'd be a hard sell, especially for\n> > people not using pg_ident.\n> \n> For CVS, all columns are always included, and that's a feature -- it\n> makes it predictable.\n> \n> To make it optional it would have to be a configuration parameter that\n> turns the field into an empty one. but it should still be there.\n\nYeah, we've been around this before and, as I recall anyway, there was\nactually a prior patch proposed to add this information to the CSV log.\nThere is the question about if it's valuable enough to repeat on every\nline or not. 
These days, I think I lean in the same direction as the\nmajority on this thread that it's sufficient to log as part of the\nconnection authorized message.\n\n> > > For some auth methods, eg: GSS, we've recently added information into\n> > > the authentication method which logs what the authenticated identity\n> > > was. The advantage with that approach is that it avoids bloating the\n> > > log by only logging that information once upon connection rather than\n> > > on every log line... I wonder if we should be focusing on a similar\n> > > approach for other pg_ident.conf use-cases instead of having it via\n> > > log_line_prefix, as the latter means we'd be logging the same value over\n> > > and over again on every log line.\n> >\n> > As long as the identity can be easily logged and reviewed by DBAs, I'm\n> > happy.\n> \n> Yeah, per my previous mail, I think this is a better way - make it\n> part of log_connections. But it would be good to find a way that we\n> can log it the same way for all of them -- rather than slightly\n> different ways depending on authentication method.\n\n+1.\n\n> With that I think it would also be useful to have it available in the\n> system as well -- either as a column in pg_stat_activity or maybe just\n> as a function like pg_get_authenticated_identity() since it might be\n> something that's interesting to a smallish subset of users (but very\n> interesting to those).\n\nWe've been trending in the direction of having separate functions/views\nfor the different types of auth, as the specific information you'd want\nvaries (SSL has a different set than GSS, for example). Maybe it makes\nsense to have the one string that's used to match against in pg_ident\nincluded in pg_stat_activity also but I'm not completely sure- after\nall, there's a reason we have the separate views. 
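For reference, the existing per-auth-type views can already be joined against pg_stat_activity by pid; a rough sketch of the kind of audit query that enables, assuming a v12-or-later server where pg_stat_gssapi and pg_stat_ssl exist:

```sql
-- Sketch only: pair each client backend with whatever auth-specific
-- identity the existing per-auth-type views expose (NULL where the
-- method wasn't used for that session).
SELECT a.pid,
       a.usename,                      -- role the session is running as
       g.principal  AS gss_principal,  -- Kerberos principal, if GSSAPI was used
       s.client_dn  AS ssl_client_dn   -- client certificate DN, if one was used
FROM pg_stat_activity a
LEFT JOIN pg_stat_gssapi g ON g.pid = a.pid
LEFT JOIN pg_stat_ssl    s ON s.pid = a.pid
WHERE a.backend_type = 'client backend';
```

A single authenticated-identity field would collapse those method-specific columns into one value, regardless of how the session authenticated.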
Also, if we do add\nit, I would think we'd have it under the same check as the other\nsensitive pg_stat_activity fields and not be superuser-only.\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Feb 2021 11:49:05 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Greg Stark <stark@mit.edu> writes:\n> > I wonder if there isn't room to handle this the other way around. To\n> > configure Postgres to not need a CREATE ROLE for every role but\n> > delegate the user management to the external authentication service.\n> \n> > So Postgres would consider the actual role to be the one kerberos said\n> > it was even if that role didn't exist in pg_role. Presumably you would\n> > want to delegate to a corresponding authorization system as well so if\n> > the role was absent from pg_role (or more likely fit some pattern)\n> > Postgres would ignore pg_role and consult the authorization system\n> > configured like AD or whatever people use with Kerberos these days.\n> \n> This doesn't sound particularly workable: how would you manage\n> inside-the-database permissions? Kerberos isn't going to know\n> what \"view foo\" is, let alone know whether you should be allowed\n> to read or write it. So ISTM there has to be a role to hold\n> those permissions. Certainly, you could allow multiple external\n> identities to share a role ... but that works today.\n\nAgreed- we would need something in the database to tie it to and I don't\nsee it making much sense to try to invent something else for that when\nthat's what roles are. 
What's been discussed before and would certainly\nbe nice, however, would be a way to have roles automatically created.\nThere's pg_ldap_sync for that today but it'd be nice to have something\nbuilt-in and which happens at connection/authentication time, or maybe a\nbackground worker that connects to an ldap server and listens for\nchanges and creates appropriate roles when they're created. Considering\nwe've got the LDAP code already, that'd be a really nice capability.\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Feb 2021 12:06:05 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> This doesn't sound particularly workable: how would you manage\n>> inside-the-database permissions? Kerberos isn't going to know\n>> what \"view foo\" is, let alone know whether you should be allowed\n>> to read or write it. So ISTM there has to be a role to hold\n>> those permissions. Certainly, you could allow multiple external\n>> identities to share a role ... but that works today.\n\n> Agreed- we would need something in the database to tie it to and I don't\n> see it making much sense to try to invent something else for that when\n> that's what roles are. What's been discussed before and would certainly\n> be nice, however, would be a way to have roles automatically created.\n> There's pg_ldap_sync for that today but it'd be nice to have something\n> built-in and which happens at connection/authentication time, or maybe a\n> background worker that connects to an ldap server and listens for\n> changes and creates appropriate roles when they're created. 
Considering\n> we've got the LDAP code already, that'd be a really nice capability.\n\nThat's still got the same issue though: where does the role get any\npermissions from?\n\nI suppose you could say \"allow auto-creation of new roles and make them\nmembers of group X\", where X holds the permissions that \"everybody\"\nshould have. But I'm not sure how much that buys compared to just\nletting everyone log in as X.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Feb 2021 12:32:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> This doesn't sound particularly workable: how would you manage\n> >> inside-the-database permissions? Kerberos isn't going to know\n> >> what \"view foo\" is, let alone know whether you should be allowed\n> >> to read or write it. So ISTM there has to be a role to hold\n> >> those permissions. Certainly, you could allow multiple external\n> >> identities to share a role ... but that works today.\n> \n> > Agreed- we would need something in the database to tie it to and I don't\n> > see it making much sense to try to invent something else for that when\n> > that's what roles are. What's been discussed before and would certainly\n> > be nice, however, would be a way to have roles automatically created.\n> > There's pg_ldap_sync for that today but it'd be nice to have something\n> > built-in and which happens at connection/authentication time, or maybe a\n> > background worker that connects to an ldap server and listens for\n> > changes and creates appropriate roles when they're created. 
Considering\n> > we've got the LDAP code already, that'd be a really nice capability.\n> \n> That's still got the same issue though: where does the role get any\n> permissions from?\n> \n> I suppose you could say \"allow auto-creation of new roles and make them\n> members of group X\", where X holds the permissions that \"everybody\"\n> should have. But I'm not sure how much that buys compared to just\n> letting everyone log in as X.\n\nRight, pg_ldap_sync already supports making new roles a member of\nanother role in PG such as a group role, we'd want to do something\nsimilar. Also- once the role exists, then permissions could be assigned\ndirectly as well, of course, which would be the advantage of a\nbackground worker that's keeping the set of roles in sync, as the role\nwould be created at nearly the same time in both the authentication\nsystem itself (eg: AD) and in PG. That kind of integration exists in\nother products and would go a long way to making PG easier to use and\nadminister.\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Feb 2021 12:43:59 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, Feb 1, 2021 at 6:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> This doesn't sound particularly workable: how would you manage\n> >> inside-the-database permissions? Kerberos isn't going to know\n> >> what \"view foo\" is, let alone know whether you should be allowed\n> >> to read or write it. So ISTM there has to be a role to hold\n> >> those permissions. Certainly, you could allow multiple external\n> >> identities to share a role ... 
but that works today.\n>\n> > Agreed- we would need something in the database to tie it to and I don't\n> > see it making much sense to try to invent something else for that when\n> > that's what roles are. What's been discussed before and would certainly\n> > be nice, however, would be a way to have roles automatically created.\n> > There's pg_ldap_sync for that today but it'd be nice to have something\n> > built-in and which happens at connection/authentication time, or maybe a\n> > background worker that connects to an ldap server and listens for\n> > changes and creates appropriate roles when they're created. Considering\n> > we've got the LDAP code already, that'd be a really nice capability.\n>\n> That's still got the same issue though: where does the role get any\n> permissions from?\n>\n> I suppose you could say \"allow auto-creation of new roles and make them\n> members of group X\", where X holds the permissions that \"everybody\"\n> should have. But I'm not sure how much that buys compared to just\n> letting everyone log in as X.\n\nWhat people would *really* want I think is \"allow auto-creation of new\nroles, and then look up which other roles they should be members of\nusing ldap\" (or \"using this script over here\" for a more flexible\napproach). Which is of course a whole different thing to do in the\nprocess of authentication.\n\nThe main thing you'd gain by auto-creating users rather than just\nletting them log in is the ability to know exactly which user did\nsomething, and view who it really is through pg_stat_activity. Adding\nthe \"original auth id\" as a field or available method would provide\nthat information in the mapped user case -- making the difference even\nsmaller. 
It's really the auto-membership that's the killer feature of\nthat one, I think.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 1 Feb 2021 18:44:18 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Sun, 2021-01-31 at 12:27 +0100, Magnus Hagander wrote:\r\n> > (There's also the fact that I think pg_ident mapping for LDAP would be\r\n> > just as useful as it is for GSS or certs. That's for a different\r\n> > conversation.)\r\n> \r\n> Specifically for search+bind, I would assume?\r\n\r\nEven for the simple bind case, I think it'd be useful to be able to\r\nperform a pg_ident mapping of\r\n\r\n ldapmap /.* ldapuser\r\n\r\nso that anyone who is able to authenticate against the LDAP server is\r\nallowed to assume the ldapuser role. (For this to work, you'd need to\r\nbe able to specify your LDAP username as a connection option, similar\r\nto how you can specify a client certificate, so that you could set\r\nPGUSER=ldapuser.)\r\n\r\nBut again, that's orthogonal to the current discussion.\r\n\r\n> With that I think it would also be useful to have it available in the\r\n> system as well -- either as a column in pg_stat_activity or maybe just\r\n> as a function like pg_get_authenticated_identity() since it might be\r\n> something that's interesting to a smallish subset of users (but very\r\n> interesting to those).\r\n\r\nAgreed, it would slot in nicely with the other per-backend stats functions.\r\n--Jacob\r\n", "msg_date": "Mon, 1 Feb 2021 21:36:34 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, 2021-02-01 at 11:49 -0500, Stephen Frost wrote:\r\n> * Magnus Hagander (magnus@hagander.net) wrote:\r\n> > But 
yes, I think the enforced cleartext password proxying is at the\r\n> > core of the problem. LDAP also encourages the idea of centralized\r\n> > password-reuse, which is not exactly a great thing for security.\r\n> \r\n> Right- passing around a user's password in the clear (or even through an\r\n> encrypted tunnel) has been strongly discouraged for a very long time,\r\n> for very good reason. LDAP does double-down on that by being a\r\n> centralized password, meaning that someone's entire identity (for all\r\n> the services that share that LDAP system, at least) are compromised if\r\n> any one system in the environment is.\r\n\r\nSure. I don't disagree with anything you've said in that paragraph, but\r\nas someone who's implementing solutions for other people who are\r\nactually deploying, I don't have a lot of control over whether a\r\ncustomer's IT department wants to use LDAP or not. And I'm not holding\r\nmy breath for LDAP servers to start implementing federated identity,\r\nthough that would be nice.\r\n\r\n> Also, if we do add\r\n> it, I would think we'd have it under the same check as the other\r\n> sensitive pg_stat_activity fields and not be superuser-only.\r\n\r\nJust the standard HAS_PGSTAT_PERMISSIONS(), then?\r\n\r\nTo double-check -- since giving this ability to the pg_read_all_stats\r\nrole would expand its scope -- could that be dangerous for anyone?\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 1 Feb 2021 21:50:54 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Mon, 2021-02-01 at 11:49 -0500, Stephen Frost wrote:\n> > * Magnus Hagander (magnus@hagander.net) wrote:\n> > > But yes, I think the enforced cleartext password proxying is at the\n> > > core of the problem. 
LDAP also encourages the idea of centralized\n> > > password-reuse, which is not exactly a great thing for security.\n> > \n> > Right- passing around a user's password in the clear (or even through an\n> > encrypted tunnel) has been strongly discouraged for a very long time,\n> > for very good reason. LDAP does double-down on that by being a\n> > centralized password, meaning that someone's entire identity (for all\n> > the services that share that LDAP system, at least) are compromised if\n> > any one system in the environment is.\n> \n> Sure. I don't disagree with anything you've said in that paragraph, but\n> as someone who's implementing solutions for other people who are\n> actually deploying, I don't have a lot of control over whether a\n> customer's IT department wants to use LDAP or not. And I'm not holding\n> my breath for LDAP servers to start implementing federated identity,\n> though that would be nice.\n\nNot sure exactly what you're referring to here but AD already provides\nKerberos with cross-domain trusts (aka forests). 
The future is here..?\n:)\n\n> > Also, if we do add\n> > it, I would think we'd have it under the same check as the other\n> > sensitive pg_stat_activity fields and not be superuser-only.\n> \n> Just the standard HAS_PGSTAT_PERMISSIONS(), then?\n> \n> To double-check -- since giving this ability to the pg_read_all_stats\n> role would expand its scope -- could that be dangerous for anyone?\n\nI don't agree that this really expands its scope- in fact, you'll see\nthat the GSSAPI and SSL user authentication information is already\nallowed under HAS_PGSTAT_PERMISSIONS().\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Feb 2021 17:01:26 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, Feb 1, 2021 at 10:36 PM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Sun, 2021-01-31 at 12:27 +0100, Magnus Hagander wrote:\n> > > (There's also the fact that I think pg_ident mapping for LDAP would be\n> > > just as useful as it is for GSS or certs. That's for a different\n> > > conversation.)\n> >\n> > Specifically for search+bind, I would assume?\n>\n> Even for the simple bind case, I think it'd be useful to be able to\n> perform a pg_ident mapping of\n>\n> ldapmap /.* ldapuser\n>\n> so that anyone who is able to authenticate against the LDAP server is\n> allowed to assume the ldapuser role. (For this to work, you'd need to\n> be able to specify your LDAP username as a connection option, similar\n> to how you can specify a client certificate, so that you could set\n> PGUSER=ldapuser.)\n>\n> But again, that's orthogonal to the current discussion.\n\nRight. I guess that's what I mean -- *just* adding support for user\nmapping wouldn't be helpful. You'd have to change how the actual\nauthentication is done. 
The way that it's done now, mapping makes no\nsense.\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 1 Feb 2021 23:15:44 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, 2021-02-01 at 18:44 +0100, Magnus Hagander wrote:\r\n> What people would *really* want I think is \"alow auto-creation of new\r\n> roles, and then look up which other roles they should be members of\r\n> using ldap\" (or \"using this script over here\" for a more flexible\r\n> approach). Which is of course a whole different thing to do in the\r\n> process of authentication.\r\n\r\nYep. I think there are at least three separate things:\r\n\r\n1) third-party authentication (\"tell me who this user is\"), which I\r\nthink Postgres currently has a fairly good handle on;\r\n\r\n2) third-party authorization (\"tell me what roles this user can\r\nassume\"), which Postgres doesn't do, unless you have a script\r\nautomatically update pg_ident -- and even then you can't do it for\r\nevery authentication type; and\r\n\r\n3) third-party role administration (\"tell me what roles should exist in\r\nthe database, and what permissions they have\"), which currently exists\r\nin a limited handful of third-party tools.\r\n\r\nMany users will want all three of these questions to be answered by the\r\nsame system, which is fine, but for more advanced use cases I think\r\nit'd be really useful if you could answer them fully independently.\r\n\r\nFor really gigantic deployments, the overhead of hundreds of Postgres\r\ninstances randomly pinging a central server just to see if there have\r\nbeen any new users can be a concern. Having a solid system for\r\nauthorization could potentially decrease the need for a role auto-\r\ncreation system, and reduce the number of moving parts. 
If you have a\r\nsmall number of core roles (relative to the number of users), it might\r\nnot be as important to constantly keep role lists up to date, so long\r\nas the central authority can tell you which of your existing roles a\r\nuser is authorized to become.\r\n\r\n> The main thing you'd gain by auto-creating users rather than just\r\n> letting them log in is the ability to know exactly which user did\r\n> something, and view who it really is through pg_stat_activity. Adding\r\n> the \"original auth id\" as a field or available method would provide\r\n> that information in the mapped user case -- making the difference even\r\n> smaller. It's really the auto-membership that's the killer feature of\r\n> that one, I think.\r\n\r\nAgreed. As long as it's possible for multiple user identities to assume\r\nthe same role, storing the original authenticated identity is still\r\nimportant, regardless of how you administer the roles themselves.\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 1 Feb 2021 22:22:05 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, 2021-02-01 at 17:01 -0500, Stephen Frost wrote:\r\n> * Jacob Champion (pchampion@vmware.com) wrote:\r\n> > And I'm not holding\r\n> > my breath for LDAP servers to start implementing federated identity,\r\n> > though that would be nice.\r\n> \r\n> Not sure exactly what you're referring to here but AD already provides\r\n> Kerberos with cross-domain trusts (aka forests). The future is here..?\r\n> :)\r\n\r\nIf the end user is actually using LDAP-on-top-of-AD, and comfortable\r\nadministering the Kerberos-related pieces of AD so that their *nix\r\nservers and users can speak it instead, then sure. But I continue to\r\nhear about customers who don't fit into that mold. 
:D Enough that I\r\nhave to keep an eye on the \"pure\" LDAP side of things, at least.\r\n\r\n> > To double-check -- since giving this ability to the pg_read_all_stats\r\n> > role would expand its scope -- could that be dangerous for anyone?\r\n> \r\n> I don't agree that this really expands its scope- in fact, you'll see\r\n> that the GSSAPI and SSL user authentication information is already\r\n> allowed under HAS_PGSTAT_PERMISSIONS().\r\n\r\nAh, so they are. :) I think that's the way to go, then.\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 1 Feb 2021 22:40:18 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Mon, 2021-02-01 at 17:01 -0500, Stephen Frost wrote:\n> > * Jacob Champion (pchampion@vmware.com) wrote:\n> > > And I'm not holding\n> > > my breath for LDAP servers to start implementing federated identity,\n> > > though that would be nice.\n> > \n> > Not sure exactly what you're referring to here but AD already provides\n> > Kerberos with cross-domain trusts (aka forests). The future is here..?\n> > :)\n> \n> If the end user is actually using LDAP-on-top-of-AD, and comfortable\n> administering the Kerberos-related pieces of AD so that their *nix\n> servers and users can speak it instead, then sure. But I continue to\n> hear about customers who don't fit into that mold. 
:D Enough that I\n> have to keep an eye on the \"pure\" LDAP side of things, at least.\n\nI suppose it's likely that I'll continue to run into people who are\nhorrified to learn that they've been using pass-the-password auth thanks\nto using ldap.\n\n> > > To double-check -- since giving this ability to the pg_read_all_stats\n> > > role would expand its scope -- could that be dangerous for anyone?\n> > \n> > I don't agree that this really expands its scope- in fact, you'll see\n> > that the GSSAPI and SSL user authentication information is already\n> > allowed under HAS_PGSTAT_PERMISSIONS().\n> \n> Ah, so they are. :) I think that's the way to go, then.\n\nOk.. but what's 'go' mean here? We already have views and such for GSS\nand SSL, is the idea to add another view for LDAP and add in columns\nthat are returned by pg_stat_get_activity() which are then pulled out by\nthat view? Or did you have something else in mind?\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Feb 2021 18:01:46 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, 2021-02-01 at 18:01 -0500, Stephen Frost wrote:\r\n> Ok.. but what's 'go' mean here? We already have views and such for GSS\r\n> and SSL, is the idea to add another view for LDAP and add in columns\r\n> that are returned by pg_stat_get_activity() which are then pulled out by\r\n> that view? Or did you have something else in mind?\r\n\r\nMagnus suggested a function like pg_get_authenticated_identity(), which\r\nis what I was thinking of when I said that. I'm not too interested in\r\nan LDAP-specific view, and I don't think anyone so far has asked for\r\nthat.\r\n\r\nMy goal is to get this one single point of reference, for all of the\r\nauth backends. 
The LDAP mapping conversation is separate.\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 1 Feb 2021 23:08:39 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Mon, 2021-02-01 at 18:01 -0500, Stephen Frost wrote:\n> > Ok.. but what's 'go' mean here? We already have views and such for GSS\n> > and SSL, is the idea to add another view for LDAP and add in columns\n> > that are returned by pg_stat_get_activity() which are then pulled out by\n> > that view? Or did you have something else in mind?\n> \n> Magnus suggested a function like pg_get_authenticated_identity(), which\n> is what I was thinking of when I said that. I'm not too interested in\n> an LDAP-specific view, and I don't think anyone so far has asked for\n> that.\n> \n> My goal is to get this one single point of reference, for all of the\n> auth backends. The LDAP mapping conversation is separate.\n\nPresumably this would be the DN for SSL then..? Not just the CN? How\nwould the issuer DN be included? And the serial?\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Feb 2021 18:40:13 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, 2021-02-01 at 18:40 -0500, Stephen Frost wrote:\r\n> * Jacob Champion (pchampion@vmware.com) wrote:\r\n> > My goal is to get this one single point of reference, for all of the\r\n> > auth backends. The LDAP mapping conversation is separate.\r\n> \r\n> Presumably this would be the DN for SSL then..? Not just the CN?\r\n\r\nCorrect.\r\n\r\n> How would the issuer DN be included? And the serial?\r\n\r\nIn the current proposal, they're not. 
Seems like only the Subject\r\nshould be considered when determining the \"identity of the user\" --\r\nknowing the issuer or the certificate fingerprint might be useful in\r\ngeneral, and perhaps they should be logged somewhere, but they're not\r\npart of the user's identity.\r\n\r\nIf there were a feature that considered the issuer or serial number\r\nwhen making role mappings, I think it'd be easier to make a case for\r\nthat. As of right now I don't think they should be incorporated into\r\nthis *particular* identifier.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 2 Feb 2021 00:16:47 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Thu, 2021-01-28 at 18:22 +0000, Jacob Champion wrote:\r\n> = Proposal =\r\n> \r\n> I propose that every auth method should store the string it uses to\r\n> identify a user -- what I'll call an \"authenticated identity\" -- into\r\n> one central location in Port, after authentication succeeds but before\r\n> any pg_ident authorization occurs.\r\n\r\nThanks everyone for all of the feedback! Here's my summary of the\r\nconversation so far:\r\n\r\n- The idea of storing the user's original identity consistently across\r\nall auth methods seemed to be positively received.\r\n\r\n- Exposing this identity through log_line_prefix was not as well-\r\nreceived, landing somewhere between \"meh\" and \"no thanks\". The main\r\nconcern was log bloat/expense.\r\n\r\n- Exposing it through the CSV log got the same reception: if we expose\r\nit through log_line_prefix, we should expose it through CSV, but no one\r\nseemed particularly excited about either.\r\n\r\n- The idea of logging this information once per session, as part of\r\nlog_connection, got a more positive response. 
That way the information\r\n
can still be obtained, but it doesn't clutter every log line.\r\n\r\n
- There was also some interest in exposing this through the statistics\r\n
collector, either as a superuser-only feature or via the\r\n
pg_read_all_stats role.\r\n\r\n
- There was some discussion around *which* string to choose as the\r\n
identifier for more complicated cases, such as TLS client certificates.\r\n\r\n
- Other improvements around third-party authorization and role\r\n
management were discussed, including the ability to auto-create\r\n
nonexistent roles, to sync role definitions as a first-party feature,\r\n
and to query an external system for role authorization.\r\n\r\n
(Let me know if there's something else I've missed.)\r\n\r\n
== My Plans ==\r\n\r\n
Given the feedback above, I'll continue to flesh out the PoC patch,\r\n
focusing on 1) storing the identity in a single place for all auth\r\n
methods and 2) exposing it consistently in the logs as part of\r\n
log_connections. I'll drop the log_line_prefix format specifier from\r\n
the patch and see what that does to the testing side of things. I also\r\n
plan to write a follow-up patch to add the authenticated identity to\r\n
the statistics collector, with pg_get_authenticated_identity() to\r\n
retrieve it.\r\n\r\n
I'm excited to see where the third-party authz and role management\r\n
conversations go, but I won't focus on those for my initial patchset. 
I\r\nthink this patch has use even if those ideas are implemented too.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 2 Feb 2021 22:22:49 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Tue, 2021-02-02 at 22:22 +0000, Jacob Champion wrote:\r\n> Given the feedback above, I'll continue to flesh out the PoC patch,\r\n> focusing on 1) storing the identity in a single place for all auth\r\n> methods and 2) exposing it consistently in the logs as part of\r\n> log_connections.\r\n\r\nAttached is a v1 patchset. Note that I haven't compiled or tested on\r\nWindows and BSD yet, so the SSPI and BSD auth changes are eyeballed for\r\nnow.\r\n\r\nThe first two patches are preparatory, pulled from other threads on the\r\nmailing list: 0001 comes from my Kerberos test fix thread [1], and 0002\r\nis extracted from Andrew Dunstan's patch [2] to store the subject DN\r\nfrom a client cert. 0003 has the actual implementation, which now fills\r\nin port->authn_id for all auth methods.\r\n\r\nNow that we're using log_connections instead of log_line_prefix,\r\nthere's more helpful information we can put into the log when\r\nauthentication succeeds. For now, I include the identity of the user,\r\nthe auth method in use, and the pg_hba.conf file and line number. E.g.\r\n\r\n LOG: connection received: host=[local]\r\n LOG: connection authenticated: identity=\"pchampion\" method=peer (/data/pg_hba.conf:88)\r\n LOG: connection authorized: user=admin database=postgres application_name=psql\r\n\r\nIf the overall direction seems good, then I have two questions:\r\n\r\n- Since the authenticated identity is more or less an opaque string\r\nthat may come from a third party, should I be escaping it in some way\r\nbefore it goes into the logs? 
Or is it generally accepted that log\r\nfiles can contain arbitrary blobs in unspecified encodings?\r\n\r\n- For the SSPI auth method, I pick the format of the identity string\r\nbased on the compatibility mode: \"DOMAIN\\user\" when using compat_realm,\r\nand \"user@DOMAIN\" otherwise. For Windows DBAs, is this a helpful way to\r\nvisualize the identity, or should I just stick to one format?\r\n\r\n> I also\r\n> plan to write a follow-up patch to add the authenticated identity to\r\n> the statistics collector, with pg_get_authenticated_identity() to\r\n> retrieve it.\r\n\r\nThis part turned out to be more work than I'd thought! Now I understand\r\nwhy pg_stat_ssl truncates several fields to NAMEDATALEN.\r\n\r\nHas there been any prior discussion on lifting that restriction for the\r\nstatistics collector as a whole, before I go down my own path? I can't\r\nimagine taking up another 64 bytes per connection for a field that\r\nwon't be useful for the most common use cases -- and yet it still won't\r\nbe long enough for other users...\r\n\r\nThanks,\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/fe7a46f8d46ebb074ba1572d4b5e4af72dc95420.camel%40vmware.com\r\n[2] https://www.postgresql.org/message-id/fd96ae76-a8e3-ef8e-a642-a592f5b76771%40dunslane.net", "msg_date": "Mon, 8 Feb 2021 23:35:36 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, 2021-02-08 at 23:35 +0000, Jacob Champion wrote:\r\n> Note that I haven't compiled or tested on\r\n> Windows and BSD yet, so the SSPI and BSD auth changes are eyeballed for\r\n> now.\r\n\r\nI've now tested on both.\r\n\r\n> - For the SSPI auth method, I pick the format of the identity string\r\n> based on the compatibility mode: \"DOMAIN\\user\" when using compat_realm,\r\n> and \"user@DOMAIN\" otherwise. 
For Windows DBAs, is this a helpful way to\r\n> visualize the identity, or should I just stick to one format?\r\n\r\nAfter testing on Windows, I think switching formats based on\r\ncompat_realm is a good approach. For users not on a domain, the\r\nMACHINE\\user format is probably more familiar than user@MACHINE.\r\nInversely, users on a domain probably want to see the modern \r\nuser@DOMAIN instead.\r\n\r\nv2 just updates the patchset to remove the Windows TODO and fill in the\r\npatch notes; no functional changes. The question about escaping log\r\ncontents remains.\r\n\r\n--Jacob", "msg_date": "Thu, 11 Feb 2021 20:32:45 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Thu, 2021-02-11 at 20:32 +0000, Jacob Champion wrote:\r\n> v2 just updates the patchset to remove the Windows TODO and fill in the\r\n> patch notes; no functional changes. The question about escaping log\r\n> contents remains.\r\n\r\nv3 rebases onto latest master, for SSL test conflicts.\r\n\r\nNote:\r\n- Since the 0001 patch from [1] is necessary for the new Kerberos tests\r\nin 0003, I won't make a separate commitfest entry for it.\r\n- 0002 would be subsumed by [2] if it's committed.\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/flat/fe7a46f8d46ebb074ba1572d4b5e4af72dc95420.camel%40vmware.com\r\n[2] https://www.postgresql.org/message-id/flat/fd96ae76-a8e3-ef8e-a642-a592f5b76771%40dunslane.net#642757cec955d8e923025898402f9452", "msg_date": "Fri, 26 Feb 2021 19:45:41 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, Feb 26, 2021 at 8:45 PM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Thu, 2021-02-11 at 20:32 +0000, Jacob Champion wrote:\n> > v2 just updates the patchset to remove the 
Windows TODO and fill in the\n
> > patch notes; no functional changes. The question about escaping log\n
> > contents remains.\n
>\n
> v3 rebases onto latest master, for SSL test conflicts.\n
>\n
> Note:\n
> - Since the 0001 patch from [1] is necessary for the new Kerberos tests\n
> in 0003, I won't make a separate commitfest entry for it.\n
> - 0002 would be subsumed by [2] if it's committed.\n\n
It looks like patch 0001 has some leftover debugging code at the end?\n
Or did you intend for that to be included permanently?\n\n
As for log escaping, we report port->user_name already unescaped --\n
surely this shouldn't be a worse case than that?\n\n
I wonder if it wouldn't be better to keep the log line on the existing\n
\"connection authorized\" line, just as a separate field. I'm kind of\n
split on it though, because I guess it might make that line very long.\n
But it's also a lot more convenient to parse it on a single line than\n
across multiple lines potentially overlapping with other sessions.\n\n
With this we store the same value as the authn and as\n
port->gss->princ, and AFAICT it's only used once. Seems we could just\n
use the new field for the gssapi usage as well? Especially since that\n
usage only seems to be there in order to do the gssapi specific\n
logging of, well, the same thing.\n\n
Same goes for peer_user? In fact, if we're storing it in the Port, why\n
are we even passing it as a separate parameter to check_usermap --\n
shouldn't that one always use this same value? 
ISTM that it could be\n
quite confusing if the logged value is different from whatever we\n
apply to the user mapping?\n\n
-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sat, 6 Mar 2021 18:33:28 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Sat, 2021-03-06 at 18:33 +0100, Magnus Hagander wrote:\r\n
> It looks like patch 0001 has some leftover debugging code at the end?\r\n
> Or did you intend for that to be included permanently?\r\n\r\n
I'd intended to keep it -- it works hand-in-hand with the existing\r\n
\"current_logfiles\" log line on 219 and might keep someone from tearing\r\n
their hair out. But I can certainly remove it, if it's cluttering up\r\n
the logs too much.\r\n\r\n
> As for log escaping, we report port->user_name already unescaped --\r\n
> surely this shouldn't be a worse case than that?\r\n\r\n
Ah, that's a fair point. I'll remove the TODO.\r\n\r\n
> I wonder if it wouldn't be better to keep the log line on the existing\r\n
> \"connection authorized\" line, just as a separate field. I'm kind of\r\n
> split on it though, because I guess it might make that line very long.\r\n
> But it's also a lot more convenient to parse it on a single line than\r\n
> across multiple lines potentially overlapping with other sessions.\r\n\r\n
Authentication can succeed even if authorization fails, and it's useful\r\n
to see that in the logs. 
In most cases that looks like a failed user\r\nmapping, but there are other corner cases where we fail the connection\r\nafter a successful authentication, such as when using krb_realm.\r\nCurrently you get little to no feedback when that happens, but with a\r\nseparate log line, it's a lot easier to piece together what's happened.\r\n\r\n(In general, I feel pretty strongly that Postgres combines/conflates\r\nauthentication and authorization in too many places.)\r\n\r\n> With this we store the same value as the authn and as\r\n> port->gss->princ, and AFAICT it's only used once. Seems we could just\r\n> use the new field for the gssapi usage as well? Especially since that\r\n> usage only seems to be there in order to do the gssapi specific\r\n> logging of, well, the same thing.\r\n> \r\n> Same goes for peer_user? In fact, if we're storing it in the Port, why\r\n> are we even passing it as a separate parameter to check_usermap --\r\n> shouldn't that one always use this same value? ISTM that it could be\r\n> quite confusing if the logged value is different from whatever we\r\n> apply to the user mapping?\r\n\r\nSeems reasonable; I'll consolidate them.\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 8 Mar 2021 22:16:23 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, 2021-03-08 at 22:16 +0000, Jacob Champion wrote:\r\n> On Sat, 2021-03-06 at 18:33 +0100, Magnus Hagander wrote:\r\n> > With this we store the same value as the authn and as\r\n> > port->gss->princ, and AFAICT it's only used once. Seems we could just\r\n> > use the new field for the gssapi usage as well? Especially since that\r\n> > usage only seems to be there in order to do the gssapi specific\r\n> > logging of, well, the same thing.\r\n> > \r\n> > [...]\r\n> \r\n> Seems reasonable; I'll consolidate them.\r\n\r\nA slight hitch in the plan, for the GSS side... 
port->gss->princ is\r\nexposed by pg_stat_gssapi. I can switch this to use port->authn_id\r\neasily enough.\r\n\r\nBut it seems like the existence of a user principal for the connection\r\nis independent of whether or not you're using that principal as your\r\nidentity. For example, you might connect via a \"hostgssenc ... trust\"\r\nline in the HBA. (This would be analogous to presenting a user\r\ncertificate over TLS but not using it to authenticate to the database.)\r\nI'd argue that the principal should be available through the stats view\r\nin this case as well, just like you can see a client DN in pg_stat_ssl\r\neven if you're using trust auth.\r\n\r\nThe server doesn't currently support that -- gss->princ is only\r\npopulated in the gss auth case, as far as I can tell -- but if I remove\r\ngss->princ entirely, then it'll be that much more work for someone who\r\nwants to expose that info later. I think it should remain independent.\r\n\r\nThoughts?\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 8 Mar 2021 23:55:16 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Sat, 2021-03-06 at 18:33 +0100, Magnus Hagander wrote:\r\n> In fact, if we're storing it in the Port, why\r\n> are we even passing it as a separate parameter to check_usermap --\r\n> shouldn't that one always use this same value?\r\n\r\nAh, and now I remember why I didn't consolidate this to begin with.\r\nSeveral auth methods perform some sort of translation before checking\r\nthe usermap: cert pulls the CN out of the Subject DN, SSPI and GSS can\r\noptionally strip the realm, etc.\r\n\r\n> ISTM that it could be\r\n> quite confusing if the logged value is different from whatever we\r\n> apply to the user mapping?\r\n\r\nMaybe. 
But it's an accurate reflection of what's actually happening,\r\nand that's the goal of the patch: show enough information to be able to\r\naudit who's logging in. The certificates\r\n\r\n /OU=ACME Ltd./C=US/CN=pchampion\r\n\r\nand\r\n\r\n /OU=Postgres/C=GR/CN=pchampion\r\n\r\nare different identities, but Postgres will silently authorize them to\r\nlog in as the same user. In my opinion, hiding that information makes\r\nthings more confusing in the long term, not less.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 9 Mar 2021 00:48:20 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, 2021-03-08 at 22:16 +0000, Jacob Champion wrote:\r\n> On Sat, 2021-03-06 at 18:33 +0100, Magnus Hagander wrote:\r\n> > As for log escaping, we report port->user_name already unescaped --\r\n> > surely this shouldn't be a worse case than that?\r\n> \r\n> Ah, that's a fair point. I'll remove the TODO.\r\n\r\nv4 removes the TODO and the extra allocation for peer_user. I'll hold\r\noff on the other two suggestions pending that conversation.\r\n\r\n--Jacob", "msg_date": "Tue, 9 Mar 2021 18:03:03 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Tue, 2021-03-09 at 18:03 +0000, Jacob Champion wrote:\r\n> v4 removes the TODO and the extra allocation for peer_user. 
I'll hold\r\n> off on the other two suggestions pending that conversation.\r\n\r\nAnd v5 is rebased over this morning's SSL test changes.\r\n\r\n--Jacob", "msg_date": "Tue, 9 Mar 2021 19:10:52 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Tue, 2021-03-09 at 19:10 +0000, Jacob Champion wrote:\r\n> And v5 is rebased over this morning's SSL test changes.\r\nRebased again after the SSL test revert (this is the same as v4).\r\n\r\n--Jacob", "msg_date": "Mon, 15 Mar 2021 15:50:48 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, Mar 15, 2021 at 03:50:48PM +0000, Jacob Champion wrote:\n> \t\t# might need to retry if logging collector process is slow...\n> \t\tmy $max_attempts = 180 * 10;\n> \t\tmy $first_logfile;\n> \t\tfor (my $attempts = 0; $attempts < $max_attempts; $attempts++)\n> \t\t{\n> \t\t\t$first_logfile = slurp_file($node->data_dir . '/' . $lfname);\n> -\t\t\tlast if $first_logfile =~ m/\\Q$expect_log_msg\\E/;\n> +\n> +\t\t\t# Don't include previously matched text in the search.\n> +\t\t\t$first_logfile = substr $first_logfile, $current_log_position;\n> +\t\t\tif ($first_logfile =~ m/\\Q$expect_log_msg\\E/g)\n> +\t\t\t{\n> +\t\t\t\t$current_log_position += pos($first_logfile);\n> +\t\t\t\tlast;\n> +\t\t\t}\n> +\n> \t\t\tusleep(100_000);\n\nLooking at 0001, I am not much a fan of relying on the position of the\nmatching pattern in the log file. Instead of relying on the logging\ncollector and one single file, why not just changing the generation of\nthe logfile and rely on the output of stderr by restarting the server?\nThat means less tests, no need to wait for the logging collector to do\nits business, and it solves your problem. 
Please see the idea with\n
the patch attached. Thoughts?\n
--\n
Michael", "msg_date": "Thu, 18 Mar 2021 17:14:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Thu, Mar 18, 2021 at 05:14:24PM +0900, Michael Paquier wrote:\n
> Looking at 0001, I am not much a fan of relying on the position of the\n
> matching pattern in the log file. Instead of relying on the logging\n
> collector and one single file, why not just changing the generation of\n
> the logfile and rely on the output of stderr by restarting the server?\n
> That means less tests, no need to wait for the logging collector to do\n
> its business, and it solves your problem. Please see the idea with\n
> the patch attached. Thoughts?\n\n
While looking at 0003, I have noticed that the new kerberos tests\n
actually switch from a logic where one message pattern matches, to a\n
logic where multiple message patterns match, but I don't see a problem\n
with what I sent previously, as long as one consumes a log file once\n
and matches all the patterns once, say like the following in\n
test_access():\n
    my $first_logfile = slurp_file($node->logfile);\n\n
    # Verify specified log messages are logged in the log file.\n
    while (my $expect_log_msg = shift @expect_log_msgs)\n
    {\n
        like($first_logfile, qr/\\Q$expect_log_msg\\E/,\n
            'found expected log file content');\n
    }\n\n
    # Rotate to a new file, for any next check.\n
    $node->rotate_logfile;\n
    $node->restart; \n\n
A second solution would be a logrotate, relying on the contents of\n
current_logfiles to know what is the current file, with an extra wait\n
after $node->logrotate to check if the contents of current_logfiles\n
have changed. That's slower for me as this requires a small sleep to\n
make sure that the new log file name has changed, and I find the\n
restart solution simpler and more elegant. 
Please see the attached\nbased on HEAD for this logrotate idea.\n\nJacob, what do you think?\n--\nMichael", "msg_date": "Fri, 19 Mar 2021 17:21:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, 2021-03-19 at 17:21 +0900, Michael Paquier wrote:\r\n> On Thu, Mar 18, 2021 at 05:14:24PM +0900, Michael Paquier wrote:\r\n> > Looking at 0001, I am not much a fan of relying on the position of the\r\n> > matching pattern in the log file. Instead of relying on the logging\r\n> > collector and one single file, why not just changing the generation of\r\n> > the logfile and rely on the output of stderr by restarting the server?\r\n\r\nFor getting rid of the logging collector logic, this is definitely an\r\nimprovement. It was briefly discussed in [1] but I never got around to\r\ntrying it; thanks!\r\n\r\nOne additional improvement I would suggest, now that the rotation logic\r\nis simpler than it was in my original patch, is to rotate the logfile\r\nregardless of whether the test is checking the logs or not. (Similarly,\r\nwe can manually rotate after the block of test_query() calls.) That way\r\nit's harder to match the last test's output.\r\n\r\n> While looking at 0003, I have noticed that the new kerberos tests\r\n> actually switch from a logic where one message pattern matches, to a\r\n> logic where multiple message patterns match, but I don't see a problem\r\n> with what I sent previously, as long as one consume once a log file\r\n> and matches all the patterns once, say like the following in\r\n> test_access():\r\n\r\nThe tradeoff is that if you need to check for log message order, or for\r\nmultiple instances of overlapping patterns, you still need some sort of\r\nsearch-forward functionality. But looking over the tests, I don't see\r\nany that truly *need* that yet. 
It's nice that the current patchset\r\nenforces an \"authenticated\" line before an \"authorized\" line, but I\r\nthink it's nicer to not have the extra code.\r\n\r\nI'll incorporate this approach into the patchset. Thanks!\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/f1fd9ccaf7ffb2327bf3c06120afeadd50c1db97.camel%40vmware.com\r\n", "msg_date": "Fri, 19 Mar 2021 16:54:10 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, 2021-03-19 at 16:54 +0000, Jacob Champion wrote:\r\n> One additional improvement I would suggest, now that the rotation logic\r\n> is simpler than it was in my original patch, is to rotate the logfile\r\n> regardless of whether the test is checking the logs or not. (Similarly,\r\n> we can manually rotate after the block of test_query() calls.) That way\r\n> it's harder to match the last test's output.\r\n\r\nThe same effect can be had by moving the log rotation to the top of the\r\ntest that needs it, so I've done it that way in v7.\r\n\r\n> The tradeoff is that if you need to check for log message order, or for\r\n> multiple instances of overlapping patterns, you still need some sort of\r\n> search-forward functionality.\r\n\r\nTurns out it's easy now to have our cake and eat it too; a single if\r\nstatement can implement the same search-forward functionality that was\r\nspread across multiple places before. 
So I've done that too.\r\n\r\nMuch nicer, thank you for the suggestion!\r\n\r\n--Jacob", "msg_date": "Fri, 19 Mar 2021 18:37:05 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, Mar 19, 2021 at 06:37:05PM +0000, Jacob Champion wrote:\n> The same effect can be had by moving the log rotation to the top of the\n> test that needs it, so I've done it that way in v7.\n\nAfter thinking more about 0001, I have come up with an even simpler\nsolution that has resulted in 11e1577. That's similar to what\nPostgresNode::issues_sql_like() does. This also makes 0003 simpler\nwith its changes as this requires to change two lines in test_access.\n\n> Turns out it's easy now to have our cake and eat it too; a single if\n> statement can implement the same search-forward functionality that was\n> spread across multiple places before. So I've done that too.\n\nI have briefly looked at 0002 (0001 in the attached set), and it seems\nsane to me. I still need to look at 0003 (well, now 0002) in details,\nwhich is very sensible as one mistake would likely be a CVE-class\nbug.\n--\nMichael", "msg_date": "Mon, 22 Mar 2021 15:16:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, Mar 22, 2021 at 7:16 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Mar 19, 2021 at 06:37:05PM +0000, Jacob Champion wrote:\n> > The same effect can be had by moving the log rotation to the top of the\n> > test that needs it, so I've done it that way in v7.\n>\n> After thinking more about 0001, I have come up with an even simpler\n> solution that has resulted in 11e1577. That's similar to what\n> PostgresNode::issues_sql_like() does. 
This also makes 0003 simpler\n> with its changes as this requires to change two lines in test_access.\n\nMan that renumbering threw me off :)\n\n\n> > Turns out it's easy now to have our cake and eat it too; a single if\n> > statement can implement the same search-forward functionality that was\n> > spread across multiple places before. So I've done that too.\n>\n> I have briefly looked at 0002 (0001 in the attached set), and it seems\n> sane to me. I still need to look at 0003 (well, now 0002) in details,\n> which is very sensible as one mistake would likely be a CVE-class\n> bug.\n\nThe 0002/0001/whateveritisaftertherebase is tracked over at\nhttps://www.postgresql.org/message-id/flat/92e70110-9273-d93c-5913-0bccb6562740@dunslane.net\nisn't it? I've assumed the expectation is to have that one committed\nfrom that thread, and then rebase using that.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 22 Mar 2021 18:22:52 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, 2021-03-22 at 18:22 +0100, Magnus Hagander wrote:\r\n> On Mon, Mar 22, 2021 at 7:16 AM Michael Paquier <michael@paquier.xyz> wrote:\r\n> > \r\n> > I have briefly looked at 0002 (0001 in the attached set), and it seems\r\n> > sane to me. 
I still need to look at 0003 (well, now 0002) in details,\r\n> > which is very sensible as one mistake would likely be a CVE-class\r\n> > bug.\r\n> \r\n> The 0002/0001/whateveritisaftertherebase is tracked over at\r\n> https://www.postgresql.org/message-id/flat/92e70110-9273-d93c-5913-0bccb6562740@dunslane.net\r\n> isn't it? I've assumed the expectation is to have that one committed\r\n> from that thread, and then rebase using that.\r\n\r\nI think the primary thing that needs to be greenlit for both is the\r\nidea of using the RFC 2253/4514 format for Subject DNs.\r\n\r\nOther than that, the version here should only contain the changes\r\nnecessary for both features (that is, port->peer_dn), so there's no\r\nhard dependency between the two. It's just on me to make sure my\r\nversion is up-to-date. Which I believe it is, as of today.\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 22 Mar 2021 18:51:10 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, 2021-03-22 at 15:16 +0900, Michael Paquier wrote:\r\n> On Fri, Mar 19, 2021 at 06:37:05PM +0000, Jacob Champion wrote:\r\n> > The same effect can be had by moving the log rotation to the top of the\r\n> > test that needs it, so I've done it that way in v7.\r\n> \r\n> After thinking more about 0001, I have come up with an even simpler\r\n> solution that has resulted in 11e1577. That's similar to what\r\n> PostgresNode::issues_sql_like() does. 
This also makes 0003 simpler\r\n> with its changes as this requires to change two lines in test_access.\r\nv8's test_access lost the in-order log search from v7; I've added it\r\nback in v9. The increased resistance to entropy seems worth the few\r\nextra lines. Thoughts?\r\n\r\n--Jacob", "msg_date": "Mon, 22 Mar 2021 19:17:26 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, Mar 22, 2021 at 07:17:26PM +0000, Jacob Champion wrote:\n> v8's test_access lost the in-order log search from v7; I've added it\n> back in v9. The increased resistance to entropy seems worth the few\n> extra lines. Thoughts?\n\nI am not really sure that we need to bother about the ordering of the\nentries here, as long as we check for all of them within the same\nfragment of the log file, so I would just go down to the simplest\nsolution that I posted upthread that is enough to make sure that the\nverbosity is protected. That's what we do elsewhere, like with\ncommand_checks_all() and such.\n--\nMichael", "msg_date": "Tue, 23 Mar 2021 14:21:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, Mar 22, 2021 at 06:22:52PM +0100, Magnus Hagander wrote:\n> The 0002/0001/whateveritisaftertherebase is tracked over at\n> https://www.postgresql.org/message-id/flat/92e70110-9273-d93c-5913-0bccb6562740@dunslane.net\n> isn't it? I've assumed the expectation is to have that one committed\n> from that thread, and then rebase using that.\n\nIndependent and useful pieces could just be extracted and applied\nseparately where it makes sense. 
I am not sure if that's the case\nhere, so I'll do a patch_to_review++.\n--\nMichael", "msg_date": "Tue, 23 Mar 2021 14:24:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Tue, 2021-03-23 at 14:21 +0900, Michael Paquier wrote:\r\n> I am not really sure that we need to bother about the ordering of the\r\n> entries here, as long as we check for all of them within the same\r\n> fragment of the log file, so I would just go down to the simplest\r\n> solution that I posted upthread that is enough to make sure that the\r\n> verbosity is protected. That's what we do elsewhere, like with\r\n> command_checks_all() and such.\r\nWith low-coverage test suites, I think it's useful to allow as little\r\nstrange behavior as possible -- in this case, printing authorization\r\nbefore authentication could signal a serious bug -- but I don't feel\r\ntoo strongly about it.\r\n\r\nv10 attached, which reverts to v8 test behavior, with minor updates to\r\nthe commit message and test comment.\r\n\r\n--Jacob", "msg_date": "Wed, 24 Mar 2021 16:45:35 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Wed, Mar 24, 2021 at 04:45:35PM +0000, Jacob Champion wrote:\n> With low-coverage test suites, I think it's useful to allow as little\n> strange behavior as possible -- in this case, printing authorization\n> before authentication could signal a serious bug -- but I don't feel\n> too strongly about it.\n\nI got to look at the DN patch yesterday, so now's the turn of this\none. Nice work.\n\n+ * Sets the authenticated identity for the current user. The provided string\n+ * will be copied into the TopMemoryContext. 
The ID will be logged if\n+ * log_connections is enabled.\n[...]\n+ port->authn_id = MemoryContextStrdup(TopMemoryContext, id);\nIt may not be obvious that all the field is copied to TopMemoryContext\nbecause the Port requires that.\n\n+$node->stop('fast');\n+my $log_contents = slurp_file($log);\nLike 11e1577, let's just truncate the log files in all those tests.\n\n+ if (auth_method < 0 || USER_AUTH_LAST < auth_method)\n+ {\n+ Assert((0 <= auth_method) && (auth_method <= USER_AUTH_LAST));\nWhat's the point of having the check and the assertion? NULL does not\nreally seem like a good default here as this should never really\nhappen. Wouldn't a FATAL be actually safer?\n\n+like(\n+ $log_contents,\n+ qr/connection authenticated: identity=\"ssltestuser\"\nmethod=scram-sha-256/,\n+ \"Basic SCRAM sets the username as the authenticated identity\");\n+\n+$node->start;\nIt looks wrong to me to include in the SSL tests some checks related\nto SCRAM authentication. This should remain in 001_password.pl, as of\nsrc/test/authentication/.\n\n port->gss->princ = MemoryContextStrdup(TopMemoryContext, port->gbuf.value);\n+ set_authn_id(port, gbuf.value);\nI don't think that this position is right for GSSAPI. Shouldn't this\nbe placed at the end of pg_GSS_checkauth() and only if the status is\nOK?\n\n- ret = check_usermap(port->hba->usermap, port->user_name, peer_user, false);\n-\n- pfree(peer_user);\n+ ret = check_usermap(port->hba->usermap, port->user_name, port->authn_id, false);\nI would also put this one after checking the usermap for peer.\n\n+ /*\n+ * We have all of the information necessary to construct the authenticated\n+ * identity.\n+ */\n+ if (port->hba->compat_realm)\n+ {\n+ /* SAM-compatible format. */\n+ authn_id = psprintf(\"%s\\\\%s\", domainname, accountname);\n+ }\n+ else\n+ {\n+ /* Kerberos principal format. 
*/\n+ authn_id = psprintf(\"%s@%s\", accountname, domainname);\n+ }\n+\n+ set_authn_id(port, authn_id);\n+ pfree(authn_id);\nFor SSPI, I think that this should be moved down once we are sure that\nthere is no error and that pg_SSPI_recvauth() reports STATUS_OK to the\ncaller. There is a similar issue with CheckCertAuth(), and\nset_authn_id() is documented so as it should be called only when we\nare sure that authentication succeeded.\n\nReading through the thread, the consensus is to add the identity\ninformation with log_connections. One question I have is that if we\njust log the information with log_connections, there is no real reason\nto add this information to the Port, except the potential addition of\nsome system function, a superuser-only column in pg_stat_activity or\nto allow extensions to access this information. I am actually in\nfavor of keeping this information in the Port with those pluggability\nreasons. How do others feel about that?\n--\nMichael", "msg_date": "Thu, 25 Mar 2021 14:41:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Thu, 2021-03-25 at 14:41 +0900, Michael Paquier wrote:\r\n> I got to look at the DN patch yesterday, so now's the turn of this\r\n> one. Nice work.\r\n\r\nThanks!\r\n\r\n> + port->authn_id = MemoryContextStrdup(TopMemoryContext, id);\r\n> It may not be obvious that all the field is copied to TopMemoryContext\r\n> because the Port requires that.\r\n\r\nI've expanded the comment. (v11 attached, with incremental changes over\r\nv10 in since-v10.diff.txt.)\r\n\r\n> +$node->stop('fast');\r\n> +my $log_contents = slurp_file($log);\r\n> Like 11e1577, let's just truncate the log files in all those tests.\r\n\r\nHmm... having the full log file contents for the SSL tests has been\r\nincredibly helpful for me with the NSS work. 
I'd hate to lose them; it\r\ncan be very hard to recreate the test conditions exactly.\r\n\r\n> + if (auth_method < 0 || USER_AUTH_LAST < auth_method)\r\n> + {\r\n> + Assert((0 <= auth_method) && (auth_method <= USER_AUTH_LAST));\r\n> What's the point of having the check and the assertion? NULL does not\r\n> really seem like a good default here as this should never really\r\n> happen. Wouldn't a FATAL be actually safer?\r\n\r\nI think FATAL makes more sense. Changed, thanks.\r\n\r\n> It looks wrong to me to include in the SSL tests some checks related\r\n> to SCRAM authentication. This should remain in 001_password.pl, as of\r\n> src/test/authentication/.\r\n\r\nAgreed. Moved the bad-password SCRAM tests over, and removed the\r\nduplicates. The last SCRAM test in that file, which tests the\r\ninteraction between client certificates and SCRAM auth, remains.\r\n\r\n> port->gss->princ = MemoryContextStrdup(TopMemoryContext, port->gbuf.value);\r\n> + set_authn_id(port, gbuf.value);\r\n> I don't think that this position is right for GSSAPI. Shouldn't this\r\n> be placed at the end of pg_GSS_checkauth() and only if the status is\r\n> OK?\r\n\r\nNo, and the tests will catch you if you try. Authentication happens\r\nbefore authorization (user mapping), and can succeed independently even\r\nif authz doesn't. See below.\r\n\r\n> For SSPI, I think that this should be moved down once we are sure that\r\n> there is no error and that pg_SSPI_recvauth() reports STATUS_OK to the\r\n> caller. There is a similar issue with CheckCertAuth(), and\r\n> set_authn_id() is documented so as it should be called only when we\r\n> are sure that authentication succeeded.\r\n\r\nAuthentication *has* succeeded already; that's what the SSPI machinery\r\nhas done above. 
Likewise for CheckCertAuth, which relies on the TLS\r\nsubsystem to validate the client signature before setting the peer_cn.\r\nThe user mapping is an authorization concern: it answers the question,\r\n\"is an authenticated user allowed to use a particular Postgres user\r\nname?\"\r\n\r\nPostgres currently conflates authn and authz in many places, and in my\r\nexperience, that'll make it difficult to maintain new authorization\r\nfeatures like the ones in the wishlist upthread. This patch is only one\r\nstep towards a clearer distinction.\r\n\r\n> I am actually in\r\n> favor of keeping this information in the Port with those pluggability\r\n> reasons.\r\n\r\nThat was my intent, yeah. Getting this into the stats framework was\r\nmore than I could bite off for this first patchset, but having it\r\nstored in a central location will hopefully help people do more with\r\nit.\r\n\r\n--Jacob", "msg_date": "Thu, 25 Mar 2021 18:51:22 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Thu, Mar 25, 2021 at 06:51:22PM +0000, Jacob Champion wrote:\n> On Thu, 2021-03-25 at 14:41 +0900, Michael Paquier wrote:\n>> + port->authn_id = MemoryContextStrdup(TopMemoryContext, id);\n>> It may not be obvious that all the field is copied to TopMemoryContext\n>> because the Port requires that.\n> \n> I've expanded the comment. (v11 attached, with incremental changes over\n> v10 in since-v10.diff.txt.)\n\nThat's the addition of \"to match the lifetime of the Port\". Looks\ngood.\n\n>> +$node->stop('fast');\n>> +my $log_contents = slurp_file($log);\n>> Like 11e1577, let's just truncate the log files in all those tests.\n> \n> Hmm... having the full log file contents for the SSL tests has been\n> incredibly helpful for me with the NSS work. 
I'd hate to lose them; it\n> can be very hard to recreate the test conditions exactly.\n\nDoes it really matter to have the full contents of the file from the\nprevious tests though? like() would report the contents of\nslurp_file() when it fails if the generated output does not match the\nexpected one, so you actually get less noise this way.\n\n>> + if (auth_method < 0 || USER_AUTH_LAST < auth_method)\n>> + {\n>> + Assert((0 <= auth_method) && (auth_method <= USER_AUTH_LAST));\n>> What's the point of having the check and the assertion? NULL does not\n>> really seem like a good default here as this should never really\n>> happen. Wouldn't a FATAL be actually safer?\n> \n> I think FATAL makes more sense. Changed, thanks.\n\nThanks. FWIW, one worry I had here was a corrupted stack that calls\nthis code path that would remain undetected.\n\n>> For SSPI, I think that this should be moved down once we are sure that\n>> there is no error and that pg_SSPI_recvauth() reports STATUS_OK to the\n>> caller. There is a similar issue with CheckCertAuth(), and\n>> set_authn_id() is documented so as it should be called only when we\n>> are sure that authentication succeeded.\n> \n> Authentication *has* succeeded already; that's what the SSPI machinery\n> has done above. Likewise for CheckCertAuth, which relies on the TLS\n> subsystem to validate the client signature before setting the peer_cn.\n> The user mapping is an authorization concern: it answers the question,\n> \"is an authenticated user allowed to use a particular Postgres user\n> name?\"\n\nOkay. Could you make the comments in those various areas more\nexplicit about the difference and that it is intentional to register\nthe auth ID before checking the user map? Anybody reading this code\nin the future may get confused with the differences in handling all\nthat according to the auth type involved if that's not clearly\nstated.\n\n> That was my intent, yeah. 
Getting this into the stats framework was\n> more than I could bite off for this first patchset, but having it\n> stored in a central location will hopefully help people do more with\n> it.\n\nNo problem with that.\n--\nMichael", "msg_date": "Fri, 26 Mar 2021 09:12:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, 2021-03-26 at 09:12 +0900, Michael Paquier wrote:\r\n> Does it really matter to have the full contents of the file from the\r\n> previous tests though?\r\n\r\nFor a few of the bugs I was tracking down, it was imperative. The tests\r\naren't isolated enough (or at all) to keep one from affecting the\r\nothers. And if the test is written incorrectly, or becomes incorrect\r\ndue to implementation changes, then the log files are really the only\r\nway to debug after a false positive -- with truncation, the bad test\r\nsucceeds incorrectly and then swallows the evidence. :)\r\n\r\n> Could you make the comments in those various areas more\r\n> explicit about the difference and that it is intentional to register\r\n> the auth ID before checking the user map? Anybody reading this code\r\n> in the future may get confused with the differences in handling all\r\n> that according to the auth type involved if that's not clearly\r\n> stated.\r\n\r\nI took a stab at this in v12, attached.\r\n\r\n--Jacob", "msg_date": "Fri, 26 Mar 2021 22:41:03 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, Mar 26, 2021 at 10:41:03PM +0000, Jacob Champion wrote:\n> For a few of the bugs I was tracking down, it was imperative. 
The tests\n> aren't isolated enough (or at all) to keep one from affecting the\n> others.\n\nIf the output of the log file is redirected to stderr and truncated,\nwhile the connection attempts are isolated according to the position\nwhere the file is truncated, I am not quite sure to follow this line\nof thoughts. What actually happened? Should we make the tests more\nstable instead? The kerberos have been running for one week now with\n11e1577a on HEAD, and look stable so it would be good to be consistent\non all fronts.\n\n> And if the test is written incorrectly, or becomes incorrect\n> due to implementation changes, then the log files are really the only\n> way to debug after a false positive -- with truncation, the bad test\n> succeeds incorrectly and then swallows the evidence. :)\n\nHmm, okay. However, I still see a noticeable difference in the tests\nwithout the additional restarts done so I would rather avoid this\ncost. For example, on my laptop, the restarts make\nauthentication/t/001_password.pl last 7s. Truncating the logs without\nany restarts bring the test down to 5.3s so that's 20% faster without\nimpacting its coverage. If you want to keep this information around\nfor debugging, I guess that we could just print the contents of the\nbackend logs to regress_log_001_password instead? This could be done\nwith a simple wrapper routine that prints the past contents of the log\nfile before truncating them. I am not sure that we need to stop the\nserver while checking for the logs contents either, to start it again\na bit later in the test while the configuration does not change. that\ncosts in speed.\n\n>> Could you make the comments in those various areas more\n>> explicit about the difference and that it is intentional to register\n>> the auth ID before checking the user map? 
Anybody reading this code\n>> in the future may get confused with the differences in handling all\n>> that according to the auth type involved if that's not clearly\n>> stated.\n> \n> I took a stab at this in v12, attached.\n\nThis part looks good, thanks!\n\n Causes each attempted connection to the server to be logged,\n- as well as successful completion of client authentication.\n+ as well as successful completion of client authentication and authorization.\nI am wondering if this paragraph can be confusing for the end-user\nwithout more explanation and a link to the \"User Name Maps\" section,\nand if we actually need this addition at all. The difference is that\nthe authenticated log is logged before the authorized log, with user\nname map checks in-between for some of the auth methods. HEAD refers\nto the existing authorized log as \"authentication\" in the logs, while\nyou correct that.\n--\nMichael", "msg_date": "Mon, 29 Mar 2021 16:50:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, 2021-03-29 at 16:50 +0900, Michael Paquier wrote:\r\n> On Fri, Mar 26, 2021 at 10:41:03PM +0000, Jacob Champion wrote:\r\n> > For a few of the bugs I was tracking down, it was imperative. The tests\r\n> > aren't isolated enough (or at all) to keep one from affecting the\r\n> > others.\r\n> \r\n> If the output of the log file is redirected to stderr and truncated,\r\n> while the connection attempts are isolated according to the position\r\n> where the file is truncated, I am not quite sure to follow this line\r\n> of thoughts. What actually happened? Should we make the tests more\r\n> stable instead?\r\n\r\nIt's not a matter of the tests being stable, but of the tests needing\r\nto change and evolve as the implementation changes. 
A big part of that\r\nis visibility into what the tests are doing, so that you can debug\r\nthem.\r\n\r\nI'm sorry I don't have any explicit examples; the NSS work is pretty\r\nbroad.\r\n\r\n> The kerberos have been running for one week now with\r\n> 11e1577a on HEAD, and look stable so it would be good to be consistent\r\n> on all fronts.\r\n\r\nI agree that it would be good in general, as long as the consistency\r\nisn't at the expense of usefulness.\r\n\r\nKeep in mind that the rotate-restart-slurp method comes from an\r\nexisting test. I assume Andrew chose that method for the same reasons I\r\ndid -- it works with what we currently have.\r\n\r\n> Hmm, okay. However, I still see a noticeable difference in the tests\r\n> without the additional restarts done so I would rather avoid this\r\n> cost. For example, on my laptop, the restarts make\r\n> authentication/t/001_password.pl last 7s. Truncating the logs without\r\n> any restarts bring the test down to 5.3s so that's 20% faster without\r\n> impacting its coverage.\r\n\r\nI agree that it'd be ideal not to have to restart the server. But 20%\r\nof less than ten seconds is less than two seconds, and the test suite\r\nhas to run thousands of times to make up a single hour of debugging\r\ntime that would be (hypothetically) lost by missing log files. (These\r\nare not easy tests for me to debug and maintain, personally -- maybe\r\nothers have a different experience.)\r\n\r\n> If you want to keep this information around\r\n> for debugging, I guess that we could just print the contents of the\r\n> backend logs to regress_log_001_password instead? This could be done\r\n> with a simple wrapper routine that prints the past contents of the log\r\n> file before truncating them. I am not sure that we need to stop the\r\n> server while checking for the logs contents either, to start it again\r\n> a bit later in the test while the configuration does not change. 
that\r\n> costs in speed.\r\n\r\nIs the additional effort to create (and maintain) that new system worth\r\ntwo seconds per run? I feel like it's not -- but if you feel strongly\r\nthen I can definitely look into it.\r\n\r\nPersonally, I'd rather spend time making it easy for tests to get the\r\nlog entries associated with a given connection or query. It seems like\r\nevery suite has had to cobble together its own method of checking the\r\nlog files, with varying levels of success/correctness. Maybe something\r\nwith session_preload_libraries and the emit_log_hook? But that would be\r\na job for a different changeset.\r\n\r\n> Causes each attempted connection to the server to be logged,\r\n> - as well as successful completion of client authentication.\r\n> + as well as successful completion of client authentication and authorization.\r\n> I am wondering if this paragraph can be confusing for the end-user\r\n> without more explanation and a link to the \"User Name Maps\" section,\r\n> and if we actually need this addition at all. The difference is that\r\n> the authenticated log is logged before the authorized log, with user\r\n> name map checks in-between for some of the auth methods. HEAD refers\r\n> to the existing authorized log as \"authentication\" in the logs, while\r\n> you correct that.\r\n\r\nWhich parts would you consider confusing/in need of change? I'm happy\r\nto expand where needed. Would an inline sample be more helpful than a\r\ntextual explanation?\r\n\r\nThanks again for all the feedback!\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 29 Mar 2021 23:53:03 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, Mar 29, 2021 at 11:53:03PM +0000, Jacob Champion wrote:\n> It's not a matter of the tests being stable, but of the tests needing\n> to change and evolve as the implementation changes. 
A big part of that\n> is visibility into what the tests are doing, so that you can debug\n> them.\n\nSure, but I still don't quite see why this applies here? At the point\nof any test, like() or unlike() print the contents of the comparison\nif there is a failure, so there is no actual loss of data. That's\nwhat issues_sql_like() does, for one.\n\n> I'm sorry I don't have any explicit examples; the NSS work is pretty\n> broad.\n\nYeah, I saw that..\n\n> I agree that it would be good in general, as long as the consistency\n> isn't at the expense of usefulness.\n> \n> Keep in mind that the rotate-restart-slurp method comes from an\n> existing test. I assume Andrew chose that method for the same reasons I\n> did -- it works with what we currently have.\n\nPostgresNode::rotate_logfile got introduced in c098509, and it is just\nused in t/017_shm.pl on HEAD. 
It seems like\n> every suite has had to cobble together its own method of checking the\n> log files, with varying levels of success/correctness. Maybe something\n> with session_preload_libraries and the emit_log_hook? But that would be\n> a job for a different changeset.\n\nMaybe.\n\n>> Causes each attempted connection to the server to be logged,\n>> - as well as successful completion of client authentication.\n>> + as well as successful completion of client authentication and authorization.\n>> I am wondering if this paragraph can be confusing for the end-user\n>> without more explanation and a link to the \"User Name Maps\" section,\n>> and if we actually need this addition at all. The difference is that\n>> the authenticated log is logged before the authorized log, with user\n>> name map checks in-between for some of the auth methods. HEAD refers\n>> to the existing authorized log as \"authentication\" in the logs, while\n>> you correct that.\n> \n> Which parts would you consider confusing/in need of change? I'm happy\n> to expand where needed. Would an inline sample be more helpful than a\n> textual explanation?\n\nThat's with the use of \"authentication and authorization\". How can\nusers tell the difference between one and the other without\nsome explanation of the name maps? It seems that there is no place\nin the existing docs where this difference is explained.
I am\nwondering if it would be better to not change this paragraph, or\nreword it slightly to outline that this may cause more than one log\nentry, say:\n\"Causes each attempted connection to the server, and each\nauthentication activity to be logged.\"\n--\nMichael", "msg_date": "Tue, 30 Mar 2021 09:55:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Tue, 2021-03-30 at 09:55 +0900, Michael Paquier wrote:\r\n> On Mon, Mar 29, 2021 at 11:53:03PM +0000, Jacob Champion wrote:\r\n> > It's not a matter of the tests being stable, but of the tests needing\r\n> > to change and evolve as the implementation changes. A big part of that\r\n> > is visibility into what the tests are doing, so that you can debug\r\n> > them.\r\n> \r\n> Sure, but I still don't quite see why this applies here? At the point\r\n> of any test, like() or unlike() print the contents of the comparison\r\n> if there is a failure, so there is no actual loss of data. That's\r\n> what issues_sql_like() does, for one.\r\n\r\nThe key there is \"if there is a failure\" -- false positives need to be\r\ndebugged too. Tests I've worked with recently for the NSS work were\r\nsucceeding for the wrong reasons. Overly generic logfile matches (\"SSL\r\nerror\"), for example.\r\n\r\n> > Keep in mind that the rotate-restart-slurp method comes from an\r\n> > existing test. I assume Andrew chose that method for the same reasons I\r\n> > did -- it works with what we currently have.\r\n> \r\n> PostgresNode::rotate_logfile got introduced in c098509, and it is just\r\n> used in t/017_shm.pl on HEAD.
There could be more simplifications\r\n> with 019_replslot_limit.pl, I certainly agree with that.\r\n\r\nmodules/ssl_passphrase_callback/t/001_testfunc.pl is where I pulled\r\nthis pattern from.\r\n\r\n> > Is the additional effort to create (and maintain) that new system worth\r\n> > two seconds per run? I feel like it's not -- but if you feel strongly\r\n> > then I can definitely look into it.\r\n> \r\n> I fear that heavily parallelized runs could feel the difference. Ask\r\n> Andres about that, he has been able to trigger in parallel a failure\r\n> with pg_upgrade wiping out testtablespace while the main regression\r\n> test suite just began :) \r\n\r\nDoes unilateral log truncation play any nicer with parallel test runs?\r\nI understand not wanting to make an existing problem worse, but it\r\ndoesn't seem like the existing tests were written for general\r\nparallelism.\r\n\r\nWould it be acceptable to adjust the tests for live rotation using the\r\nlogging collector, rather than a full restart? It would unfortunately\r\nmean that we have to somehow wait for the rotation to complete, since\r\nthat's asynchronous.\r\n\r\n(Speaking of asynchronous: how does the existing check-and-truncate\r\ncode make sure that the log entries it's looking for have been flushed\r\nto disk? Shutting down the server guarantees it.)\r\n\r\n> > Which parts would you consider confusing/in need of change? I'm happy\r\n> > to expand where needed. Would an inline sample be more helpful than a\r\n> > textual explanation?\r\n> \r\n> That's with the use of \"authentication and authorization\". How can\r\n> users tell the difference between one and the other without\r\n> some explanation of the name maps? It seems that there is no place\r\n> in the existing docs where this difference is explained.
I am\r\n> wondering if it would be better to not change this paragraph, or\r\n> reword it slightly to outline that this may cause more than one log\r\n> entry, say:\r\n> \"Causes each attempted connection to the server, and each\r\n> authentication activity to be logged.\"\r\n\r\nI took a stab at this in v13: \"Causes each attempted connection to the\r\nserver to be logged, as well as successful completion of both client\r\nauthentication (if necessary) and authorization.\" (IMO any further in-\r\ndepth explanation of authn/z and user mapping probably belongs in the\r\nauth method documentation, and this patch doesn't change any authn/z\r\nbehavior.)\r\n\r\nv13 also incorporates the latest SSL cert changes, so it's just a\r\nsingle patch now. Tests now cover the CN and DN clientname modes. I\r\nhave not changed the log capture method yet; I'll take a look at it\r\nnext.\r\n\r\n--Jacob", "msg_date": "Tue, 30 Mar 2021 17:06:51 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Tue, 2021-03-30 at 17:06 +0000, Jacob Champion wrote:\r\n> Would it be acceptable to adjust the tests for live rotation using the\r\n> logging collector, rather than a full restart? It would unfortunately\r\n> mean that we have to somehow wait for the rotation to complete, since\r\n> that's asynchronous.\r\n\r\nI wasn't able to make live rotation work in a sane way. So, v14 tries\r\nto thread the needle with a riff on your earlier idea:\r\n\r\n> If you want to keep this information around\r\n> for debugging, I guess that we could just print the contents of the\r\n> backend logs to regress_log_001_password instead? 
This could be done\r\n> with a simple wrapper routine that prints the past contents of the log\r\n> file before truncating them.\r\n\r\nRather than putting Postgres log data into the Perl logs, I rotate the\r\nlogs exactly once at the beginning -- so that there's an\r\nold 001_ssltests_primary.log, and a new 001_ssltests_primary_1.log --\r\nand then every time we truncate the logfile, I shuffle the bits from\r\nthe new logfile into the old one. So no one has to learn to find the\r\nlog entries in a new place, we don't get an explosion of rotated logs,\r\nwe don't lose the log data, we don't match incorrect portions of the\r\nlogs, and we only pay the restart price once. This is wrapped into a\r\nsmall Perl module, LogCollector.\r\n\r\nWDYT?\r\n\r\n--Jacob", "msg_date": "Tue, 30 Mar 2021 23:15:48 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Tue, Mar 30, 2021 at 05:06:51PM +0000, Jacob Champion wrote:\n> The key there is \"if there is a failure\" -- false positives need to be\n> debugged too. Tests I've worked with recently for the NSS work were\n> succeeding for the wrong reasons. Overly generic logfile matches (\"SSL\n> error\"), for example.\n\nIndeed, so that's a test stability issue. It looks like a good idea\nto make those tests more picky with the sub-errors they expect. I see\nmost \"certificate verify failed\" a lot, two \"sslv3 alert certificate\nrevoked\" and one \"tlsv1 alert unknown ca\" with 1.1.1, but it is not\nsomething that this patch has to address IMO.\n\n> modules/ssl_passphrase_callback/t/001_testfunc.pl is where I pulled\n> this pattern from.\n\nI see. 
For this case, I see no issue as the input caught is from\n_PG_init() so that seems better than a wait on the logs generated.\n\n> Does unilateral log truncation play any nicer with parallel test runs?\n> I understand not wanting to make an existing problem worse, but it\n> doesn't seem like the existing tests were written for general\n> parallelism.\n\nTAP tests running in parallel use their own isolated backend, with\ndedicated paths and ports.\n\n> Would it be acceptable to adjust the tests for live rotation using the\n> logging collector, rather than a full restart? It would unfortunately\n> mean that we have to somehow wait for the rotation to complete, since\n> that's asynchronous.\n> \n> (Speaking of asynchronous: how does the existing check-and-truncate\n> code make sure that the log entries it's looking for have been flushed\n> to disk? Shutting down the server guarantees it.)\n\nstderr redirection looks to be working pretty well with\nissues_sql_like().\n\n> I took a stab at this in v13: \"Causes each attempted connection to the\n> server to be logged, as well as successful completion of both client\n> authentication (if necessary) and authorization.\" (IMO any further in-\n> depth explanation of authn/z and user mapping probably belongs in the\n> auth method documentation, and this patch doesn't change any authn/z\n> behavior.)\n>\n> v13 also incorporates the latest SSL cert changes, so it's just a\n> single patch now. Tests now cover the CN and DN clientname modes.
I\n> have not changed the log capture method yet; I'll take a look at it\n> next.\n\nThanks, I am looking into that and digging into the code now.\n--\nMichael", "msg_date": "Wed, 31 Mar 2021 13:03:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Tue, Mar 30, 2021 at 11:15:48PM +0000, Jacob Champion wrote:\n> Rather than putting Postgres log data into the Perl logs, I rotate the\n> logs exactly once at the beginning -- so that there's an\n> old 001_ssltests_primary.log, and a new 001_ssltests_primary_1.log --\n> and then every time we truncate the logfile, I shuffle the bits from\n> the new logfile into the old one. So no one has to learn to find the\n> log entries in a new place, we don't get an explosion of rotated logs,\n> we don't lose the log data, we don't match incorrect portions of the\n> logs, and we only pay the restart price once. This is wrapped into a\n> small Perl module, LogCollector.\n\nHmm. I have dug into that today and I am really not convinced that\nthis is necessary, as a connection attempt combined with the output\nsent to stderr gives you the stability needed. If we were to have\nanything like that, perhaps a sub-class of PostgresNode would be\nmore appropriate instead, with an internal log integration.\n\nAfter thinking about it, the new wording in config.sgml looks fine\nas-is.\n\nAnyway, I have not been able to convince myself that we need those\nslowdowns and that many server restarts as there is no\nreload-dependent timing here, and things have been stable on\neverything I have tested (including a slow RPI). I have found a\ncouple of things that can be simplified in the tests:\n- In src/test/authentication/, except for the trust method where there\nis no auth ID, all the other tests wrap a like() if $res == 0, or\nunlike() otherwise.
I think that it is cleaner to make the matching\npattern an argument of test_role(), and adapt the tests to that.\n- src/test/ldap/ can also embed the same logic within test_access().\n- src/test/ssl/ is a different beast, but I think that there is more\nrefactoring possible here in parallel with the recent work I have sent\nto have equivalents of test_connect_ok() and test_connect_fails() in\nPostgresNode.pm. For now, I think that we should just live in this\nset with a small routine able to check for pattern matches in the\nlogs.\n\nAttached is an updated patch, with a couple of comment tweaks, the\nreworked tests and an indentation pass done.\n--\nMichael", "msg_date": "Wed, 31 Mar 2021 16:42:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Wed, Mar 31, 2021 at 04:42:32PM +0900, Michael Paquier wrote:\n> Attached is an updated patch, with a couple of comment tweaks, the\n> reworked tests and an indentation pass done.\n\nJacob has mentioned to me that v15 has some false positives in the SSL\ntests, as we may catch patterns in the backend logs that come from\na previous test. We should really make that stuff more robust by\ndesign, or it will bite hard with some bugs remaining undetected while\nthe tests pass. This stuff can take advantage of 0d1a3343, and I\nthink that we should make the kerberos, ldap, authentication and SSL\ntest suites just use connect_ok() and connect_fails() from\nPostgresNode.pm. They just need to be extended a bit with a new\nargument for the log pattern check.
This has the advantage of\ncentralizing in a single code path the log file truncation (or some log\nfile rotation if the logging collector is used).\n--\nMichael", "msg_date": "Thu, 1 Apr 2021 10:21:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Thu, 2021-04-01 at 10:21 +0900, Michael Paquier wrote:\r\n> This stuff can take advantage of 0d1a3343, and I\r\n> think that we should make the kerberos, ldap, authentication and SSL\r\n> test suites just use connect_ok() and connect_fails() from\r\n> PostgresNode.pm. They just need to be extended a bit with a new\r\n> argument for the log pattern check.\r\n\r\nv16, attached, migrates all tests in those suites to connect_ok/fails\r\n(in the first two patches), and also adds the log pattern matching (in\r\nthe final feature patch).\r\n\r\nA since-v15 diff is attached, but it should be viewed with suspicion\r\nsince I've rebased on top of the new SSL tests at the same time.\r\n\r\n--Jacob", "msg_date": "Fri, 2 Apr 2021 00:03:21 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, Apr 02, 2021 at 12:03:21AM +0000, Jacob Champion wrote:\n> On Thu, 2021-04-01 at 10:21 +0900, Michael Paquier wrote:\n> > This stuff can take advantage of 0d1a3343, and I\n> > think that we should make the kerberos, ldap, authentication and SSL\n> > test suites just use connect_ok() and connect_fails() from\n> > PostgresNode.pm. They just need to be extended a bit with a new\n> > argument for the log pattern check.\n> \n> v16, attached, migrates all tests in those suites to connect_ok/fails\n> (in the first two patches), and also adds the log pattern matching (in\n> the final feature patch).\n\nThanks.
I have been looking at 0001 and 0002, and found the addition\nof %params to connect_ok() and connect_fails() confusing at first, as\nthis is only required for the 12th test of 001_password.pl (failure to\ngrab a password for md5_role not located in a pgpass file with\nPGPASSWORD not set). Instead of falling into a trap where the tests\ncould remain stuck, I think that we could just pass down -w from\nconnect_ok() and connect_fails() to PostgresNode::psql.\n\nThis change also made the parameter handling of the kerberos tests\nmore confusing on two points:\n- PostgresNode::psql uses a query as an argument, so there was a mix\nbetween the query passed down within the set of parameters, but then\nremoved from the list.\n- PostgresNode::psql already uses -XAt so there is no need to define\nit again.\n\n> A since-v15 diff is attached, but it should be viewed with suspicion\n> since I've rebased on top of the new SSL tests at the same time.\n\nThat did not seem that suspicious to me ;)\n\nAnyway, after looking at 0003, the main patch, it becomes quite clear\nthat matching logs with like() or unlike() is much\nmore elegant once we make use of parameters in connect_ok() and \nconnect_fails(), but I think that it is a mistake to blindly pass down\nthe parameters to psql and delete some of them on the way while\nkeeping the others. The existing code of HEAD only requires a SQL\nquery or some expected stderr or stdout output, so let's make all\nthat parameterized first.\n\nAttached is what I have come up with as the first building piece,\nwhich is basically a combination of 0001 and 0002, except that I\nmodified things so that the number of arguments remains minimal for all\nthe routines. This avoids the manipulation of the list of parameters\npassed down to PostgresNode::psql. The arguments for the optional\nquery, the expected stdout and stderr are part of the parameter set\n(0001 was not doing that).
For the main patch, this will need to be\nextended with two more parameters in each routine: log_like and\nlog_unlike to match for the log patterns, handled as arrays of\nregexes. That's what 0003 is basically doing already.\n\nAs a whole, this is a consolidation of its own, so let's apply this\npart first.\n--\nMichael", "msg_date": "Fri, 2 Apr 2021 13:45:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, 2021-04-02 at 13:45 +0900, Michael Paquier wrote:\r\n> Attached is what I have come up with as the first building piece,\r\n> which is basically a combination of 0001 and 0002, except that I\r\n> modified things so as the number of arguments remains minimal for all\r\n> the routines. This avoids the manipulation of the list of parameters\r\n> passed down to PostgresNode::psql. The arguments for the optional\r\n> query, the expected stdout and stderr are part of the parameter set\r\n> (0001 was not doing that).\r\n\r\nI made a few changes, highlighted in the since-v18 diff:\r\n\r\n> +\t\t# The result is assumed to match \"true\", or \"t\", here.\r\n> +\t\t$node->connect_ok($connstr, $test_name, sql => $query,\r\n> +\t\t\t\t expected_stdout => qr/t/);\r\n\r\nI've anchored this as qr/^t$/ so we don't accidentally match a stray\r\n\"t\" in some larger string.\r\n\r\n> -\tis($res, 0, $test_name);\r\n> -\tlike($stdoutres, $expected, $test_name);\r\n> -\tis($stderrres, \"\", $test_name);\r\n> +\tmy ($stdoutres, $stderrres);\r\n> +\r\n> +\t$node->connect_ok($connstr, $test_name, $query, $expected);\r\n\r\n$query and $expected need to be given as named parameters. 
We also lost\r\nthe stderr check from the previous version of the test, so I added\r\nexpected_stderr to connect_ok().\r\n\r\n> @@ -446,14 +446,14 @@ TODO:\r\n> \t# correct client cert in encrypted PEM with empty password\r\n> \t$node->connect_fails(\r\n> \t\t\"$common_connstr user=ssltestuser sslcert=ssl/client.crt sslkey=ssl/client-encrypted-pem_tmp.key sslpassword=''\",\r\n> -\t\tqr!\\Qprivate key file \"ssl/client-encrypted-pem_tmp.key\": processing error\\E!,\r\n> +\t\texpected_stderr => qr!\\Qprivate key file \"ssl/client-encrypted-pem_tmp.key\": processing error\\E!,\r\n> \t\t\"certificate authorization fails with correct client cert and empty password in encrypted PEM format\"\r\n> \t);\r\n\r\nThese tests don't run yet inside the TODO block, but I've put the\r\nexpected_stderr parameter at the end of the list for them.\r\n\r\n> For the main patch, this will need to be\r\n> extended with two more parameters in each routine: log_like and\r\n> log_unlike to match for the log patterns, handled as arrays of\r\n> regexes. That's what 0003 is basically doing already.\r\n\r\nRebased on top of your patch as v19, attached. (v17 disappeared into\r\nthe ether somewhere, I think. :D)\r\n\r\nNow that it's easy to add log_like to existing tests, I fleshed out the\r\nLDAP tests with a few more cases. 
They don't add code coverage, but\r\nthey pin the desired behavior for a few more types of LDAP auth.\r\n\r\n--Jacob", "msg_date": "Fri, 2 Apr 2021 18:18:44 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, Apr 02, 2021 at 01:45:31PM +0900, Michael Paquier wrote:\n> As a whole, this is a consolidation of its own, so let's apply this\n> part first.\n\nSlight rebase for this one to take care of the updates with the SSL\nerror messages.\n--\nMichael", "msg_date": "Sat, 3 Apr 2021 21:30:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Sat, Apr 03, 2021 at 09:30:25PM +0900, Michael Paquier wrote:\n> Slight rebase for this one to take care of the updates with the SSL\n> error messages.\n\nI have been looking again at that and applied it as c50624cd after\nsome slight modifications. Attached is the main, refactored, patch\nthat plugs on top of the existing infrastructure. connect_ok() and\nconnect_fails() gain two parameters each to match or to not match the\nlogs of the backend, with a truncation of the logs done before any\nconnection attempt.\n\nI have spent more time reviewing the backend code while on it and\nthere was one thing that stood out:\n+ ereport(FATAL,\n+ (errmsg(\"connection was re-authenticated\"),\n+ errdetail_log(\"previous ID: \\\"%s\\\"; new ID: \\\"%s\\\"\",\n+ port->authn_id, id)));\nThis message would not actually trigger because auth_failed() is the\ncode path in charge of showing an error here, so this could just be\nreplaced by an assertion on authn_id being NULL? The contents of this\nlog were a bit in contradiction with the comments a couple of lines\nabove anyway. 
Jacob, what do you think?\n--\nMichael", "msg_date": "Mon, 5 Apr 2021 14:47:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, 2021-04-05 at 14:47 +0900, Michael Paquier wrote:\r\n> On Sat, Apr 03, 2021 at 09:30:25PM +0900, Michael Paquier wrote:\r\n> > Slight rebase for this one to take care of the updates with the SSL\r\n> > error messages.\r\n> \r\n> I have been looking again at that and applied it as c50624cd after\r\n> some slight modifications.\r\n\r\nThis loses the test fixes I made in my v19 [1]; some of the tests on\r\nHEAD aren't testing anything anymore. I've put those fixups into 0001,\r\nattached.\r\n\r\n> Attached is the main, refactored, patch\r\n> that plugs on top of the existing infrastructure. connect_ok() and\r\n> connect_fails() gain two parameters each to match or to not match the\r\n> logs of the backend, with a truncation of the logs done before any\r\n> connection attempt.\r\n\r\nIt looks like this is a reimplementation of v19, but it loses the\r\nadditional tests I wrote? Not sure. 
Maybe my v19 was sent to spam?\r\n\r\nIn any case I have attached my Friday patch as 0002.\r\n\r\n> I have spent more time reviewing the backend code while on it and\r\n> there was one thing that stood out:\r\n> + ereport(FATAL,\r\n> + (errmsg(\"connection was re-authenticated\"),\r\n> + errdetail_log(\"previous ID: \\\"%s\\\"; new ID: \\\"%s\\\"\",\r\n> + port->authn_id, id)));\r\n> This message would not actually trigger because auth_failed() is the\r\n> code path in charge of showing an error here\r\n\r\nIt triggers just fine for me (you can duplicate one of the\r\nset_authn_id() calls to see):\r\n\r\n FATAL: connection was re-authenticated\r\n DETAIL: previous ID: \"uid=test2,dc=example,dc=net\"; new ID: \"uid=test2,dc=example,dc=net\"\r\n\r\n> so this could just be\r\n> replaced by an assertion on authn_id being NULL?\r\n\r\nAn assertion seems like the wrong way to go; in the event that a future\r\ncode path accidentally performs a duplicated authentication, the FATAL\r\nwill just kill off an attacker's connection, while an assertion will\r\nDoS the server.\r\n\r\n> The contents of this\r\n> log were a bit in contradiction with the comments a couple of lines\r\n> above anyway.\r\n\r\nWhat do you mean by this? I took another look at the comment and it\r\nseems to match the implementation.\r\n\r\nv21 attached, which is just a rebase of my original v19.\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/8c08c6402051b5348d599c0e07bbd83f8614fa16.camel%40vmware.com", "msg_date": "Mon, 5 Apr 2021 16:40:41 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Mon, Apr 05, 2021 at 04:40:41PM +0000, Jacob Champion wrote:\n> This loses the test fixes I made in my v19 [1]; some of the tests on\n> HEAD aren't testing anything anymore. I've put those fixups into 0001,\n> attached.\n\nArgh, Thanks. 
The part about not checking after the error output when\nthe connection should pass was intended to be more consistent with the\nother test suites. So I have removed this part and applied the rest\nof 0001.\n\n> It looks like this is a reimplementation of v19, but it loses the\n> additional tests I wrote? Not sure.\n\nSo, what you have here are three extra tests for ldap with\nsearch+bind and search filters. This looks like a good idea.\n\n> Maybe my v19 was sent to spam?\n\nIndeed. All those messages are ending up in my spam folder. I am\nwondering why, actually. That's a bit surprising.\n\n> It triggers just fine for me (you can duplicate one of the\n> set_authn_id() calls to see):\n> \n> FATAL: connection was re-authenticated\n> DETAIL: previous ID: \"uid=test2,dc=example,dc=net\"; new ID: \"uid=test2,dc=example,dc=net\"\n\nHmm. It looks like I did something wrong here.\n\n> An assertion seems like the wrong way to go; in the event that a future\n> code path accidentally performs a duplicated authentication, the FATAL\n> will just kill off an attacker's connection, while an assertion will\n> DoS the server.\n\nHmm. You are making a good point here, but is that really the best\nthing we can do?
\n\n> v21 attached, which is just a rebase of my original v19.\n\nThis requires a perltidy run from what I can see, but that's no big\ndeal.\n\n+ my (@log_like, @log_unlike);\n+ if (defined($params{log_like}))\n+ {\n+ @log_like = @{ delete $params{log_like} };\n+ }\n+ if (defined($params{log_unlike}))\n+ {\n+ @log_unlike = @{ delete $params{log_unlike} };\n+ }\nThere is no need for that? This removal was done as %params was\npassed down directly as-is to PostgresNode::psql, but that's not the\ncase anymore.\n--\nMichael", "msg_date": "Tue, 6 Apr 2021 14:15:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Tue, 2021-04-06 at 14:15 +0900, Michael Paquier wrote:\r\n> On Mon, Apr 05, 2021 at 04:40:41PM +0000, Jacob Champion wrote:\r\n> > This loses the test fixes I made in my v19 [1]; some of the tests on\r\n> > HEAD aren't testing anything anymore. I've put those fixups into 0001,\r\n> > attached.\r\n> \r\n> Argh, Thanks. The part about not checking after the error output when\r\n> the connection should pass is wanted to be more consistent with the\r\n> other test suites. So I have removed this part and applied the rest\r\n> of 0001.\r\n\r\nI assumed Tom added those checks to catch a particular failure mode for\r\nthe GSS encryption case. (I guess Tom would know for sure.)\r\n\r\n> > An assertion seems like the wrong way to go; in the event that a future\r\n> > code path accidentally performs a duplicated authentication, the FATAL\r\n> > will just kill off an attacker's connection, while an assertion will\r\n> > DoS the server.\r\n> \r\n> Hmm. You are making a good point here, but is that really the best\r\n> thing we can do? 
We lose the context of the authentication type being\r\n> done with this implementation, and the client would know that it did a\r\n> re-authentication even if the logdetail goes only to the backend's\r\n> logs. Wouldn't it be better, for instance, to generate a LOG message\r\n> in this code path, switch to STATUS_ERROR to let auth_failed()\r\n> generate the FATAL message? set_authn_id() could just return a\r\n> boolean to tell if it was OK with the change in authn_id or not. \r\n\r\nMy concern there is that we already know the code is wrong in this\r\n(hypothetical future) case, and then we'd be relying on that wrong code\r\nto correctly bubble up an error status. I think that, once you hit this\r\ncode path, the program flow should be interrupted immediately -- do not\r\npass Go, collect $200, or let the bad implementation continue to do\r\nmore damage.\r\n\r\nI agree that losing the context is not ideal. To avoid that, I thought\r\nit might be nice to add errbacktrace() to the ereport() call -- but\r\nsince the functions we're interested in are static, the backtrace\r\ndoesn't help. (I should check to see whether libbacktrace is better in\r\nthis situation. Later.)\r\n\r\nAs for the client knowing: an active attacker is probably going to know\r\nthat they're triggering the reauthentication anyway. So the primary\r\ndisadvantage I see is that a more passive attacker could scan for some\r\nvulnerability by looking for that error message.\r\n\r\nIf that's a major concern, we could call auth_failed() directly from\r\nthis code. But that means that the auth_failed() logic must not give\r\nthem more ammunition, in this hypothetical scenario where the authn\r\nsystem is already messed up. Obscuring the failure mode helps buy\r\npeople time to update Postgres, which definitely has value, but it\r\nwon't prevent any actual exploit by the time we get to this check. 
A\r\ntricky trade-off.\r\n\r\n> > v21 attached, which is just a rebase of my original v19.\r\n> \r\n> This requires a perltidy run from what I can see, but that's no big\r\n> deal.\r\n\r\nIs that done per-patch? It looks like there's a large amount of\r\nuntidied code in src/test in general, and in the files being touched.\r\n\r\n> + my (@log_like, @log_unlike);\r\n> + if (defined($params{log_like}))\r\n> + {\r\n> + @log_like = @{ delete $params{log_like} };\r\n> + }\r\n> + if (defined($params{log_unlike}))\r\n> + {\r\n> + @log_unlike = @{ delete $params{log_unlike} };\r\n> + }\r\n> There is no need for that? This removal was done as %params was\r\n> passed down directly as-is to PostgresNode::psql, but that's not the\r\n> case anymore.\r\n\r\nFixed in v22, thanks.\r\n\r\n--Jacob", "msg_date": "Tue, 6 Apr 2021 18:31:16 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Tue, Apr 06, 2021 at 06:31:16PM +0000, Jacob Champion wrote:\n> On Tue, 2021-04-06 at 14:15 +0900, Michael Paquier wrote:\n> > Hmm. You are making a good point here, but is that really the best\n> > thing we can do? We lose the context of the authentication type being\n> > done with this implementation, and the client would know that it did a\n> > re-authentication even if the logdetail goes only to the backend's\n> > logs. Wouldn't it be better, for instance, to generate a LOG message\n> > in this code path, switch to STATUS_ERROR to let auth_failed()\n> > generate the FATAL message? set_authn_id() could just return a\n> > boolean to tell if it was OK with the change in authn_id or not. \n> \n> My concern there is that we already know the code is wrong in this\n> (hypothetical future) case, and then we'd be relying on that wrong code\n> to correctly bubble up an error status. 
I think that, once you hit this\n> code path, the program flow should be interrupted immediately -- do not\n> pass Go, collect $200, or let the bad implementation continue to do\n> more damage.\n\nSounds fair to me.\n\n> I agree that losing the context is not ideal. To avoid that, I thought\n> it might be nice to add errbacktrace() to the ereport() call -- but\n> since the functions we're interested in are static, the backtrace\n> doesn't help. (I should check to see whether libbacktrace is better in\n> this situation. Later.)\n\nPerhaps, but that does not seem strongly necessary to me either here.\n\n> If that's a major concern, we could call auth_failed() directly from\n> this code. But that means that the auth_failed() logic must not give\n> them more ammunition, in this hypothetical scenario where the authn\n> system is already messed up. Obscuring the failure mode helps buy\n> people time to update Postgres, which definitely has value, but it\n> won't prevent any actual exploit by the time we get to this check. A\n> tricky trade-off.\n\nNah. I don't like much a solution that involves calling auth_failed()\nin more code paths than now.\n\n>> This requires a perltidy run from what I can see, but that's no big\n>> deal.\n> \n> Is that done per-patch? It looks like there's a large amount of\n> untidied code in src/test in general, and in the files being touched.\n\nCommitters take care of that usually, but if you can do it that\nhelps :)\n\nFrom what I can see, most of the indent diffs are coming from the\ntests added with the addition of the log_(un)like parameters. See \npgindent's README for all the details related to the version of\nperltidy, for example. The trick is that some previous patches may\nnot have been indented, causing the apparitions of extra diffs\nunrelated to a patch. Usually that's easy enough to fix on a\nfile-basis.\n\nAnyway, using a FATAL in this code path is fine by me at the end, so I\nhave applied the patch. 
Let's see now what the buildfarm thinks about\nit.\n--\nMichael", "msg_date": "Wed, 7 Apr 2021 10:20:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Wed, 2021-04-07 at 10:20 +0900, Michael Paquier wrote:\r\n> Anyway, using a FATAL in this code path is fine by me at the end, so I\r\n> have applied the patch. Let's see now what the buildfarm thinks about\r\n> it.\r\n\r\nLooks like the farm has gone green, after some test fixups. Thanks for\r\nall the reviews!\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 13 Apr 2021 15:47:21 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Tue, Apr 13, 2021 at 03:47:21PM +0000, Jacob Champion wrote:\n> Looks like the farm has gone green, after some test fixups. Thanks for\n> all the reviews!\n\nYou may want to follow this thread as well, as the topic is related to\nwhat has been discussed on this thread as there is an impact in a\ndifferent code path for the TAP tests, and not only the connection\ntests:\nhttps://www.postgresql.org/message-id/YHajnhcMAI3++pJL@paquier.xyz\n--\nMichael", "msg_date": "Thu, 15 Apr 2021 10:28:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On 03.04.21 14:30, Michael Paquier wrote:\n> On Fri, Apr 02, 2021 at 01:45:31PM +0900, Michael Paquier wrote:\n>> As a whole, this is a consolidation of its own, so let's apply this\n>> part first.\n> \n> Slight rebase for this one to take care of the updates with the SSL\n> error messages.\n\nI noticed this patch eliminated one $Test::Builder::Level assignment. 
\nWas there a reason for this?\n\nI think we should add it back, and also add a few missing ones in \nsimilar places. See attached patch.", "msg_date": "Wed, 22 Sep 2021 08:59:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Wed, Sep 22, 2021 at 08:59:38AM +0200, Peter Eisentraut wrote:\n> I noticed this patch eliminated one $Test::Builder::Level assignment. Was\n> there a reason for this?\n> \n> I think we should add it back, and also add a few missing ones in similar\n> places. See attached patch.\n>\n> [...]\n>\n> {\n> +\tlocal $Test::Builder::Level = $Test::Builder::Level + 1;\n> +\n\nSo you are referring to this one removed in c50624c. In what does\nthis addition change things compared to what has been added in\nconnect_ok() and connect_fails()? I am pretty sure that I have\nremoved this one because this logic got refactored in\nPostgresNode.pm.\n--\nMichael", "msg_date": "Wed, 22 Sep 2021 16:39:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On 22.09.21 09:39, Michael Paquier wrote:\n> On Wed, Sep 22, 2021 at 08:59:38AM +0200, Peter Eisentraut wrote:\n>> I noticed this patch eliminated one $Test::Builder::Level assignment. Was\n>> there a reason for this?\n>>\n>> I think we should add it back, and also add a few missing ones in similar\n>> places. See attached patch.\n>>\n>> [...]\n>>\n>> {\n>> +\tlocal $Test::Builder::Level = $Test::Builder::Level + 1;\n>> +\n> \n> So you are referring to this one removed in c50624c. In what does\n> this addition change things compared to what has been added in\n> connect_ok() and connect_fails()? 
I am pretty sure that I have\n> removed this one because this logic got refactored in\n> PostgresNode.pm.\n\nThis should be added to each level of a function call that represents a \ntest. This ensures that when a test fails, the line number points to \nthe top-level location of the test_role() call. Otherwise it would \npoint to the connect_ok() call inside test_role().\n\n\n", "msg_date": "Wed, 22 Sep 2021 10:20:47 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Wed, 2021-09-22 at 10:20 +0200, Peter Eisentraut wrote:\r\n> This should be added to each level of a function call that represents a \r\n> test. This ensures that when a test fails, the line number points to \r\n> the top-level location of the test_role() call. Otherwise it would \r\n> point to the connect_ok() call inside test_role().\r\n\r\nPatch LGTM, sorry about that. Thanks!\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 22 Sep 2021 15:18:43 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Wed, Sep 22, 2021 at 03:18:43PM +0000, Jacob Champion wrote:\n> On Wed, 2021-09-22 at 10:20 +0200, Peter Eisentraut wrote:\n>> This should be added to each level of a function call that represents a \n>> test. This ensures that when a test fails, the line number points to \n>> the top-level location of the test_role() call. Otherwise it would \n>> point to the connect_ok() call inside test_role().\n> \n> Patch LGTM, sorry about that. Thanks!\n\nFor the places of the patch, that seems fine then. Thanks!\n\nDo we need to care about that in other places? 
We have tests in\nsrc/bin/ using subroutines that call things from PostgresNode.pm or\nTestLib.pm, like pg_checksums, pg_ctl or pg_verifybackup, just to name\nthree.\n--\nMichael", "msg_date": "Thu, 23 Sep 2021 19:34:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On 23.09.21 12:34, Michael Paquier wrote:\n> On Wed, Sep 22, 2021 at 03:18:43PM +0000, Jacob Champion wrote:\n>> On Wed, 2021-09-22 at 10:20 +0200, Peter Eisentraut wrote:\n>>> This should be added to each level of a function call that represents a\n>>> test. This ensures that when a test fails, the line number points to\n>>> the top-level location of the test_role() call. Otherwise it would\n>>> point to the connect_ok() call inside test_role().\n>>\n>> Patch LGTM, sorry about that. Thanks!\n> \n> For the places of the patch, that seems fine then. Thanks!\n\ncommitted\n\n> Do we need to care about that in other places? We have tests in\n> src/bin/ using subroutines that call things from PostgresNode.pm or\n> TestLib.pm, like pg_checksums, pg_ctl or pg_verifybackup, just to name\n> three.\n\nYeah, at first glance, there is probably more that could be done. 
Here, \nI was just looking at a place where it was already and was accidentally \nremoved.", "msg_date": "Thu, 23 Sep 2021 23:20:19 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "\nOn 9/23/21 5:20 PM, Peter Eisentraut wrote:\n> On 23.09.21 12:34, Michael Paquier wrote:\n>> On Wed, Sep 22, 2021 at 03:18:43PM +0000, Jacob Champion wrote:\n>>> On Wed, 2021-09-22 at 10:20 +0200, Peter Eisentraut wrote:\n>>>> This should be added to each level of a function call that\n>>>> represents a\n>>>> test. This ensures that when a test fails, the line number points to\n>>>> the top-level location of the test_role() call. Otherwise it would\n>>>> point to the connect_ok() call inside test_role().\n>>>\n>>> Patch LGTM, sorry about that. Thanks!\n>>\n>> For the places of the patch, that seems fine then. Thanks!\n>\n> committed\n>\n>> Do we need to care about that in other places? We have tests in\n>> src/bin/ using subroutines that call things from PostgresNode.pm or\n>> TestLib.pm, like pg_checksums, pg_ctl or pg_verifybackup, just to name\n>> three.\n>\n> Yeah, at first glance, there is probably more that could be done.\n> Here, I was just looking at a place where it was already and was\n> accidentally removed.\n\n\n\nIt probably wouldn't be a bad thing to have something somewhere\n(src/test/perl/README ?) 
that explains when and why we need to bump\n$Test::Builder::Level.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 24 Sep 2021 17:37:48 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" }, { "msg_contents": "On Fri, Sep 24, 2021 at 05:37:48PM -0400, Andrew Dunstan wrote:\n> It probably wouldn't be a bad thing to have something somewhere\n> (src/test/perl/README ?) that explains when and why we need to bump\n> $Test::Builder::Level.\n\nI have some ideas about that. So I propose to move the discussion to\na new thread.\n--\nMichael", "msg_date": "Sat, 25 Sep 2021 07:53:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Save user's original authenticated identity for logging" } ]
[ { "msg_contents": "Hi,\n\nJust found another crash.\n\nSeems that commit a929e17e5a8c9b751b66002c8a89fdebdacfe194 broke something.\nAttached is a minimal case and the stack trace.\n\n--\nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL", "msg_date": "Thu, 28 Jan 2021 21:45:23 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Assertion fail with window function and partitioned tables" }, { "msg_contents": "On Thu, Jan 28, 2021 at 9:45 PM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> Hi,\n>\n> Just found another crash.\n>\n> Seems that commit a929e17e5a8c9b751b66002c8a89fdebdacfe194 broke something.\n> Attached is a minimal case and the stack trace.\n>\n\nHi,\n\nSeems this is the same that Andreas reported in\nhttps://www.postgresql.org/message-id/87sg8tqhsl.fsf@aurora.ydns.eu so\nconsider this one as noise\n\n\n--\nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n\n--\n\n\n", "msg_date": "Fri, 29 Jan 2021 16:35:27 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Re: Assertion fail with window function and partitioned tables" } ]
[ { "msg_contents": "Hi All,\n\nI realize using foreign data wrappers with transaction pooling may not be\nstrictly supported, but for the most part they work. I am however\noccasionally noticing errors stemming from the prepared statement names\ncreated by the fdw modify code colliding between sessions/DBs.\n\nWould the development team be open to a patch which somehow makes this less\nlikely? Something like the attached patch works, but probably isn't ideal?\nPerhaps there is a better unique identifier I can use here. I am very new\nto the postgres codebase.\n\n\nBest\n*Marco Montagna*", "msg_date": "Fri, 29 Jan 2021 00:14:58 -0800", "msg_from": "Marco <marcojoemontagna@gmail.com>", "msg_from_op": true, "msg_subject": "[WIP] Reduce likelihood of fdw prepared statement collisions" } ]
[ { "msg_contents": "I got annoyed (not for the first time) by the fact that the\npartitioned_rels field of AppendPath and MergeAppendPath is a list of\nRelids, i.e., Bitmapsets. This is problematic because a Bitmapset is\nnot a type of Node, and thus a List of them is really an invalid data\nstructure. The main practical consequence is that pprint() fails to\nprint these path types accurately, which is an issue for debugging.\n\nWe've had some related problems before, so I'm wondering if it's time\nto bite the bullet and turn Bitmapsets into legal Nodes. We'd have\nto add a nodetag field to them, which is free on 64-bit machines due\nto alignment considerations, but would increase the size of most\nBitmapsets on 32-bit machines. OTOH, I do not think we're optimizing\nfor 32-bit machines anymore.\n\nAnother issue is that the outfuncs/readfuncs print representation\ncurrently looks like \"(b 1 2 ...)\" which isn't a normal\nrepresentation for a Node. I'd be inclined to try to preserve that\nrepresentation, because I think we'd have to special-case Bitmapsets\nanyway given their variable number of unnamed entries. But I've not\ntried to actually code anything for it.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Jan 2021 15:12:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "On Fri, Jan 29, 2021 at 12:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I got annoyed (not for the first time) by the fact that the\n> partitioned_rels field of AppendPath and MergeAppendPath is a list of\n> Relids, i.e., Bitmapsets. This is problematic because a Bitmapset is\n> not a type of Node, and thus a List of them is really an invalid data\n> structure. 
The main practical consequence is that pprint() fails to\n> print these path types accurately, which is an issue for debugging.\n\nSo we don't actually require T_List-type Lists to only contain entries\nof type Node already? ISTM that T_List-type Lists cannot *mostly* be a\nNode that consists of a collection of linked Nodes. It has to be\nall-or-nothing. The \"Node-ness\" of a List should never be situational\nor implicit -- allowing that seems like a recipe for disaster. This\nkind of \"code reuse\" is not a virtue at all.\n\nIf tightening things up here turns out to be a problem someplace, then\nI'm okay with that code using some other solution. That could mean\nexpanding the definition of a Node in some way that was not originally\nconsidered (when it nevertheless makes sense), or it could mean using\nsome other data structure instead.\n\nMight be good to Assert() that this rule is followed in certain key\nlist.c functions.\n\n> We've had some related problems before, so I'm wondering if it's time\n> to bite the bullet and turn Bitmapsets into legal Nodes. We'd have\n> to add a nodetag field to them, which is free on 64-bit machines due\n> to alignment considerations, but would increase the size of most\n> Bitmapsets on 32-bit machines. OTOH, I do not think we're optimizing\n> for 32-bit machines anymore.\n\n+1 from me.\n\nI'm prepared to say that 32-bit performance shouldn't be a concern\nthese days, except perhaps with really significant regressions. And\neven then, only when there is no clear upside. If anybody really does\nrun Postgres 14 on a 32-bit platform, they should be much more\nconcerned about bugs that slip in because nobody owns hardware like\nthat anymore. 
It's probably much riskier to use 32-bit x86 today than\nit is to use (say) POWER8, or some other contemporary minority\nplatform.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Jan 2021 15:45:27 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Jan 29, 2021 at 12:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I got annoyed (not for the first time) by the fact that the\n>> partitioned_rels field of AppendPath and MergeAppendPath is a list of\n>> Relids, i.e., Bitmapsets. This is problematic because a Bitmapset is\n>> not a type of Node, and thus a List of them is really an invalid data\n>> structure. The main practical consequence is that pprint() fails to\n>> print these path types accurately, which is an issue for debugging.\n\n> So we don't actually require T_List-type Lists to only contain entries\n> of type Node already?\n\nI seem to recall that there are some places that use Lists to store\nplain \"char *\" strings (not wrapped in T_String), and there are\ndefinitely places that use lists of non-Node structs. That's a kluge,\nbut I don't really object to it in narrowly-scoped data structures.\nI think it's a good bit south of acceptable in anything declared in\ninclude/nodes/*.h, though. Publicly visible Node types ought to be\nfully manipulable by the standard backend/nodes/ functions.\n\n> ISTM that T_List-type Lists cannot *mostly* be a\n> Node that consists of a collection of linked Nodes. It has to be\n> all-or-nothing.\n\nRight. Any situation where you have a List of things that aren't\nNodes has to be one where you know a-priori that everything in this\nList is a $whatever. 
If the List is only used within a small bit\nof code, that's fine, and adding the overhead to make the contents\nbe real Nodes wouldn't be worth the trouble.\n\n> It's probably much riskier to use 32-bit x86 today than\n> it is to use (say) POWER8, or some other contemporary minority\n> platform.\n\nWe do still have x86 in the buildfarm, as well as some other\n32-bit platforms, so I don't agree that it's that much less\ntested than non-mainstream 64-bit platforms. But I do agree\nit's not our main development focus anymore, and shouldn't be.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Jan 2021 19:01:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "On Fri, Jan 29, 2021 at 4:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > It's probably much riskier to use 32-bit x86 today than\n> > it is to use (say) POWER8, or some other contemporary minority\n> > platform.\n>\n> We do still have x86 in the buildfarm, as well as some other\n> 32-bit platforms, so I don't agree that it's that much less\n> tested than non-mainstream 64-bit platforms. But I do agree\n> it's not our main development focus anymore, and shouldn't be.\n\nI was arguing that it's much less tested *in effect*. It seems like\nthe trend is very much in the direction of less and less ISA level\ndifferentiation.\n\nConsider (just to pick one example) the rationale behind the RISC-V initiative:\n\nhttps://en.wikipedia.org/wiki/RISC-V#Rationale\n\nIn many ways my x86-64 Macbook is closer to the newer M1 Macbook than\nit is to some old 32-bit x86 machine. I suspect that this matters. I\nam speculating here, of course -- I have to because there is no\nguidance to work off of. I don't know anybody that still runs Postgres\n(or anything like it) on a 32-bit platform. 
I think that Michael\nPaquier owns a Raspberry Pi zero, but that hardly counts.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Jan 2021 16:31:16 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Jan 29, 2021 at 4:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We do still have x86 in the buildfarm, as well as some other\n>> 32-bit platforms, so I don't agree that it's that much less\n>> tested than non-mainstream 64-bit platforms.\n\n> I don't know anybody that still runs Postgres\n> (or anything like it) on a 32-bit platform. I think that Michael\n> Paquier owns a Raspberry Pi zero, but that hardly counts.\n\nHmph ... three of my five buildfarm animals are 32-bit, plus I\nhave got 32-bit OSes for my Raspberry Pi ;-). Admittedly, none\nof those represent hardware someone would put a serious database\non today. But in terms of testing diversity, I think they're\na lot more credible than thirty-one flavors of Linux on x86_64.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Jan 2021 20:53:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "On Fri, Jan 29, 2021 at 08:53:49PM -0500, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n>> I don't know anybody that still runs Postgres\n>> (or anything like it) on a 32-bit platform. I think that Michael\n>> Paquier owns a Raspberry Pi zero, but that hardly counts.\n\nhamster died a couple of years ago, it was a RPI1 and I have not\nbought one after. RIP to it. I still have dangomushi, a RPI2, based\non armv7l and that's 32-bit. Heikki has chipmunk, which is a RPI1\nlast time we discussed about that. The good thing about those\nmachines is that they are low-energy consumers, and silent. 
So it is\neasy to forget about them and just let them be.\n\n> Hmph ... three of my five buildfarm animals are 32-bit, plus I\n> have got 32-bit OSes for my Raspberry Pi ;-). Admittedly, none\n> of those represent hardware someone would put a serious database\n> on today. But in terms of testing diversity, I think they're\n> a lot more credible than thirty-one flavors of Linux on x86_64.\n\nThose 32-bit modules are still being sold actively by the RPI\nfoundation, and used as cheap machines for education purposes, so I\nthink that it is still useful for Postgres to have active buildfarm\nmembers for 32-bit architectures.\n--\nMichael", "msg_date": "Sat, 30 Jan 2021 11:33:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "On Fri, Jan 29, 2021 at 5:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmph ... three of my five buildfarm animals are 32-bit, plus I\n> have got 32-bit OSes for my Raspberry Pi ;-). Admittedly, none\n> of those represent hardware someone would put a serious database\n> on today. But in terms of testing diversity, I think they're\n> a lot more credible than thirty-one flavors of Linux on x86_64.\n\nFair enough.\n\nTo be clear I meant testing in the deepest and most general sense --\nnot simply running the tests. If you happen to be using approximately\nthe same platform as most Postgres hackers, it's reasonable to expect\nto run into fewer bugs tied to portability issues. Regardless of\nwhether or not the minority platform you were considering has\ntheoretical testing parity.\n\nBroad trends have made it easier to write portable C code, but that\ndoesn't apply to 32-bit machines, I imagine. 
Including even the\nextremely low power 32-bit chips that are not yet fully obsolete, like\nthe Raspberry Pi Zero's chip.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Jan 2021 18:34:41 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "On Fri, Jan 29, 2021 at 6:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Those 32-bit modules are still being sold actively by the RPI\n> foundation, and used as cheap machines for education purposes, so I\n> think that it is still useful for Postgres to have active buildfarm\n> members for 32-bit architectures.\n\nBut I'm not arguing against that. I'm merely arguing that it is okay\nto regress 32-bit platforms (within reason) in order to make them more\nlike 64-bit platforms. This makes them less prone to subtle\nportability bugs that the regression tests won't catch, so even 32-bit\nPostgres may well come out ahead, in a certain sense.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Jan 2021 18:37:56 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Broad trends have made it easier to write portable C code, but that\n> doesn't apply to 32-bit machines, I imagine. Including even the\n> extremely low power 32-bit chips that are not yet fully obsolete, like\n> the Raspberry Pi Zero's chip.\n\nMeh. 
To my mind, the most interesting aspects of different hardware\nplatforms for our purposes are\n\n* alignment sensitivity (particularly, is unaligned access expensive);\n* spinlock support, and after that various other atomic instructions;\n* endianness\n\nPointer width is interesting, but really it's a solved problem\ncompared to these.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Jan 2021 21:44:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "On Fri, Jan 29, 2021 at 6:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Pointer width is interesting, but really it's a solved problem\n> compared to these.\n\nWhat about USE_FLOAT8_BYVAL?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Jan 2021 18:57:25 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Jan 29, 2021 at 6:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Pointer width is interesting, but really it's a solved problem\n>> compared to these.\n\n> What about USE_FLOAT8_BYVAL?\n\nThat's an annoyance, sure, but I don't recall many recent bugs\nrelated to violations of that coding rule. As long as you don't\nviolate the Datum abstraction it's pretty safe.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 30 Jan 2021 11:34:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "Now that commit f003a7522 did away with the partitioned_rels fields,\nmy original motivation for doing $SUBJECT is gone. 
It might still be\nworth doing, but I'm not planning to tackle it right now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Feb 2021 15:23:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "On Tue, 2 Feb 2021 at 09:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Now that commit f003a7522 did away with the partitioned_rels fields,\n> my original motivation for doing $SUBJECT is gone. It might still be\n> worth doing, but I'm not planning to tackle it right now.\n\nI'm not sure if the misuse of Lists to store non-Node types should be\nall that surprising. lappend() accepts a void pointer rather than a\nNode *. I also didn't catch anything that indicates storing non-Node\ntypes is bad practise.\n\nMaybe it's worth still adding something to some comments in list.c to\ntry and reduce the chances of someone making this mistake again in the\nfuture?\n\nDavid\n\n\n", "msg_date": "Tue, 2 Feb 2021 11:57:36 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 2 Feb 2021 at 09:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Now that commit f003a7522 did away with the partitioned_rels fields,\n>> my original motivation for doing $SUBJECT is gone. It might still be\n>> worth doing, but I'm not planning to tackle it right now.\n\n> I'm not sure if the misuse of Lists to store non-Node types should be\n> all that surprising.\n\nWell, as I tried to clarify upthread, it's only a problem if the list\nis a subfield of a recognized Node type. Random private data structures\ncan and do contain lists of $whatever. 
But if you put something in a\nNode type then you'd better be prepared to teach backend/nodes/*.c about\nit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Feb 2021 18:47:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Should we make Bitmapsets a kind of Node?" } ]
[ { "msg_contents": "Hello,\n\nIn tablespace.c, a comment explains that DROP TABLESPACE can fail\nbogusly because of Windows file semantics:\n\n * XXX On Windows, an unlinked file persists in the directory listing\n * until no process retains an open handle for the file. The DDL\n * commands that schedule files for unlink send invalidation messages\n * directing other PostgreSQL processes to close the files. DROP\n * TABLESPACE should not give up on the tablespace becoming empty\n * until all relevant invalidation processing is complete.\n\nWhile trying to get the AIO patchset working on more operating\nsystems, this turned out to be a problem. Andres mentioned the new\nProcSignalBarrier stuff as a good way to tackle this, so I tried it\nand it seems to work well so far.\n\nThe idea in this initial version is to tell every process in the\ncluster to close all fds, and then try again. That's a pretty large\nhammer, but it isn't reached on Unix, and with slightly more work it\ncould be made to happen only after 2 failures on Windows. It was\ntempting to try to figure out how to use the sinval mechanism to close\nprecisely the right files instead, but it doesn't look safe to run\nsinval at arbitrary CFI() points. It's easier to see that the\npre-existing closeAllVfds() function has an effect that is local to\nfd.c and doesn't affect the VFDs or SMgrRelations, so any CFI() should\nbe an OK time to run that.\n\nWhile reading the ProcSignalBarrier code, I couldn't resist replacing\nits poll/sleep loop with condition variables.\n\nThoughts?", "msg_date": "Sun, 31 Jan 2021 01:52:43 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" 
}, { "msg_contents": "> While reading the ProcSignalBarrier code, I couldn't resist replacing\n> its poll/sleep loop with condition variables.\n\nOops, that version accidentally added and then removed an unnecessary\nchange due to incorrect commit squashing. Here's a better pair of\npatches.", "msg_date": "Sun, 31 Jan 2021 02:11:06 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "Hi,\n\nThanks for developing this.\n\nOn 2021-01-31 02:11:06 +1300, Thomas Munro wrote:\n> --- a/src/backend/commands/tablespace.c\n> +++ b/src/backend/commands/tablespace.c\n> @@ -520,15 +520,23 @@ DropTableSpace(DropTableSpaceStmt *stmt)\n> \t\t * but we can't tell them apart from important data files that we\n> \t\t * mustn't delete. So instead, we force a checkpoint which will clean\n> \t\t * out any lingering files, and try again.\n> -\t\t *\n> -\t\t * XXX On Windows, an unlinked file persists in the directory listing\n> -\t\t * until no process retains an open handle for the file. The DDL\n> -\t\t * commands that schedule files for unlink send invalidation messages\n> -\t\t * directing other PostgreSQL processes to close the files. DROP\n> -\t\t * TABLESPACE should not give up on the tablespace becoming empty\n> -\t\t * until all relevant invalidation processing is complete.\n> \t\t */\n> \t\tRequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_FORCE | CHECKPOINT_WAIT);\n> +\t\t/*\n> +\t\t * On Windows, an unlinked file persists in the directory listing until\n> +\t\t * no process retains an open handle for the file. The DDL\n> +\t\t * commands that schedule files for unlink send invalidation messages\n> +\t\t * directing other PostgreSQL processes to close the files, but nothing\n> +\t\t * guarantees they'll be processed in time. 
So, we'll also use a\n> +\t\t * global barrier to ask all backends to close all files, and wait\n> +\t\t * until they're finished.\n> +\t\t */\n> +#if defined(WIN32) || defined(USE_ASSERT_CHECKING)\n> +\t\tLWLockRelease(TablespaceCreateLock);\n> +\t\tWaitForProcSignalBarrier(EmitProcSignalBarrier(PROCSIGNAL_BARRIER_SMGRRELEASE));\n> +\t\tLWLockAcquire(TablespaceCreateLock, LW_EXCLUSIVE);\n> +#endif\n> +\t\t/* And now try again. */\n> \t\tif (!destroy_tablespace_directories(tablespaceoid, false))\n> \t\t{\n> \t\t\t/* Still not empty, the files must be important then */\n\nIt's probably rare enough to care, but this still made me think whether\nwe could avoid the checkpoint at all somehow. Requiring an immediate\ncheckpoint for dropping relations is quite a heavy hammer that\npractically cannot be used in production without causing performance\nproblems. But it seems hard to process the fsync deletion queue in\nanother way.\n\n\n> diff --git a/src/backend/storage/smgr/smgr.c b/src/backend/storage/smgr/smgr.c\n> index 4dc24649df..0f8548747c 100644\n> --- a/src/backend/storage/smgr/smgr.c\n> +++ b/src/backend/storage/smgr/smgr.c\n> @@ -298,6 +298,12 @@ smgrcloseall(void)\n> \t\tsmgrclose(reln);\n> }\n> \n> +void\n> +smgrrelease(void)\n> +{\n> +\tmdrelease();\n> +}\n\nProbably should be something like\n\tfor (i = 0; i < NSmgr; i++)\n\t{\n\t\tif (smgrsw[i].smgr_release)\n\t\t\tsmgrsw[i].smgr_release();\n\t}\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 1 Feb 2021 11:02:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "On Tue, Feb 2, 2021 at 8:02 AM Andres Freund <andres@anarazel.de> wrote:\n> It's probably rare enough to care, but this still made me think whether\n> we could avoid the checkpoint at all somehow. 

Requiring an immediate\n> checkpoint for dropping relations is quite a heavy hammer that\n> practically cannot be used in production without causing performance\n> problems. But it seems hard to process the fsync deletion queue in\n> another way.\n\nRight, the checkpoint itself is probably worse than this\n\"close-all-your-files!\" thing in some cases (though it seems likely\nthat once we start using ProcSignalBarrier we're going to find out\nabout places that take a long time to get around to processing them\nand that's going to be a thing to work on). As a separate project,\nperhaps we should find some other way to stop GetNewRelFileNode() from\nrecycling the relfilenode until the next checkpoint, so that we can\nunlink the file eagerly at commit time, while still avoiding the\nhazard described in the comment for mdunlink(). A straw-man idea\nwould be to touch a file under PGDATA/pg_dropped and fsync it so it\nsurvives a power outage, have checkpoints clean that out, and have\nGetNewRelFileNode() to try access() it. Then we wouldn't need the\ncheckpoint here, I think; we'd just need this ProcSignalBarrier for\nWindows.\n\n> > +void\n> > +smgrrelease(void)\n> > +{\n> > + mdrelease();\n> > +}\n>\n> Probably should be something like\n> for (i = 0; i < NSmgr; i++)\n> {\n> if (smgrsw[i].smgr_release)\n> smgrsw[i].smgr_release();\n> }\n\nFixed. Thanks!", "msg_date": "Tue, 2 Feb 2021 11:16:52 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "On Tue, Feb 2, 2021 at 11:16 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ... A straw-man idea\n> would be to touch a file under PGDATA/pg_dropped and fsync it so it\n> survives a power outage, have checkpoints clean that out, and have\n> GetNewRelFileNode() to try access() it. 
...\n\nI should add, the reason I mentioned fsyncing it is that in another\nthread we've also discussed making the end-of-crash-recovery\ncheckpoint optional, and then I think you'd need to be sure you can\navoid reusing the relfilenode even after crash recovery, because if\nyou recycle the relfilenode and then crash again you'd be exposed to\nthat hazard during the 2nd run thought recovery. But perhaps it's\nenough to recreate the hypothetical pg_dropped file while replaying\nthe drop-relation record. Not sure, would need more thought.\n\n\n", "msg_date": "Tue, 2 Feb 2021 12:26:09 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "Here's a new version. The condition variable patch 0001 fixes a bug:\nCleanupProcSignalState() also needs to broadcast. The hunk that\nallows the interrupt handlers to use CVs while you're already waiting\non a CV is now in a separate patch 0002. I'm thinking of going ahead\nand committing those two. The 0003 patch to achieve $SUBJECT needs\nmore discussion.", "msg_date": "Sat, 27 Feb 2021 16:14:40 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "On Sat, Feb 27, 2021 at 4:14 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's a new version. The condition variable patch 0001 fixes a bug:\n> CleanupProcSignalState() also needs to broadcast. The hunk that\n> allows the interrupt handlers to use CVs while you're already waiting\n> on a CV is now in a separate patch 0002. I'm thinking of going ahead\n> and committing those two.\n\nDone. 
Of course nothing in the tree reaches any of this code yet.\nIt'll be exercised by cfbot in this thread and (I assume) Amul's\n\"ALTER SYSTEM READ { ONLY | WRITE }\" thread.\n\n> The 0003 patch to achieve $SUBJECT needs\n> more discussion.\n\nRebased.\n\nThe more I think about it, the more I think that this approach is good\nenough for an initial solution to the problem. It only affects\nWindows, dropping tablespaces is hopefully rare, and it's currently\nbroken on that OS. That said, it's complex enough, and I guess more\nto the point, enough of a compromise, that I'm hoping to get some\nexplicit consensus about that.\n\nA better solution would probably have to be based on the sinval queue,\nsomehow. Perhaps with a new theory or rule making it safe to process\nat every CFI(), or by deciding that we're prepared have the operation\nwait arbitrarily long until backends reach a point where it is known\nto be safe (probably near ProcessClientReadInterrupt()'s call to\nProcessCatchupInterrupt()), or by inventing a new kind of lightweight\n\"sinval peek\" that is safe to run at every CFI() point, being based on\nreading (but not consuming!) the sinval queue and performing just\nvfd-close of referenced smgr relations in this case. The more I think\nabout all that complexity for a super rare event on only one OS, the\nmore I want to just do it the stupid way and close all the fds.\nRobert opined similarly in an off-list chat about this problem.", "msg_date": "Mon, 1 Mar 2021 17:46:03 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "> On 1 Mar 2021, at 05:46, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n>> The 0003 patch to achieve $SUBJECT needs\n>> more discussion.\n> \n> Rebased.\n> \n> The more I think about it, the more I think that this approach is good\n> enough for an initial solution to the problem. 
It only affects\n> Windows, dropping tablespaces is hopefully rare, and it's currently\n> broken on that OS. That said, it's complex enough, and I guess more\n> to the point, enough of a compromise, that I'm hoping to get some\n> explicit consensus about that.\n> \n> A better solution would probably have to be based on the sinval queue,\n> somehow. Perhaps with a new theory or rule making it safe to process\n> at every CFI(), or by deciding that we're prepared have the operation\n> wait arbitrarily long until backends reach a point where it is known\n> to be safe (probably near ProcessClientReadInterrupt()'s call to\n> ProcessCatchupInterrupt()), or by inventing a new kind of lightweight\n> \"sinval peek\" that is safe to run at every CFI() point, being based on\n> reading (but not consuming!) the sinval queue and performing just\n> vfd-close of referenced smgr relations in this case. The more I think\n> about all that complexity for a super rare event on only one OS, the\n> more I want to just do it the stupid way and close all the fds.\n> Robert opined similarly in an off-list chat about this problem.\n\nI don't know Windows at all so I can't really comment on that portion, but from\nmy understanding of procsignalbarriers I think this seems right. No tests\nbreak when forcing the codepath to run on Linux and macOS.\n\nShould this be performed in tblspc_redo as well for the similar case?\n\n+#if defined(WIN32) || defined(USE_ASSERT_CHECKING)\n\nIs the USE_ASSERT_CHECKING clause to exercise the code a more frequent than\njust on Windows? That could warrant a quick word in the comment if so IMO to\navoid confusion.\n\n-ProcessBarrierPlaceholder(void)\n+ProcessBarrierSmgrRelease(void)\n {\n-\t/*\n-\t * XXX. This is just a placeholder until the first real user of this\n-\t * machinery gets committed. Rename PROCSIGNAL_BARRIER_PLACEHOLDER to\n-\t * PROCSIGNAL_BARRIER_SOMETHING_ELSE where SOMETHING_ELSE is something\n-\t * appropriately descriptive. 
Get rid of this function and instead have\n-\t * ProcessBarrierSomethingElse. Most likely, that function should live in\n-\t * the file pertaining to that subsystem, rather than here.\n-\t *\n-\t * The return value should be 'true' if the barrier was successfully\n-\t * absorbed and 'false' if not. Note that returning 'false' can lead to\n-\t * very frequent retries, so try hard to make that an uncommon case.\n-\t */\n+\tsmgrrelease();\n\nShould this instead be in smgr.c to avoid setting a precedent for procsignal.c\nto be littered with absorption functions?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 1 Mar 2021 11:06:40 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "On Mon, Mar 1, 2021 at 11:07 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> I don't know Windows at all so I can't really comment on that portion, but from\n> my understanding of procsignalbarriers I think this seems right. No tests\n> break when forcing the codepath to run on Linux and macOS.\n\nHey Daniel,\n\nThanks for looking!\n\n> Should this be performed in tblspc_redo as well for the similar case?\n\nAh. Yes. Added (not tested yet).\n\n> +#if defined(WIN32) || defined(USE_ASSERT_CHECKING)\n>\n> Is the USE_ASSERT_CHECKING clause to exercise the code a more frequent than\n> just on Windows? That could warrant a quick word in the comment if so IMO to\n> avoid confusion.\n\nNote added.\n\n> -ProcessBarrierPlaceholder(void)\n> +ProcessBarrierSmgrRelease(void)\n> {\n> - /*\n> - * XXX. This is just a placeholder until the first real user of this\n> - * machinery gets committed. Rename PROCSIGNAL_BARRIER_PLACEHOLDER to\n> - * PROCSIGNAL_BARRIER_SOMETHING_ELSE where SOMETHING_ELSE is something\n> - * appropriately descriptive. Get rid of this function and instead have\n> - * ProcessBarrierSomethingElse. 
Most likely, that function should live in\n> - * the file pertaining to that subsystem, rather than here.\n> - *\n> - * The return value should be 'true' if the barrier was successfully\n> - * absorbed and 'false' if not. Note that returning 'false' can lead to\n> - * very frequent retries, so try hard to make that an uncommon case.\n> - */\n> + smgrrelease();\n>\n> Should this instead be in smgr.c to avoid setting a precedent for procsignal.c\n> to be littered with absorption functions?\n\nDone.", "msg_date": "Tue, 2 Mar 2021 00:54:49 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "On Tue, Feb 2, 2021 at 11:16 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Right, the checkpoint itself is probably worse than this\n> \"close-all-your-files!\" thing in some cases [...]\n\nI've been wondering what obscure hazards these \"tombstone\" (for want\nof a better word) files guard against, besides the one described in\nthe comments for mdunlink(). I've been thinking about various\nschemes that can be summarised as \"put the tombstones somewhere else\",\nbut first... this is probably a stupid question, but what would break\nif we just ... turned all this stuff off when wal_level is high enough\n(as it is by default)?", "msg_date": "Tue, 2 Mar 2021 17:28:32 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" 
}, { "msg_contents": "> On 1 Mar 2021, at 12:54, Thomas Munro <thomas.munro@gmail.com> wrote:\n\nBased on my (limited) experience with procsignalbarriers I think this patch is\ncorrect; the general rule-of-thumb of synchronizing backend state on barrier\nabsorption doesn't really apply in this case, literally all we want is to know\nthat we've hit one interrupt and performed removals.\n\n>> +#if defined(WIN32) || defined(USE_ASSERT_CHECKING)\n>> \n>> Is the USE_ASSERT_CHECKING clause to exercise the code a more frequent than\n>> just on Windows? That could warrant a quick word in the comment if so IMO to\n>> avoid confusion.\n> \n> Note added.\n\nSince there is no way to make the first destroy_tablespace_directories call\nfail on purpose in testing, the assertion coverage may have limited use though?\n\nI don't have a Windows env handy right now, but everything works as expected\nwhen testing this on Linux and macOS by inducing the codepath. Will try to do\nsome testing in Windows as well.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 3 Mar 2021 16:18:39 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "On Thu, Mar 4, 2021 at 4:18 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 1 Mar 2021, at 12:54, Thomas Munro <thomas.munro@gmail.com> wrote:\n> Based on my (limited) experience with procsignalbarriers I think this patch is\n\nHelp wanted: must have at least 14 years experience with\nProcSignalBarrier! 

Yeah, I'm still figuring out the programming rules\nhere myself...\n\n> correct; the general rule-of-thumb of synchronizing backend state on barrier\n> absorption doesn't really apply in this case, literally all we want is to know\n> that we've hit one interrupt and performed removals.\n\nI guess the way to think about it is that the desired state is \"you\nhave no files open that have been unlinked\".\n\n> >> +#if defined(WIN32) || defined(USE_ASSERT_CHECKING)\n> >>\n> >> Is the USE_ASSERT_CHECKING clause to exercise the code a more frequent than\n> >> just on Windows? That could warrant a quick word in the comment if so IMO to\n> >> avoid confusion.\n> >\n> > Note added.\n>\n> Since there is no way to get make the first destroy_tablespace_directories call\n> fail on purpose in testing, the assertion coverage may have limited use though?\n\nThere is: all you have to do is drop a table, and then drop the\ntablespace that held it without a checkpoint in between. That\nscenario is exercised by the \"tablespace\" regression test, and you can\nreach it manually like this on a Unix system, with assertions enabled.\nOn a Windows box, I believe it should be reached even if there was a\ncheckpoint in between (or maybe you need to have a second session that\nhas accessed the table, not sure, no actual Windows here I just fling\nstuff at CI). 
I've added an elog() message to show the handler\nrunning in each process in my cluster, so you can see it (it's also\ninstructive to put a sleep in there):\n\nMy psql session:\n\n postgres=# create tablespace ts location '/tmp/ts';\n CREATE TABLESPACE\n postgres=# create table t () tablespace ts;\n CREATE TABLE\n postgres=# drop table t;\n DROP TABLE\n postgres=# drop tablespace ts;\n\nAt this point the log shows:\n\n 2021-03-04 09:54:33.429 NZDT [239811] LOG: ProcessBarrierSmgrRelease()\n 2021-03-04 09:54:33.429 NZDT [239821] LOG: ProcessBarrierSmgrRelease()\n 2021-03-04 09:54:33.429 NZDT [239821] STATEMENT: drop tablespace ts;\n 2021-03-04 09:54:33.429 NZDT [239814] LOG: ProcessBarrierSmgrRelease()\n 2021-03-04 09:54:33.429 NZDT [239816] LOG: ProcessBarrierSmgrRelease()\n 2021-03-04 09:54:33.429 NZDT [239812] LOG: ProcessBarrierSmgrRelease()\n 2021-03-04 09:54:33.429 NZDT [239813] LOG: ProcessBarrierSmgrRelease()\n\nNow back to my session:\n\n DROP TABLESPACE\n postgres=#\n\n> I don't have a Windows env handy right now, but everything works as expected\n> when testing this on Linux and macOS by inducing the codepath. Will try to do\n> some testing in Windows as well.\n\nThanks!\n\nOne question on my mind is: since this wait is interruptible (if you\nget sick of waiting for a slow-to-respond process you can hit ^C, or\nstatement_timeout can presumably do it for you), do we leave things in\na sane state on error (catalog changes rolled back, no damage done on\ndisk)? There is actually a nasty race there already (\"If we crash\nbefore committing...\"), and we need to make sure we don't make that\nwindow wider. 
One thing I am pretty sure of is that it's never OK to\nwait for a ProcSignalBarrier when you're not interruptible; for one\nthing, you won't process the request yourself (self deadlock) and for\nanother, it would be hypocritical of you to expect others to process\ninterrupts when you can't (interprocess deadlock); perhaps there\nshould be an assertion about that, but it's pretty obvious if you\nscrew that up: it hangs. That's why I release and reacquire that\nLWLock. But does that break some logic?\n\nAndres just pointed me at the following CI failure on the AIO branch,\nwhich seems to be due to a variant of this problem involving DROP\nDATABASE.\n\nhttps://cirrus-ci.com/task/6730034573475840?command=windows_worker_buf#L7\n\nDuh, of course, we need the same thing in that case, and also in its\nredo routine.\n\nAnd... the same problem must also exist for the closely related ALTER\nDATABASE ... SET TABLESPACE. I guess these cases are pretty unlikely\nto fail without the AIO branch's funky \"io worker\" processes that love\nhoarding file descriptors, but I suppose it must be possible for the\nbgwriter to have a relevant file descriptor open at the wrong time on\nmaster today.\n\nOne thing I haven't tried to do yet is improve the \"pipelining\" by\nissuing the request sooner, in the cases where we do this stuff\nunconditionally.", "msg_date": "Thu, 4 Mar 2021 11:19:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" 
}, { "msg_contents": "Hi,\n\nOn 2021-03-02 00:54:49 +1300, Thomas Munro wrote:\n> Subject: [PATCH v6] Use a global barrier to fix DROP TABLESPACE on Windows.\n\nAfter finally getting the windows CI tests to work on AIO I noticed that\nthe windows tests show the following:\nhttps://cirrus-ci.com/task/4536820663844864\n\n...\n============================================================\nChecking dummy_seclabel\nC:/Users/ContainerAdministrator/AppData/Local/Temp/cirrus-ci-build/Debug/pg_regress/pg_regress --bindir=C:/Users/ContainerAdministrator/AppData/Local/Temp/cirrus-ci-build/Debug/psql --dbname=contrib_regression dummy_seclabel\n(using postmaster on localhost, default port)\n============== dropping database \"contrib_regression\" ==============\nWARNING: could not remove file or directory \"base/16384\": Directory not empty\n...\n\nwhich makes sense - the exact same problem exists for DROP DATABASE.\n\n\nI suspect it makes sense to tackle the problem as part of the same\ncommit, but I'm not opposed to splitting it if that makes sense...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Mar 2021 14:21:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "On Tue, Mar 2, 2021 at 5:28 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Feb 2, 2021 at 11:16 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Right, the checkpoint itself is probably worse than this\n> > \"close-all-your-files!\" thing in some cases [...]\n>\n> I've been wondering what obscure hazards these \"tombstone\" (for want\n> of a better word) files guard against, besides the one described in\n> the comments for mdunlink(). I've been thinking about various\n> schemes that can be summarised as \"put the tombstones somewhere else\",\n> but first... this is probably a stupid question, but what would break\n> if we just ... 
turned all this stuff off when wal_level is high enough\n> (as it is by default)?\n>\n> [0001-Make-relfile-tombstone-files-conditional-on-WAL-leve.not-for-cfbot-patch]\n\nI had the opportunity to ask the inventor of UNLOGGED TABLEs, who\nanswered my question with another question, something like, \"yeah, but\nwhat about UNLOGGED TABLEs?\". It seems to me that any schedule where\na relfilenode is recycled should be recovered correctly, no matter\nwhat sequence of persistence levels is involved. If you dropped an\nUNLOGGED table, then its init fork is removed on commit, so a\npermanent table created later with the same relfilenode has no init\nfork and no data is eaten; the other way around you get an init fork,\nand your table is reset on crash recovery, as it should be. It works\nbecause we still log and replay the create/drop; it doesn't matter\nthat we don't log the table's data as far as I can see so far.\n\n\n", "msg_date": "Thu, 4 Mar 2021 11:54:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "On Thu, Mar 4, 2021 at 11:54 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I've been wondering what obscure hazards these \"tombstone\" (for want\n> > of a better word) files guard against, besides the one described in\n> > the comments for mdunlink(). I've been thinking about various\n> > schemes that can be summarised as \"put the tombstones somewhere else\",\n> > but first... this is probably a stupid question, but what would break\n> > if we just ... 
turned all this stuff off when wal_level is high enough\n> > (as it is by default)?\n\nThe "how-to-make-it-so-that-we-don't-need-a-checkpoint" subtopic is\nhereby ejected from this thread, and moved over here:\nhttps://commitfest.postgresql.org/33/3030/\n\n\n", "msg_date": "Fri, 5 Mar 2021 11:08:22 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "> On 3 Mar 2021, at 23:19, Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Mar 4, 2021 at 4:18 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n>> Since there is no way to get make the first destroy_tablespace_directories call\n>> fail on purpose in testing, the assertion coverage may have limited use though?\n> \n> There is: all you have to do is drop a table, and then drop the\n> tablespace that held it without a checkpoint in between.\n\nOf course, that makes a lot of sense.\n\n> One thing I am pretty sure of is that it's never OK to\n> wait for a ProcSignalBarrier when you're not interruptible;\n\nAgreed.\n\n> for one thing, you won't process the request yourself (self deadlock) and for\n> another, it would be hypocritical of you to expect others to process interrupts\n> when you can't (interprocess deadlock); perhaps there should be an assertion\n> about that, but it's pretty obvious if you screw that up: it hangs.\n\n\nAn assertion for interrupts not being held off doesn't seem like a terrible\nidea, if only to document the intent of the code for readers.\n\n> That's why I release and reacquire that LWLock. But does that break some\n> logic?\n\n\nOne clear change to current behavior is naturally that a concurrent\nTablespaceCreateDbspace can happen while barrier absorption is performed.\nGiven where we are that might not be a problem, but I don't have enough\ncaffeine at the moment to conclude anything there. 

Testing by inducing\nconcurrent calls while absorption was stalled didn't trigger anything, but I'm\nsure I didn't test every scenario. Do you see anything off the cuff?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Sat, 6 Mar 2021 00:10:52 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "On Sat, Mar 6, 2021 at 12:10 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 3 Mar 2021, at 23:19, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > That's why I release and reacquire that LWLock. But does that break some\n> > logic?\n>\n> One clear change to current behavior is naturally that a concurrent\n> TablespaceCreateDbspace can happen while barrier absorption is performed.\n> Given where we are that might not be a problem, but I don't have enough\n> caffeine at the moment to conclude anything there. Testing by inducing\n> concurrent calls while absorption was stalled didn't trigger anything, but I'm\n> sure I didn't test every scenario. Do you see anything off the cuff?\n\nNow I may have the opposite problem (too much coffee) but it looks\nlike it should work about as well as it does today. At this new point\nwhere we released the LWLock, all we've really done is possibly unlink\nsome empty database directories in destroy_tablespace_directories(),\nand that's harmless, they'll be recreated on demand if we abandon\nship. If TablespaceCreateDbspace() happened while we were absorbing\nthe barrier and not holding the lock in this new code, then a\nconcurrent mdcreate() is running and so we have a race where we'll\nagain try to drop all empty directories, and it'll try to create its\nrelfile in the new empty directory, and one of us will fail (possibly\nwith an ugly ENOENT error message). 

But that's already the case in\nthe master branch: mdcreate() could have run TablespaceCreateDbspace()\nbefore we acquire the lock in the master branch, and (with\npathological enough scheduling) it could reach its attempt to create\nits relfile after DropTableSpace() has unlinked the empty directory.\n\nThe interlocking here is hard to follow. I wonder why we don't use\nheavyweight locks to do per-tablespace interlocking between\nDefineRelation() and DropTableSpace(). I'm sure this question is\nhopelessly naive and I should probably go and read some history.\n\n\n", "msg_date": "Sat, 20 Mar 2021 17:47:47 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "Just as an FYI: this entry currently fails with \"Timed out!\" on cfbot\nbecause of an oversight in the master branch[1], AFAICS. It should\npass again once that's fixed.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGLah2w1pWKHonZP_%2BEQw69%3Dq56AHYwCgEN8GDzsRG_Hgw%40mail.gmail.com\n\n\n", "msg_date": "Mon, 14 Jun 2021 12:24:14 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "On Sun, Jun 13, 2021 at 8:25 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Just as an FYI: this entry currently fails with \"Timed out!\" on cfbot\n> because of an oversight in the master branch[1], AFAICS. It should\n> pass again once that's fixed.\n>\n> [1] https://www.postgresql.org/message-id/CA%2BhUKGLah2w1pWKHonZP_%2BEQw69%3Dq56AHYwCgEN8GDzsRG_Hgw%40mail.gmail.com\n\nThat's fixed now. So what should we do about this patch? This is a\nbug, so it would be nice to do *something*. 
I don't really like the\nfact that this makes the behavior contingent on USE_ASSERT_CHECKING,\nand I suggest that you make a new symbol like USE_BARRIER_SMGR_RELEASE\nwhich by default gets defined on WIN32, but can be defined elsewhere\nif you want (see the treatment of EXEC_BACKEND in pg_config_manual.h).\nFurthermore, I can't see back-patching this, given that it would be\nthe very first use of the barrier machinery. But I think it would be\ngood to get something into master, because then we'd actually be using\nthis procsignalbarrier stuff for something. On a good day we've fixed\na bug. On a bad day we'll learn something new about how\nprocsignalbarrier needs to work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Jan 2022 16:22:53 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" }, { "msg_contents": "On Thu, Jan 6, 2022 at 10:23 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> That's fixed now. So what should we do about this patch? This is a\n> bug, so it would be nice to do *something*. I don't really like the\n> fact that this makes the behavior contingent on USE_ASSERT_CHECKING,\n> and I suggest that you make a new symbol like USE_BARRIER_SMGR_RELEASE\n> which by default gets defined on WIN32, but can be defined elsewhere\n> if you want (see the treatment of EXEC_BACKEND in pg_config_manual.h).\n\nOk, done like that.\n\n> Furthermore, I can't see back-patching this, given that it would be\n> the very first use of the barrier machinery. But I think it would be\n> good to get something into master, because then we'd actually be using\n> this procsignalbarrier stuff for something. On a good day we've fixed\n> a bug. On a bad day we'll learn something new about how\n> procsignalbarrier needs to work.\n\nAgreed.\n\nPushed. 
The basic Windows/tablespace bug seen occasionally in CI[1]\nshould now be fixed.\n\nFor the sake of the archives, here's a link to the ongoing discussion\nabout further potential uses of this mechanism:\n\nhttps://www.postgresql.org/message-id/flat/20220209220004.kb3dgtn2x2k2gtdm%40alap3.anarazel.de\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJp-m8uAD_wS7%2BdkTgif013SNBSoJujWxvRUzZ1nkoUyA%40mail.gmail.com\n\n\n", "msg_date": "Sat, 12 Feb 2022 10:22:50 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix DROP TABLESPACE on Windows with ProcSignalBarrier?" } ]
[ { "msg_contents": "Doc: improve documentation of oid columns that can be zero.\n\npg_attribute.atttypid\nZero if column is dropped.\n\npg_class.relam\nCan be zero, e.g. for views.\n\npg_depend.classid\nZero for pinned objects.\n\npg_language.lanplcallfoid\nZero for internal languages.\n\npg_operator.oprcode\nZero if none.\n\npg_operator.oprcom\nZero if none.\n\npg_operator.oprjoin\nZero if none.\n\npg_operator.oprnegate\nZero if none.\n\npg_operator.oprrest\nZero if none.\n\npg_operator.oprresult\nZero if none.\n\npg_policy.polroles\nArray with a zero element if none.\n\npg_shdepend.classid\nZero for pinned objects (deptype='p'),\nmeaning there is no dependent object.\n\npg_shdepend.objid\nZero if none.\n\npg_trigger.tgconstrindid\nZero if none.\n\npg_trigger.tgconstrrelid\nZero if none.\n\ndoc/src/sgml/catalogs.sgml | 34 ++++++++++++++++++----------------\n1 file changed, 18 insertions(+), 16 deletions(-)", "msg_date": "Sun, 31 Jan 2021 14:22:32 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "[PATCH] Doc: improve documentation of oid columns that can be zero." } ]
[ { "msg_contents": "Please ignore previous email, the attached file was 0 bytes.\nHere comes the patch again, now including data.\n\n--\n\nDoc: improve documentation of oid columns that can be zero.\n\npg_attribute.atttypid\nZero if column is dropped.\n\npg_class.relam\nCan be zero, e.g. for views.\n\npg_depend.classid\nZero for pinned objects.\n\npg_language.lanplcallfoid\nZero for internal languages.\n\npg_operator.oprcode\nZero if none.\n\npg_operator.oprcom\nZero if none.\n\npg_operator.oprjoin\nZero if none.\n\npg_operator.oprnegate\nZero if none.\n\npg_operator.oprrest\nZero if none.\n\npg_operator.oprresult\nZero if none.\n\npg_policy.polroles\nArray with a zero element if none.\n\npg_shdepend.classid\nZero for pinned objects (deptype='p'),\nmeaning there is no dependent object.\n\npg_shdepend.objid\nZero if none.\n\npg_trigger.tgconstrindid\nZero if none.\n\npg_trigger.tgconstrrelid\nZero if none.", "msg_date": "Sun, 31 Jan 2021 14:27:53 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?[PATCH]_Doc:_improve_documentation_of_oid_columns_that_can_be_?=\n =?UTF-8?Q?zero._(correct_version)?=" }, { "msg_contents": "On Sun, Jan 31, 2021, at 10:27 AM, Joel Jacobson wrote:\n> Here comes the patch again, now including data.\nJoel, register this patch into the next CF [1] so we don't lose track of it.\n\n\n[1] https://commitfest.postgresql.org/32/\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Sun, Jan 31, 2021, at 10:27 AM, Joel Jacobson wrote:Here comes the patch again, now including data.Joel, register this patch into the next CF [1] so we don't lose track of it.[1] https://commitfest.postgresql.org/32/--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Mon, 01 Feb 2021 23:10:24 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_Doc:_improve_documentation_of_oid_columns_that_can?=\n 
=?UTF-8?Q?_be_zero._(correct_version)?=" }, { "msg_contents": "Hi Euler,\n\nI've tried to login to the CF interface a couple of times now, but seems to have lost my password.\nI've tried to use the \"Password reset\" form [1], but I don't get any email.\nThe email is correct, because when I try to re-register it says it's taken.\n\nNot sure who I should ask for help. Anyone?\n\n/Joel\n\n[1] https://www.postgresql.org/account/reset/\n\nOn Tue, Feb 2, 2021, at 03:10, Euler Taveira wrote:\n> On Sun, Jan 31, 2021, at 10:27 AM, Joel Jacobson wrote:\n>> Here comes the patch again, now including data.\n> Joel, register this patch into the next CF [1] so we don't lose track of it.\n> \n> \n> [1] https://commitfest.postgresql.org/32/\n> \n> \n> --\n> Euler Taveira\n> EDB https://www.enterprisedb.com/\n> \n\nKind regards,\n\nJoel\n\nHi Euler,I've tried to login to the CF interface a couple of times now, but seems to have lost my password.I've tried to use the \"Password reset\" form [1], but I don't get any email.The email is correct, because when I try to re-register it says it's taken.Not sure who I should ask for help. 
Anyone?/Joel[1] https://www.postgresql.org/account/reset/On Tue, Feb 2, 2021, at 03:10, Euler Taveira wrote:On Sun, Jan 31, 2021, at 10:27 AM, Joel Jacobson wrote:Here comes the patch again, now including data.Joel, register this patch into the next CF [1] so we don't lose track of it.[1] https://commitfest.postgresql.org/32/--Euler TaveiraEDB   https://www.enterprisedb.com/Kind regards,Joel", "msg_date": "Tue, 02 Feb 2021 09:13:43 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_Doc:_improve_documentation_of_oid_columns_that_can?=\n =?UTF-8?Q?_be_zero._(correct_version)?=" }, { "msg_contents": "On Tue, Feb 2, 2021, at 5:13 AM, Joel Jacobson wrote:\n> I've tried to login to the CF interface a couple of times now, but seems to have lost my password.\n> I've tried to use the \"Password reset\" form [1], but I don't get any email.\n> The email is correct, because when I try to re-register it says it's taken.\n> \n> Not sure who I should ask for help. Anyone?\nYou should probably email: webmaster (at) postgresql (dot) org\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Tue, Feb 2, 2021, at 5:13 AM, Joel Jacobson wrote:I've tried to login to the CF interface a couple of times now, but seems to have lost my password.I've tried to use the \"Password reset\" form [1], but I don't get any email.The email is correct, because when I try to re-register it says it's taken.Not sure who I should ask for help. 
Anyone?You should probably email: webmaster (at) postgresql (dot) org--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Tue, 02 Feb 2021 08:34:30 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_Doc:_improve_documentation_of_oid_columns_that_can?=\n =?UTF-8?Q?_be_zero._(correct_version)?=" }, { "msg_contents": "On Tue, Feb 2, 2021, at 12:34, Euler Taveira wrote:\n>You should probably email: webmaster (at) postgresql (dot) org\n\nThanks, done.\n\n/Joel\nOn Tue, Feb 2, 2021, at 12:34, Euler Taveira wrote:>You should probably email: webmaster (at) postgresql (dot) orgThanks, done./Joel", "msg_date": "Tue, 02 Feb 2021 13:50:42 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_Doc:_improve_documentation_of_oid_columns_that_can?=\n =?UTF-8?Q?_be_zero._(correct_version)?=" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> Doc: improve documentation of oid columns that can be zero.\n\nSince this is pretty closely tied to the catalog-foreign-key work,\nI went ahead and reviewed/pushed it. The zero notations now match\nup with what we'd found in the other thread.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Feb 2021 16:17:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re:\n =?UTF-8?Q?[PATCH]_Doc:_improve_documentation_of_oid_columns_that_can_be_?=\n =?UTF-8?Q?zero._(correct_version)?=" } ]
[ { "msg_contents": "Implementation of subscripting for jsonb\n\nSubscripting for jsonb does not support slices, does not have a limit for the\nnumber of subscripts, and an assignment expects a replace value to have jsonb\ntype. There is also one functional difference between assignment via\nsubscripting and assignment via jsonb_set(). When an original jsonb container\nis NULL, the subscripting replaces it with an empty jsonb and proceeds with\nan assignment.\n\nFor the sake of code reuse, we rearrange some parts of jsonb functionality\nto allow the usage of the same functions for jsonb_set and assign subscripting\noperation.\n\nThe original idea belongs to Oleg Bartunov.\n\nCatversion is bumped.\n\nDiscussion: https://postgr.es/m/CA%2Bq6zcV8qvGcDXurwwgUbwACV86Th7G80pnubg42e-p9gsSf%3Dg%40mail.gmail.com\nDiscussion: https://postgr.es/m/CA%2Bq6zcX3mdxGCgdThzuySwH-ApyHHM-G4oB1R0fn0j2hZqqkLQ%40mail.gmail.com\nDiscussion: https://postgr.es/m/CA%2Bq6zcVDuGBv%3DM0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w%40mail.gmail.com\nDiscussion: https://postgr.es/m/CA%2Bq6zcVovR%2BXY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA%40mail.gmail.com\nAuthor: Dmitry Dolgov\nReviewed-by: Tom Lane, Arthur Zakirov, Pavel Stehule, Dian M Fay\nReviewed-by: Andrew Dunstan, Chapman Flack, Merlin Moncure, Peter Geoghegan\nReviewed-by: Alvaro Herrera, Jim Nasby, Josh Berkus, Victor Wagner\nReviewed-by: Aleksander Alekseev, Robert Haas, Oleg Bartunov\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/676887a3b0b8e3c0348ac3f82ab0d16e9a24bd43\n\nModified Files\n--------------\ndoc/src/sgml/json.sgml | 51 +++++\nsrc/backend/utils/adt/Makefile | 1 +\nsrc/backend/utils/adt/jsonb_util.c | 72 ++++++-\nsrc/backend/utils/adt/jsonbsubs.c | 412 ++++++++++++++++++++++++++++++++++++\nsrc/backend/utils/adt/jsonfuncs.c | 188 ++++++++--------\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_proc.dat | 4 +\nsrc/include/catalog/pg_type.dat | 3 +-\nsrc/include/utils/jsonb.h | 6 
+-\nsrc/test/regress/expected/jsonb.out | 272 +++++++++++++++++++++++-\nsrc/test/regress/sql/jsonb.sql | 84 +++++++-\nsrc/tools/pgindent/typedefs.list | 1 +\n12 files changed, 988 insertions(+), 108 deletions(-)", "msg_date": "Sun, 31 Jan 2021 20:54:27 +0000", "msg_from": "Alexander Korotkov <akorotkov@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Implementation of subscripting for jsonb" }, { "msg_contents": "On 31/01/2021 22:54, Alexander Korotkov wrote:\n> Implementation of subscripting for jsonb\n\nThe Itanium and sparc64 buildfarm members didn't like this, and are \ncrashing at \"select ('123'::jsonb)['a'];\". Unaligned memory access, perhaps?\n\n- Heikki\n\n\n", "msg_date": "Mon, 1 Feb 2021 08:55:52 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: pgsql: Implementation of subscripting for jsonb" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 31/01/2021 22:54, Alexander Korotkov wrote:\n>> Implementation of subscripting for jsonb\n\n> The Itanium and sparc64 buildfarm members didn't like this, and are \n> crashing at \"select ('123'::jsonb)['a'];\". Unaligned memory access, perhaps?\n\nI think I just identified the cause.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Feb 2021 02:05:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Implementation of subscripting for jsonb" }, { "msg_contents": "On Mon, Feb 1, 2021 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > On 31/01/2021 22:54, Alexander Korotkov wrote:\n> >> Implementation of subscripting for jsonb\n>\n> > The Itanium and sparc64 buildfarm members didn't like this, and are\n> > crashing at \"select ('123'::jsonb)['a'];\". 
Unaligned memory access, perhaps?\n>\n> I think I just identified the cause.\n\nThanks again for fixing this.\n\nBTW, I managed to reproduce the issue by compiling with CFLAGS=\"-O0\n-fsanitize=alignment -fsanitize-trap=alignment\" and the patch\nattached.\n\nI can propose the following to catch such issues earlier. We could\nfinish (wrap attribute with macro and apply it to other places with\nmisalignment access if any) and apply the attached patch and make\ncommitfest.cputube.org check patches with CFLAGS=\"-O0\n-fsanitize=alignment -fsanitize-trap=alignment\". What do you think?\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 1 Feb 2021 15:41:59 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Implementation of subscripting for jsonb" }, { "msg_contents": "On Mon, Feb 1, 2021 at 3:41 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Mon, Feb 1, 2021 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > > On 31/01/2021 22:54, Alexander Korotkov wrote:\n> > >> Implementation of subscripting for jsonb\n> >\n> > > The Itanium and sparc64 buildfarm members didn't like this, and are\n> > > crashing at \"select ('123'::jsonb)['a'];\". Unaligned memory access, perhaps?\n> >\n> > I think I just identified the cause.\n>\n> Thanks again for fixing this.\n>\n> BTW, I managed to reproduce the issue by compiling with CFLAGS=\"-O0\n> -fsanitize=alignment -fsanitize-trap=alignment\" and the patch\n> attached.\n>\n> I can propose the following to catch such issues earlier. We could\n> finish (wrap attribute with macro and apply it to other places with\n> misalignment access if any) and apply the attached patch and make\n> commitfest.cputube.org check patches with CFLAGS=\"-O0\n> -fsanitize=alignment -fsanitize-trap=alignment\". What do you think?\n\nThe revised patch is attached. 
The attribute is wrapped into\npg_attribute_no_sanitize_alignment() macro. I've checked it works for\nme with gcc-10 and clang-11.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 1 Feb 2021 16:00:21 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Implementation of subscripting for jsonb" }, { "msg_contents": "[ redirecting to -hackers ]\n\nAlexander Korotkov <aekorotkov@gmail.com> writes:\n>> BTW, I managed to reproduce the issue by compiling with CFLAGS=\"-O0\n>> -fsanitize=alignment -fsanitize-trap=alignment\" and the patch\n>> attached.\n>> I can propose the following to catch such issues earlier. We could\n>> finish (wrap attribute with macro and apply it to other places with\n>> misalignment access if any) and apply the attached patch and make\n>> commitfest.cputube.org check patches with CFLAGS=\"-O0\n>> -fsanitize=alignment -fsanitize-trap=alignment\". What do you think?\n\n> The revised patch is attached. The attribute is wrapped into\n> pg_attribute_no_sanitize_alignment() macro. I've checked it works for\n> me with gcc-10 and clang-11.\n\nI found some time to experiment with this today. It is really nice\nto be able to detect these problems without using obsolete hardware.\nHowever, I have a few issues:\n\n* Why do you recommend -O0? Seems to me we want to test the code\nas we'd normally use it, ie typically -O2.\n\n* -fsanitize-trap=alignment seems to be a clang-ism; gcc won't take it.\nHowever, after some experimenting I found that \"-fno-sanitize-recover=all\"\n(or \"-fno-sanitize-recover=alignment\" if you prefer) produces roughly\nequivalent results on gcc.\n\n* Both clang and gcc seem to be happy with the same spelling of the\nfunction attribute, which is fortunate. However, I seriously doubt\nthat bare \"#ifdef __GNUC__\" is going to be good enough. 
At the very\nleast there's going to need to be a compiler version test in there,\nand we might end up needing to get the configure script involved.\n\n* I think the right place to run such a check is in some buildfarm\nanimals. The cfbot only sees portions of what goes into our tree.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Feb 2021 19:47:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "I wrote:\n> * Both clang and gcc seem to be happy with the same spelling of the\n> function attribute, which is fortunate. However, I seriously doubt\n> that bare \"#ifdef __GNUC__\" is going to be good enough. At the very\n> least there's going to need to be a compiler version test in there,\n> and we might end up needing to get the configure script involved.\n\nAfter digging in gcc's release history, it seems they invented\n\"-fsanitize=alignment\" in GCC 5, so we can make this work for gcc\nby writing\n\n#if __GNUC__ >= 5\n\n(the likely() macro already uses a similar approach). Can't say\nif that's close enough for clang too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Feb 2021 20:20:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "I wrote:\n> After digging in gcc's release history, it seems they invented\n> \"-fsanitize=alignment\" in GCC 5, so we can make this work for gcc\n> by writing\n> #if __GNUC__ >= 5\n> (the likely() macro already uses a similar approach). Can't say\n> if that's close enough for clang too.\n\nUgh, no it isn't: even pretty recent clang releases only define\n__GNUC__ as 4. It looks like we need a separate test on clang's\nversion. 
I looked at their version history and sanitizers seem\nto have come in around clang 7, so I propose the attached (where\nI worked a bit harder on the comment, too).\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 08 Feb 2021 11:49:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "Hi, Tom!\n\nThank you for taking care of this.\n\nOn Mon, Feb 8, 2021 at 3:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> [ redirecting to -hackers ]\n>\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> >> BTW, I managed to reproduce the issue by compiling with CFLAGS=\"-O0\n> >> -fsanitize=alignment -fsanitize-trap=alignment\" and the patch\n> >> attached.\n> >> I can propose the following to catch such issues earlier. We could\n> >> finish (wrap attribute with macro and apply it to other places with\n> >> misalignment access if any) and apply the attached patch and make\n> >> commitfest.cputube.org check patches with CFLAGS=\"-O0\n> >> -fsanitize=alignment -fsanitize-trap=alignment\". What do you think?\n>\n> > The revised patch is attached. The attribute is wrapped into\n> > pg_attribute_no_sanitize_alignment() macro. I've checked it works for\n> > me with gcc-10 and clang-11.\n>\n> I found some time to experiment with this today. It is really nice\n> to be able to detect these problems without using obsolete hardware.\n> However, I have a few issues:\n>\n> * Why do you recommend -O0? Seems to me we want to test the code\n> as we'd normally use it, ie typically -O2.\n\nMy idea was that with -O0 we can see some unaligned accesses, which\nwould be optimized away with -O2. I mean with -O2 we might completely\nskip accessing some pointer, which would be accessed in -O0. However,\nthis situation is probably very rare.\n\n> * I think the right place to run such a check is in some buildfarm\n> animals. 
The cfbot only sees portions of what goes into our tree.\n\nCould we have both cfbot + buildfarm animals?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 9 Feb 2021 03:34:27 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "On Mon, Feb 8, 2021 at 7:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > After digging in gcc's release history, it seems they invented\n> > \"-fsanitize=alignment\" in GCC 5, so we can make this work for gcc\n> > by writing\n> > #if __GNUC__ >= 5\n> > (the likely() macro already uses a similar approach). Can't say\n> > if that's close enough for clang too.\n>\n> Ugh, no it isn't: even pretty recent clang releases only define\n> __GNUC__ as 4. It looks like we need a separate test on clang's\n> version. I looked at their version history and sanitizers seem\n> to have come in around clang 7, so I propose the attached (where\n> I worked a bit harder on the comment, too).\n\nLooks good to me. Thank you for revising!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 9 Feb 2021 03:35:01 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> On Mon, Feb 8, 2021 at 7:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Ugh, no it isn't: even pretty recent clang releases only define\n>> __GNUC__ as 4. It looks like we need a separate test on clang's\n>> version. I looked at their version history and sanitizers seem\n>> to have come in around clang 7, so I propose the attached (where\n>> I worked a bit harder on the comment, too).\n\n> Looks good to me. 
Thank you for revising!\n\nWere you going to push this, or did you expect me to?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Feb 2021 13:46:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "On Tue, Feb 9, 2021 at 1:34 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> Could we have both cfbot + buildfarm animals?\n\nHi Alexander,\n\nFor cfbot, yeah it does seem like a good idea to throw whatever code\nsanitiser stuff we can into the automated tests, especially stuff that\nisn't prone to false alarms. Can you please recommend an exact change\nto apply to:\n\nhttps://github.com/macdice/cfbot/blob/master/cirrus/.cirrus.yml\n\nNote that FreeBSD and macOS are using clang (though you might think\nthe latter is using gcc from its configure output...), and Linux is\nusing gcc.\n\n\n", "msg_date": "Fri, 12 Feb 2021 10:03:55 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "On Thu, Feb 11, 2021 at 9:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > On Mon, Feb 8, 2021 at 7:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Ugh, no it isn't: even pretty recent clang releases only define\n> >> __GNUC__ as 4. It looks like we need a separate test on clang's\n> >> version. I looked at their version history and sanitizers seem\n> >> to have come in around clang 7, so I propose the attached (where\n> >> I worked a bit harder on the comment, too).\n>\n> > Looks good to me. Thank you for revising!\n>\n> Were you going to push this, or did you expect me to?\n\nThank you for noticing. 
I'll commit this today.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 12 Feb 2021 04:36:02 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "Hi, Thomas!\n\nOn Fri, Feb 12, 2021 at 12:04 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Feb 9, 2021 at 1:34 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > Could we have both cfbot + buildfarm animals?\n> For cfbot, yeah it does seem like a good idea to throw whatever code\n> sanitiser stuff we can into the automated tests, especially stuff that\n> isn't prone to false alarms. Can you please recommend an exact change\n> to apply to:\n>\n> https://github.com/macdice/cfbot/blob/master/cirrus/.cirrus.yml\n>\n> Note that FreeBSD and macOS are using clang (though you might think\n> the latter is using gcc from its configure output...), and Linux is\n> using gcc.\n\nThank you for the feedback!\nI'll propose a pull-request at github.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 12 Feb 2021 17:29:48 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "I've updated buildfarm member longfin to use \"-fsanitize=alignment\n-fsanitize-trap=alignment\", and it just got through a run successfully\nwith that. It'd be good perhaps if some other buildfarm owners\nfollowed suit (mumble JIT coverage mumble).\n\nLooking around at other recent reports, it looks like we'll need to tweak\nthe compiler version cutoffs a bit. 
I see for instance that spurfowl,\nwith gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609, is whining:\n\npg_crc32c_sse42.c:24:1: warning: \\342\\200\\230no_sanitize\\342\\200\\231 attribute directive ignored [-Wattributes]\n\nSo maybe it'd better be __GNUC__ >= 6 not __GNUC__ >= 5. I think\nwe can wait a little bit for more reports before messing with that,\nthough.\n\nOnce this does settle, should we consider back-patching so that it's\npossible to run alignment checks in the back branches too?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Feb 2021 12:19:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "On Fri, Feb 12, 2021 at 8:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I've updated buildfarm member longfin to use \"-fsanitize=alignment\n> -fsanitize-trap=alignment\", and it just got through a run successfully\n> with that. It'd be good perhaps if some other buildfarm owners\n> followed suit (mumble JIT coverage mumble).\n>\n> Looking around at other recent reports, it looks like we'll need to tweak\n> the compiler version cutoffs a bit. I see for instance that spurfowl,\n> with gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609, is whining:\n>\n> pg_crc32c_sse42.c:24:1: warning: \\342\\200\\230no_sanitize\\342\\200\\231 attribute directive ignored [-Wattributes]\n>\n> So maybe it'd better be __GNUC__ >= 6 not __GNUC__ >= 5. I think\n> we can wait a little bit for more reports before messing with that,\n> though.\n\nI've rechecked this in the documentation. no_sanitize attribute seems\nto appear since gcc 8.0. Much later than alignment sanitizer itself.\nhttps://gcc.gnu.org/gcc-8/changes.html\n\"A new attribute no_sanitize can be applied to functions to instruct\nthe compiler not to do sanitization of the options provided as\narguments to the attribute. 
Acceptable values for no_sanitize match\nthose acceptable by the -fsanitize command-line option.\"\n\nYes, let's wait for more feedback from buildfarm and fix the version\nrequirement.\n\n> Once this does settle, should we consider back-patching so that it's\n> possible to run alignment checks in the back branches too?\n\n+1\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sat, 13 Feb 2021 01:29:43 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "I wrote:\n> Looking around at other recent reports, it looks like we'll need to tweak\n> the compiler version cutoffs a bit. I see for instance that spurfowl,\n> with gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609, is whining:\n> ...\n> So maybe it'd better be __GNUC__ >= 6 not __GNUC__ >= 5. I think\n> we can wait a little bit for more reports before messing with that,\n> though.\n\nFurther reports show that gcc 6.x and 7.x also produce warnings,\nso I moved the cutoff up to 8. Hopefully that's good enough.\nWe could write a configure test instead, but I'd just as soon not\nexpend configure cycles on this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Feb 2021 17:35:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> On Fri, Feb 12, 2021 at 8:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So maybe it'd better be __GNUC__ >= 6 not __GNUC__ >= 5. I think\n>> we can wait a little bit for more reports before messing with that,\n>> though.\n\n> I've rechecked this in the documentation. no_sanitize attribute seems\n> to appear since gcc 8.0. 
Much later than alignment sanitizer itself.\n\nYeah, I'd just come to that conclusion from scraping the buildfarm\nlogs. Good to see it confirmed in the manual though.\n\n>> Once this does settle, should we consider back-patching so that it's\n>> possible to run alignment checks in the back branches too?\n\n> +1\n\nLet's make sure we have a clean set of builds and then do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Feb 2021 17:55:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "I wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n>> On Fri, Feb 12, 2021 at 8:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Once this does settle, should we consider back-patching so that it's\n>>> possible to run alignment checks in the back branches too?\n\n>> +1\n\n> Let's make sure we have a clean set of builds and then do that.\n\nThe buildfarm seems to be happy --- the active members that haven't\nreported in should be unaffected by this patch, either because their\ncompiler versions are too old or because they're not x86 architecture.\nSo I went ahead and back-patched, and have adjusted longfin to apply\nthe -fsanitize switch in all branches.\n\n(I've checked that 9.6 passes check-world this way, but not the\nintermediate branches, so it's possible something will fail...)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Feb 2021 17:59:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" }, { "msg_contents": "On Sun, Feb 14, 2021 at 1:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> >> On Fri, Feb 12, 2021 at 8:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> Once this does settle, should we 
consider back-patching so that it's\n> >>> possible to run alignment checks in the back branches too?\n>\n> >> +1\n>\n> > Let's make sure we have a clean set of builds and then do that.\n>\n> The buildfarm seems to be happy --- the active members that haven't\n> reported in should be unaffected by this patch, either because their\n> compiler versions are too old or because they're not x86 architecture.\n> So I went ahead and back-patched, and have adjusted longfin to apply\n> the -fsanitize switch in all branches.\n\nPerfect, thank you very much!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sun, 14 Feb 2021 03:42:28 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Detecting pointer misalignment (was Re: pgsql: Implementation of\n subscripting for jsonb)" } ]
[ { "msg_contents": "Now that dfb75e478 is in, I looked into whether we can have some\nmachine-readable representation of the catalogs' foreign key\nrelationships. As per the previous discussion [1], it's not\npractical to have \"real\" SQL foreign key constraints, because\nthe semantics we use aren't quite right (i.e., using 0 instead\nof NULL in rows with no reference). Nonetheless it would be\nnice to have the knowledge available in some form, and we agreed\nthat a set-returning function returning the data would be useful.\n\nThe attached patch creates that function, and rewrites the oidjoins.sql\nregression test to use it, in place of the very incomplete info that's\nreverse-engineered by findoidjoins. It's mostly straightforward.\n\nMy original thought had been to add DECLARE_FOREIGN_KEY() macros\nfor all references, but I soon realized that in a large majority of\nthe cases, that's redundant with the BKI_LOOKUP() annotations we\nalready have. So I taught genbki.pl to extract FK data from\nBKI_LOOKUP() as well as the explicit DECLARE macros. That didn't\nremove the work entirely, because it turned out that we hadn't\nbothered to apply BKI_LOOKUP() labels to most of the catalogs that\nhave no hand-made data. A big chunk of the patch consists in\nadding those as needed. Also, I had to make the BKI_LOOKUP()\nmechanism a little more complete, because it failed on pg_namespace\nand pg_authid references. (It will still fail on some other\ncases such as BKI_LOOKUP(pg_foreign_server), but I think there's\nno need to fill that in until/unless we have some built-in data\nthat needs it.)\n\nThere are various loose ends yet to be cleaned up:\n\n* I'm unsure whether it's better for the SRF to return the\ncolumn names as textual names, or as column numbers. 
Names was\na bit easier for all the parts of the current patch so I did\nit that way, but maybe there's a case for the other way.\nActually the whole API for the SRF is just spur-of-the-moment,\nso maybe a different API would be better.\n\n* It would now be possible to remove the PGNSP and PGUID kluges\nentirely in favor of plain BKI_LOOKUP references to pg_namespace\nand pg_authid. The catalog header usage would get a little\nmore verbose: BKI_DEFAULT(PGNSP) becomes BKI_DEFAULT(pg_catalog)\nand BKI_DEFAULT(PGUID) becomes BKI_DEFAULT(POSTGRES). I'm a bit\ninclined to do it, simply to remove one bit of mechanism that has\nto be documented; but it's material for a separate patch perhaps.\n\n* src/tools/findoidjoins should be nuked entirely, AFAICS.\nAgain, that could be a follow-on patch.\n\n* I've not touched the SGML docs. Certainly\npg_get_catalog_foreign_keys() should be documented, and some\nadjustments in bki.sgml might be appropriate.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/dc5f44d9-5ec1-a596-0251-dadadcdede98%402ndquadrant.com", "msg_date": "Sun, 31 Jan 2021 16:39:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Recording foreign key relationships for the system catalogs" }, { "msg_contents": "Very nice. 
Thanks to this patch, I can get rid of my own parse-catalogs.pl hack and use pg_get_catalog_foreign_keys() instead.\n\n+1\n\nI can with high confidence assert the correctness of pg_get_catalog_foreign_keys()'s output,\nas it matches the lookup tables for the tool I'm hacking on precisely:\n\n--\n-- verify single column foreign keys\n--\nWITH\na AS (\n SELECT\n fktable::text,\n fkcols[1]::text,\n pktable::text,\n pkcols[1]::text\n FROM pg_get_catalog_foreign_keys()\n WHERE cardinality(fkcols) = 1\n),\nb AS (\n SELECT\n table_name,\n column_name,\n ref_table_name,\n ref_column_name\n FROM pit.oid_joins\n)\nSELECT\n (SELECT COUNT(*) FROM (SELECT * FROM a EXCEPT SELECT * FROM b) AS x) AS except_b,\n (SELECT COUNT(*) FROM (SELECT * FROM b EXCEPT SELECT * FROM a) AS x) AS except_a,\n (SELECT COUNT(*) FROM (SELECT * FROM b INTERSECT SELECT * FROM a) AS x) AS a_intersect_b\n;\n\nexcept_b | except_a | a_intersect_b\n----------+----------+---------------\n 0 | 0 | 209\n(1 row)\n\n--\n-- verify multi-column foreign keys\n--\nWITH\na AS (\n SELECT\n fktable::text,\n fkcols,\n pktable::text,\n pkcols\n FROM pg_get_catalog_foreign_keys()\n WHERE cardinality(fkcols) > 1\n),\nb AS (\n SELECT\n table_name,\n ARRAY[rel_column_name,attnum_column_name],\n 'pg_attribute',\n '{attrelid,attnum}'::text[]\n FROM pit.attnum_joins\n)\nSELECT\n (SELECT COUNT(*) FROM (SELECT * FROM a EXCEPT SELECT * FROM b) AS x) AS except_b,\n (SELECT COUNT(*) FROM (SELECT * FROM b EXCEPT SELECT * FROM a) AS x) AS except_a,\n (SELECT COUNT(*) FROM (SELECT * FROM b INTERSECT SELECT * FROM a) AS x) AS a_intersect_b\n;\n\nexcept_b | except_a | a_intersect_b\n----------+----------+---------------\n 0 | 0 | 8\n(1 row)\n\n/Joel\n\nOn Sun, Jan 31, 2021, at 22:39, Tom Lane wrote:\n> Now that dfb75e478 is in, I looked into whether we can have some\n> machine-readable representation of the catalogs' foreign key\n> relationships. 
As per the previous discussion [1], it's not\n> practical to have \"real\" SQL foreign key constraints, because\n> the semantics we use aren't quite right (i.e., using 0 instead\n> of NULL in rows with no reference). Nonetheless it would be\n> nice to have the knowledge available in some form, and we agreed\n> that a set-returning function returning the data would be useful.\n> \n> The attached patch creates that function, and rewrites the oidjoins.sql\n> regression test to use it, in place of the very incomplete info that's\n> reverse-engineered by findoidjoins. It's mostly straightforward.\n> \n> My original thought had been to add DECLARE_FOREIGN_KEY() macros\n> for all references, but I soon realized that in a large majority of\n> the cases, that's redundant with the BKI_LOOKUP() annotations we\n> already have. So I taught genbki.pl to extract FK data from\n> BKI_LOOKUP() as well as the explicit DECLARE macros. That didn't\n> remove the work entirely, because it turned out that we hadn't\n> bothered to apply BKI_LOOKUP() labels to most of the catalogs that\n> have no hand-made data. A big chunk of the patch consists in\n> adding those as needed. Also, I had to make the BKI_LOOKUP()\n> mechanism a little more complete, because it failed on pg_namespace\n> and pg_authid references. (It will still fail on some other\n> cases such as BKI_LOOKUP(pg_foreign_server), but I think there's\n> no need to fill that in until/unless we have some built-in data\n> that needs it.)\n> \n> There are various loose ends yet to be cleaned up:\n> \n> * I'm unsure whether it's better for the SRF to return the\n> column names as textual names, or as column numbers. 
Names was\n> a bit easier for all the parts of the current patch so I did\n> it that way, but maybe there's a case for the other way.\n> Actually the whole API for the SRF is just spur-of-the-moment,\n> so maybe a different API would be better.\n> \n> * It would now be possible to remove the PGNSP and PGUID kluges\n> entirely in favor of plain BKI_LOOKUP references to pg_namespace\n> and pg_authid. The catalog header usage would get a little\n> more verbose: BKI_DEFAULT(PGNSP) becomes BKI_DEFAULT(pg_catalog)\n> and BKI_DEFAULT(PGUID) becomes BKI_DEFAULT(POSTGRES). I'm a bit\n> inclined to do it, simply to remove one bit of mechanism that has\n> to be documented; but it's material for a separate patch perhaps.\n> \n> * src/tools/findoidjoins should be nuked entirely, AFAICS.\n> Again, that could be a follow-on patch.\n> \n> * I've not touched the SGML docs. Certainly\n> pg_get_catalog_foreign_keys() should be documented, and some\n> adjustments in bki.sgml might be appropriate.\n> \n> regards, tom lane\n> \n> [1] https://www.postgresql.org/message-id/flat/dc5f44d9-5ec1-a596-0251-dadadcdede98%402ndquadrant.com\n> \n> \n> \n> *Attachments:*\n> * add-catalog-foreign-key-info-1.patch\n\nKind regards,\n\nJoel", "msg_date": "Mon, 01 Feb 2021 14:31:29 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Recording foreign key relationships for the system catalogs" }, { "msg_contents": "Could it be an idea to also add\n\n   OUT can_be_zero boolean\n\nto pg_get_catalog_foreign_keys()'s out parameters?\n\nThis information is useful to know if one should be doing an INNER JOIN or a LEFT JOIN on the foreign keys.\n\nThe information is mostly available in the documentation already,\nbut not quite accurate, which I've proposed a patch [1] to fix.\n\n[1] https://www.postgresql.org/message-id/4ed9a372-7bf9-479a-926c-ae8e774717a8@www.fastmail.com", "msg_date": "Mon, 01 Feb 2021 14:41:11 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": 
"Re: Recording foreign key relationships for the system catalogs" }, { "msg_contents": "Hi again,\n\nAfter trying to use pg_get_catalog_foreign_keys() to replace what I had before,\nI notice one ambiguity which I think is a serious problem in the machine-readable context.\n\nThe is_array OUT parameter doesn't say which of the possibly many fkcols is the array column.\n\nOne example:\n\n       fktable        |        fkcols         |   pktable    |      pkcols       | is_array\n----------------------+-----------------------+--------------+-------------------+----------\npg_constraint         | {conrelid,conkey}     | pg_attribute | {attrelid,attnum} | t\n\nIs the array \"conrelid\" or is it \"conkey\"? As a human, I know it's \"conkey\", but for a machine to figure out, one would need to join information_schema.columns and check the data_type or something similar.\n\nSuggestions on how to fix:\n\n* Make is_array a boolean[], and let each element represent the is_array value for each fkcols element.\n\n* Change the interface to be more like information_schema, add an \"ordinal_position\" column, and return each column on a separate row.\n\nI think I prefer the latter since it's more information_schema-conformant, but either works.\n\n/Joel", "msg_date": "Mon, 01 Feb 2021 20:33:26 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Recording foreign key relationships for the system catalogs" }, { "msg_contents": "On Mon, Feb 1, 2021, at 20:33, Joel Jacobson wrote:\n>Suggestions on how to fix:\n>\n>* Make is_array a boolean[], and let each element represent the is_array value for each fkcols element.\n>\n>* Change the interface to be more like information_schema, add an \"ordinal_position\" column, and return each column on a separate row.\n>\n>I think I prefer the latter since it's more information_schema-conformant, but either works.\n\nOops. I see a problem with just adding an \"ordinal_position\", since then one would also need to enumerate the \"foreign key\" constraints or give them names like \"constraint_name\" in information_schema.table_constraints (since there can be multiple foreign keys per fktable, so there would be multiple e.g. 
ordinal_position=1 per fktable).\n\nTaking this into consideration, I think the easiest and cleanest solution is to make is_array a boolean[].\n\nI like the usage of arrays; it makes it much easier to understand the output,\nas it's visually easy to see which group of columns references which group of columns.\n\n/Joel", "msg_date": "Mon, 01 Feb 2021 20:47:34 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Recording foreign key relationships for the system catalogs" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> The is_array OUT parameter doesn't say which of the possibly many fkcols is the array column.\n\nYeah, I didn't write the sgml docs yet, but the comments explain that\nthe array is always the last fkcol. 
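A minimal sketch of how a client could lean on that last-fkcol convention to get a per-column flag (hypothetical query, not part of the patch, assuming the v1 output columns fktable, fkcols, pktable, pkcols, is_array):

```sql
-- Expand each foreign key to one row per referencing column.  By the
-- convention above, only the last element of fkcols can be the array
-- column, so a given column is an array iff is_array is set and the
-- column sits in the last position of fkcols.
SELECT fktable,
       fkcols[i] AS fkcol,
       pktable,
       pkcols[i] AS pkcol,
       is_array AND i = cardinality(fkcols) AS fkcol_is_array
FROM pg_get_catalog_foreign_keys(),
     generate_subscripts(fkcols, 1) AS i;
```
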
Maybe someday that won't be\ngeneral enough, but we can cross that bridge when we come to it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Feb 2021 15:03:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Recording foreign key relationships for the system catalogs" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> Could it be an idea to also add\n> OUT can_be_zero boolean\n> to pg_get_catalog_foreign_keys()'s out parameters?\n\nI was initially feeling resistant to that idea, but warmed to it\nonce I realized that a majority of the FK referencing columns\nactually should not contain zeroes. So we can get a useful\nimprovement in the strictness of the test coverage if we make this\ndistinction --- and we can enforce it in the initial catalog data,\ntoo.\n\nSo here's a v2 that does that. In the interests of brevity,\nI spelled the declaration macros that allow a zero as BKI_LOOKUP_OPT,\nDECLARE_FOREIGN_KEY_OPT, etc; and thus the output column is\nalso is_opt. I'm not wedded to that term but I think we need\nsomething pretty short.\n\nThis also moves the oidjoins regression test to run near the\nend of the test suite. As I commented earlier, that test was\noriginally mainly meant to validate the handwritten initial\ndata; but nowadays it's hard to see what it would catch that\ngenbki.pl doesn't. So the usefulness is in looking at rows\nthat get added later, and therefore we ought to run it after\nthe regression tests have created stuff. I've tried here\nto run it in parallel with event_triggers, which might be\nfoolish.\n\nI also added some documentation. 
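To make the is_opt distinction concrete, here are two hypothetical hand-written checks in the spirit of what the oidjoins test can now enforce (illustrative only, not the queries the test actually generates):

```sql
-- is_opt = false: every value of the referencing column must resolve,
-- so a zero is reported as a violation as well.
SELECT p.oid, p.pronamespace
FROM pg_proc AS p
LEFT JOIN pg_namespace AS n ON n.oid = p.pronamespace
WHERE n.oid IS NULL;

-- is_opt = true: zero is an allowed "no reference" placeholder, so
-- only nonzero values need a match (pg_type.typbasetype is zero for
-- anything that is not a domain).
SELECT t.oid, t.typbasetype
FROM pg_type AS t
LEFT JOIN pg_type AS b ON b.oid = t.typbasetype
WHERE t.typbasetype <> 0 AND b.oid IS NULL;
```
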
I feel like this might\nbe committable at this point.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 01 Feb 2021 22:27:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Recording foreign key relationships for the system catalogs" }, { "msg_contents": "On Tue, Feb 2, 2021, at 04:27, Tom Lane wrote:\n>Attachments:\n>add-catalog-foreign-key-info-2.patch\n\nVery nice.\n\nI could only find one minor error,\nfound by running the regression-tests,\nand then using the query below to compare \"is_opt\"\nwith my own \"zero_values\" in my tool\nthat derives its value from pg_catalog content.\n\n--\n-- Are there any observed oid columns with zero values\n-- that are also marked as NOT is_opt by pg_get_catalog_foreign_keys()?\n--\nregression=# SELECT\n table_name,\n column_name\nFROM pit.oid_columns\nWHERE zero_values\nINTERSECT\nSELECT\n fktable::text,\n unnest(fkcols)\nFROM pg_get_catalog_foreign_keys()\nWHERE NOT is_opt;\n\nExpected to return no rows but:\n\n table_name | column_name\n---------------+-------------\npg_constraint | confrelid\n(1 row)\n\nregression=# SELECT * FROM pg_get_catalog_foreign_keys() WHERE 'confrelid' = ANY(fkcols);\n fktable | fkcols | pktable | pkcols | is_array | is_opt\n---------------+---------------------+--------------+-------------------+----------+--------\npg_constraint | {confrelid} | pg_class | {oid} | f | t\npg_constraint | {confrelid,confkey} | pg_attribute | {attrelid,attnum} | t | f\n(2 rows)\n\nReading the new documentation, I interpret \"is_opt=false\" to be a negation of\n\n \"the referencing column(s) are allowed to contain zeroes instead of a valid reference\"\n\ni.e. 
that none of the referencing columns (fkcols) are allowed to contain zeroes,\nbut since \"confrelid\" apparently can contain zeroes:\n\nregression=# select * from pg_constraint where confrelid = 0 limit 1;\n-[ RECORD 1 ]-+------------------\noid           | 12111\nconname       | pg_proc_oid_index\nconnamespace  | 11\ncontype       | p\ncondeferrable | f\ncondeferred   | f\nconvalidated  | t\nconrelid      | 1255\ncontypid      | 0\nconindid      | 2690\nconparentid   | 0\nconfrelid     | 0\nconfupdtype   |\nconfdeltype   |\nconfmatchtype |\nconislocal    | t\nconinhcount   | 0\nconnoinherit  | t\nconkey        | {1}\nconfkey       |\nconfreftype   |\nconpfeqop     |\nconppeqop     |\nconffeqop     |\nconexclop     |\nconbin        |\n\nI therefore think is_opt should be changed to true for this row:\n    fktable    |       fkcols        |   pktable    |      pkcols       | is_array | is_opt\n---------------+---------------------+--------------+-------------------+----------+--------\npg_constraint  | {confrelid,confkey} | pg_attribute | {attrelid,attnum} | t        | f\n\nIf this is fixed, I also agree this is ready to be committed.\n\n/Joel", "msg_date": "Tue, 02 Feb 2021 08:51:09 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Recording foreign key relationships for the system catalogs" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> I could only find one minor error,\n> found by running the regression-tests,\n> and then using the query below to compare \"is_opt\"\n> with my own \"zero_values\" in my tool\n> that derives 
its value from pg_catalog content.\n> ...\n> I therefore think is_opt should be changed to true for this row:\n> fktable | fkcols | pktable | pkcols | is_array | is_opt\n> ---------------+---------------------+--------------+-------------------+----------+--------\n> pg_constraint | {confrelid,confkey} | pg_attribute | {attrelid,attnum} | t | f\n\nNo, I think it's correct as-is (and this is one reason that I chose to\nhave two separate FK entries for cases like this). confrelid can be\nzero in rows that are not FK constraints. However, such a row must\nalso have empty confkey. The above entry states that for each element\nof confkey, the pair (confrelid,confkey[i]) must be nonzero and have\na match in pg_attribute. It creates no constraint if confkey is empty.\n\n> If this is fixed, I also agree this is ready to be committed.\n\nAppreciate the review! Please confirm if you agree with above\nanalysis.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 02 Feb 2021 11:00:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Recording foreign key relationships for the system catalogs" }, { "msg_contents": "On Tue, Feb 2, 2021, at 17:00, Tom Lane wrote:\n>No, I think it's correct as-is (and this is one reason that I chose to\n>have two separate FK entries for cases like this). confrelid can be\n>zero in rows that are not FK constraints. However, such a row must\n>also have empty confkey. The above entry states that for each element\n>of confkey, the pair (confrelid,confkey[i]) must be nonzero and have\n>a match in pg_attribute. It creates no constraint if confkey is empty.\n\nThanks for explaining, I get it now.\n\n>Appreciate the review! Please confirm if you agree with above\n>analysis.\n\nYes, I agree with the analysis.\n\n/Joel", "msg_date": "Tue, 02 Feb 2021 21:05:01 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Recording foreign key relationships for the system catalogs" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Tue, Feb 2, 2021, at 17:00, Tom Lane wrote:\n>> Appreciate the review! Please confirm if you agree with above\n>> analysis.\n\n> Yes, I agree with the analysis.\n\nCool, thanks. I've pushed it now.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 02 Feb 2021 17:12:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Recording foreign key relationships for the system catalogs" }, { "msg_contents": "I wrote:\n> * It would now be possible to remove the PGNSP and PGUID kluges\n> entirely in favor of plain BKI_LOOKUP references to pg_namespace\n> and pg_authid. The catalog header usage would get a little\n> more verbose: BKI_DEFAULT(PGNSP) becomes BKI_DEFAULT(pg_catalog)\n> and BKI_DEFAULT(PGUID) becomes BKI_DEFAULT(POSTGRES). I'm a bit\n> inclined to do it, simply to remove one bit of mechanism that has\n> to be documented; but it's material for a separate patch perhaps.\n\nHere's a patch for that part. I think this is probably a good\nidea not only because it removes magic, but because now that we\nhave various predefined roles it's becoming more and more likely\nthat some of those will need to be cross-referenced in other\ncatalogs' initial data. With this change, nothing special\nwill be needed for that. 
Multiple built-in schemas also become\nmore feasible than they were.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 02 Feb 2021 18:26:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Recording foreign key relationships for the system catalogs" }, { "msg_contents": "On Mon, Feb 1, 2021, at 21:03, Tom Lane wrote:\n>\"Joel Jacobson\" <joel@compiler.org> writes:\n>> The is_array OUT parameter doesn't say which of the possibly many fkcols is the array column.\n>\n>Yeah, I didn't write the sgml docs yet, but the comments explain that\n>the array is always the last fkcol. Maybe someday that won't be\n>general enough, but we can cross that bridge when we come to it.\n\nI've now fully migrated to using pg_get_catalog_foreign_keys()\ninstead of my own lookup tables, and have some additional hands-on experience\nto share with you.\n\nI struggle to come up with a clean way to make use of is_array,\nwithout being forced to introduce some CASE logic to figure out\nif the fkcol is an array or not.\n\nThe alternative of joining information_schema.columns and checking data_type='ARRAY' is almost simpler,\nbut that seems wrong, since we now have is_array, and using it should be simpler than\njoining information_schema.columns.\n\nThe best approach I've come up with so far is the CASE logic below:\n\nWITH\nforeign_keys AS\n(\n  SELECT\n    fktable::text AS table_name,\n    unnest(fkcols) AS column_name,\n    pktable::text AS ref_table_name,\n    unnest(pkcols) AS ref_column_name,\n    --\n    -- is_array refers to the last fkcols column\n    --\n    unnest\n    (\n      CASE cardinality(fkcols)\n      WHEN 1 THEN ARRAY[is_array]\n      WHEN 2 THEN ARRAY[FALSE,is_array]\n      END\n    ) AS is_array\n  FROM pg_get_catalog_foreign_keys()\n)\n\nIf is_array had instead been a boolean[], the query could have been written:\n\nWITH\nforeign_keys AS\n(\n  SELECT\n    fktable::text AS table_name,\n    unnest(fkcols) AS column_name,\n    pktable::text AS ref_table_name,\n    unnest(pkcols) AS 
ref_column_name,\n    unnest(is_array) AS is_array\n  FROM pg_get_catalog_foreign_keys()\n)\n\nMaybe this can be written in a simpler way already.\n\nOtherwise I think it would be more natural to change both is_array and is_opt\nto boolean[] with the same cardinality as fkcols and pkcols,\nto allow unnest()ing of them as well.\n\nThis would also be a more future proof solution,\nand wouldn't require a code change to code using pg_get_catalog_foreign_keys(),\nif we would ever add more complex cases in the future.\n\nBut even without increased future complexity,\nI think the example above demonstrates a problem already today.\n\nMaybe there is a simpler way to achieve what I'm trying to do,\ni.e. to figure out if a specific fkcol is an array or not,\nusing some other simpler clever trick than the CASE variant above?\n\n/Joel", "msg_date": "Wed, 03 Feb 2021 21:41:09 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Recording foreign key relationships for the system catalogs" }, { "msg_contents": "On Wed, Feb 3, 2021, at 21:41, Joel Jacobson wrote:\n>Otherwise I think it would be more natural to change both is_array and is_opt\n>to boolean[] with the same cardinality as fkcols and pkcols,\n>to allow unnest()ing of them as well.\n\nAnother option would perhaps be to add a new\nsystem view in src/backend/catalog/system_views.sql\n\nI see there are other cases with a slightly more complex view\nusing a function with a similar name, such as\nthe pg_stat_activity using pg_stat_get_activity().\n\nSimilar to this, maybe we could add a pg_catalog_foreign_keys view\nusing the output from pg_get_catalog_foreign_keys():\n\nExample usage:\n\nSELECT * FROM pg_catalog_foreign_keys\nWHERE fktable = 'pg_constraint'::regclass\nAND pktable = 'pg_attribute'::regclass;\n\n fkid |    fktable    |   fkcol   |   pktable    |  pkcol   | is_array | is_opt | ordinal_position\n------+---------------+-----------+--------------+----------+----------+--------+------------------\n   48 | pg_constraint | conkey    | pg_attribute | attnum   | t        | t      |                1\n   48 | pg_constraint | conrelid  | pg_attribute | attrelid | f        | f      |                2\n   49 | pg_constraint | confkey   | pg_attribute | attnum   | t        | f      |                1\n   49 | pg_constraint | confrelid | pg_attribute | attrelid | f        | f      |                2\n(4 rows)\n\nThe point of this would be to avoid unnecessary 
increase of data model complexity,\nwhich I agree is not needed, since we only need single booleans as of today,\nbut to provide a more information_schema-like system view,\ni.e. with columns on separate rows, with ordinal_position.\n\nSince we don't have any \"constraint_name\" for these,\nwe need to enumerate the fks first, to let ordinal_position\nbe the position within each such fkid.\n\nHere is my proposal on how to implement:\n\nCREATE VIEW pg_catalog_foreign_keys AS\n    WITH\n    enumerate_fks AS (\n        SELECT\n            *,\n            ROW_NUMBER() OVER () AS fkid\n        FROM pg_catalog.pg_get_catalog_foreign_keys()\n    ),\n    unnest_cols AS (\n        SELECT\n            C.fkid,\n            C.fktable,\n            unnest(C.fkcols) AS fkcol,\n            C.pktable,\n            unnest(C.pkcols) AS pkcol,\n            unnest(\n                CASE cardinality(fkcols)\n                    WHEN 1 THEN ARRAY[C.is_array]\n                    WHEN 2 THEN ARRAY[FALSE,C.is_array]\n                END\n            ) AS is_array,\n            unnest(\n                CASE cardinality(fkcols)\n                    WHEN 1 THEN ARRAY[C.is_opt]\n                    WHEN 2 THEN ARRAY[FALSE,C.is_opt]\n                END\n            ) AS is_opt\n        FROM enumerate_fks AS C\n    )\n    SELECT\n        *,\n        ROW_NUMBER() OVER (\n            PARTITION BY U.fkid\n            ORDER BY U.fkcol, U.pkcol\n        ) AS ordinal_position\n    FROM unnest_cols AS U;\n\nI think both the pg_get_catalog_foreign_keys() function\nand this view are useful in different ways,\nso it's good to provide both.\n\nOnly providing pg_get_catalog_foreign_keys() will\narguably mean some users of the function will need to implement\nsomething like the same as above on their own, if they need the is_array and is_opt\nvalue for a specific fkcol.\n\n/Joel", "msg_date": "Thu, 04 Feb 2021 03:37:13 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Recording foreign key relationships for the system catalogs" } ]
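The CASE-on-cardinality padding used in the proposed view above hinges on Postgres' parallel unnest() zipping equal-cardinality arrays element-wise into rows. A minimal sketch of that row-expansion logic outside SQL, assuming equal-length inputs as in the view (column names below are illustrative placeholders, not real pg_get_catalog_foreign_keys() output):

```python
# Minimal model of the row expansion performed by Postgres' parallel
# unnest() in a FROM clause: equal-cardinality arrays are zipped
# element-wise into rows.

def parallel_unnest(*arrays):
    # All inputs are assumed to have equal cardinality, as in the view above.
    return list(zip(*arrays))

def pad_flag(fkcols, flag):
    # The CASE cardinality(fkcols) trick: the single boolean describes only
    # the array/optional column, so every other position gets FALSE.
    return [False] * (len(fkcols) - 1) + [flag]

fkcols = ["confrelid", "confkey"]   # hypothetical two-column foreign key
pkcols = ["attrelid", "attnum"]
is_array = pad_flag(fkcols, True)   # [False, True]
rows = parallel_unnest(fkcols, pkcols, is_array)
print(rows)  # [('confrelid', 'attrelid', False), ('confkey', 'attnum', True)]
```

The padding is what keeps the single flag aligned with the right column once everything is unnested: the injected FALSE lands on the plain column while the flag lands on the array/optional one.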
[ { "msg_contents": "Hi all,\n\nCommitfest 2021-01 is now closed. All patches have now been dealt\nwith. Before starting on moving and closing patches the stats were:\n\nNeeds review: 146 (-4)\nWaiting on Author: 25 (+1)\nReady for Committer : 23 (-1)\nCommitted: 56 (+4)\nMoved to next CF: 2 (+2)\nWithdrawn: 8 (+0)\n\nThe following 4 patches out of 25 WoA patches were waiting for the\nauthor during this commit fest without any updates. I've closed those\npatches as \"Returned with Feedback\":\n\n* Fix comment about generate_gather_paths\n * https://commitfest.postgresql.org/31/2876/\n* remove deprecated v8.2 containment operators\n * https://commitfest.postgresql.org/31/2798/\n* bitmap cost should account for correlated indexes\n * https://commitfest.postgresql.org/31/2310/\n* avoid bitmapOR-ing indexes for scan conditions implied by partition constraint\n * https://commitfest.postgresql.org/31/2644/\n\nOther patches have been moved to the commitfest. As a result, a lot of\npatches are still in the reviewing queue. When closing the patches, I\nchose and closed the patches that are clearly inactive for more than\nabout 1 month. But if I confirm that the author has a plan to update\nthe patch soon I didn't close them. So I might have left too many\npatches for the next commitfest. If you have a patch that was moved,\nand you intend to rewrite enough of it to warrant a resubmission then\nplease go in and close your entry.\n\nFinally, during this commitfest whereas a lot of patches get reviewed\nand committed, about 80 patches have been moved to the next commitfest\nwithout any reviews. We clearly need more bandwidth among reviewers\nand committers to cope with the increasing size of the commitfests.\n From another point of view, those patches are likely to have a long\ndiscussion and a certain level of difficulty, so it's relatively hard\nfor beginners. It would be good if the experienced hackers more focus\non such difficult patches. 
It's a just idea but I thought that it\nwould be helpful if we could have something like a mark on CF app\nindicating the patch is good for beginners like we have [E] mark in\nthe ToDo wiki page[1]. This would be a good indicator for new-coming\ncontributors to choose the patch to review and might help increase the\nreviewers. Which could help that the experienced hackers can focus on\nother patches. The mark can be added/edited either by the patch author\nor CFM.\n\nI've closed this commitfest. If you have feedback or comment on my CFM\nwork, please tell me here or by directly emailing me. Thanks to\neveryone who participated in writing, reviewing a patch, joining the\ndiscussion, and the commitfest!\n\nRegards,\n\n[1] https://wiki.postgresql.org/wiki/Todo\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 1 Feb 2021 23:33:40 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Commitfest 2021-01 is now closed." }, { "msg_contents": "On Mon, 2021-02-01 at 23:33 +0900, Masahiko Sawada wrote:\n> I've closed this commitfest. If you have feedback or comment on my CFM\n> work, please tell me here or by directly emailing me.\n\nI think you did an excellent job.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 01 Feb 2021 15:53:34 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2021-01 is now closed." }, { "msg_contents": "Le lun. 1 févr. 2021 à 22:53, Laurenz Albe <laurenz.albe@cybertec.at> a\nécrit :\n\n> On Mon, 2021-02-01 at 23:33 +0900, Masahiko Sawada wrote:\n> > I've closed this commitfest. If you have feedback or comment on my CFM\n> > work, please tell me here or by directly emailing me.\n>\n> I think you did an excellent job.\n>\n\ndefinitely agreed, thanks a lot for running the commit fest Sawada-san!\n\n>\n", "msg_date": "Mon, 1 Feb 2021 23:16:17 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2021-01 is now closed." }, { "msg_contents": "As Commitfest 2021-01 is now closed. I am volunteering to manage next\ncommitfest.\n\n\n--\nIbrar Ahmed\n", "msg_date": "Mon, 1 Feb 2021 22:17:21 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Next Commitfest Manager." }, { "msg_contents": "From: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> about 1 month. But if I confirm that the author has a plan to update\r\n> the patch soon I didn't close them. So I might have left too many\r\n> patches for the next commitfest. If you have a patch that was moved,\r\n> and you intend to rewrite enough of it to warrant a resubmission then\r\n> please go in and close your entry.\r\n\r\nI respect your kind treatment like this. A great job and great thanks! It must have been tough to shift through so many difficult discussions.\r\n\r\n\r\n> From another point of view, those patches are likely to have a long\r\n> discussion and a certain level of difficulty, so it's relatively hard\r\n> for beginners. It would be good if the experienced hackers more focus\r\n> on such difficult patches. It's a just idea but I thought that it\r\n> would be helpful if we could have something like a mark on CF app\r\n> indicating the patch is good for beginners like we have [E] mark in\r\n> the ToDo wiki page[1]. 
This would be a good indicator for new-coming\r\n> contributors to choose the patch to review and might help increase the\r\n> reviewers. Which could help that the experienced hackers can focus on\r\n> other patches. The mark can be added/edited either by the patch author\r\n> or CFM.\r\n\r\n+10\r\nOr maybe we can add some difficulty score like e-commerce's review score, so that multiple people (patch author(s), serious persistent reviewers, CFM, and others who had a look but gave up reviewing) can reflect their impressions.\r\nFurther, something like stars or \"Likes\" could be encouraging (while 0 count may be discouraging for the author.)\r\nAlso, I'd be happy if I could know the patch set size -- the total of the last line of diffstat for each patch file.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n", "msg_date": "Tue, 2 Feb 2021 00:44:35 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Commitfest 2021-01 is now closed." }, { "msg_contents": "On Mon, Feb 1, 2021 at 7:16 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Le lun. 1 févr. 2021 à 22:53, Laurenz Albe <laurenz.albe@cybertec.at> a écrit :\n>>\n>> On Mon, 2021-02-01 at 23:33 +0900, Masahiko Sawada wrote:\n>> > I've closed this commitfest. If you have feedback or comment on my CFM\n>> > work, please tell me here or by directly emailing me.\n>>\n>> I think you did an excellent job.\n\n+1\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 Feb 2021 22:56:24 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2021-01 is now closed." }, { "msg_contents": "Hi,\nAnyone else already volunteers that? It is my first time so need some\naccess, if all agree.\n\nOn Mon, Feb 1, 2021 at 10:17 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n> As Commitfest 2021-01 is now closed. 
I am volunteering to manage next\n> commitfest.\n>\n>\n> --\n> Ibrar Ahmed\n>\n\n\n-- \nIbrar Ahmed\n", "msg_date": "Wed, 3 Feb 2021 17:44:24 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Next Commitfest Manager." }, { "msg_contents": "Greetings,\n\n* Ibrar Ahmed (ibrar.ahmad@gmail.com) wrote:\n> Anyone else already volunteers that? It is my first time so need some\n> access, if all agree.\n\nThanks for volunteering!\n\nThat said, our last commitfest tends to be the most difficult as it's\nthe last opportunity for features to land in time for the next major\nrelease and, for my part at least, I think it'd be best to have\nsomeone who has experience running a CF previously manage it.\n\nTo that end, I've talked to David Steele, who has run this last CF for\nthe past few years and we're in agreement that he's willing to run this\nCF again this year, assuming there's no objections. What we've thought\nto suggest is that you follow along with David as he runs this CF and\nthen offer to run the July CF. Of course, we would encourage you and\nDavid to communicate and for you to ask David any questions you have\nabout how he handles things as part of the CF. This is in line with how\nother CF managers have started out also.\n\nOpen to your thoughts, as well as those of anyone else who wishes to\ncomment.\n\nThanks!\n\nStephen", "msg_date": "Wed, 3 Feb 2021 15:13:17 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Next Commitfest Manager." 
}, { "msg_contents": "On 2/3/21 3:13 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Ibrar Ahmed (ibrar.ahmad@gmail.com) wrote:\n>> Anyone else already volunteers that? It is my first time so need some\n>> access, if all agree.\n> \n> Thanks for volunteering!\n> \n> That said, our last commitfest tends to be the most difficult as it's\n> the last opportunity for features to land in time for the next major\n> release and, for my part at least, I think it'd be best to have\n> someone who has experience running a CF previously manage it.\n> \n> To that end, I've talked to David Steele, who has run this last CF for\n> the past few years and we're in agreement that he's willing to run this\n> CF again this year, assuming there's no objections. What we've thought\n> to suggest is that you follow along with David as he runs this CF and\n> then offer to run the July CF. Of course, we would encourage you and\n> David to communicate and for you to ask David any questions you have\n> about how he handles things as part of the CF. This is in line with how\n> other CF managers have started out also.\n> \n> Open to your thoughts, as well as those of anyone else who wishes to\n> comment.\n\n+1. This all sounds good to me!\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 3 Feb 2021 16:00:08 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Next Commitfest Manager." }, { "msg_contents": "On Thu, Feb 4, 2021 at 2:00 AM David Steele <david@pgmasters.net> wrote:\n\n> On 2/3/21 3:13 PM, Stephen Frost wrote:\n> > Greetings,\n> >\n> > * Ibrar Ahmed (ibrar.ahmad@gmail.com) wrote:\n> >> Anyone else already volunteers that? 
It is my first time so need some\n> >> access, if all agree.\n> >\n> > Thanks for volunteering!\n> >\n> > That said, our last commitfest tends to be the most difficult as it's\n> > the last opportunity for features to land in time for the next major\n> > release and, for my part at least, I think it'd be best to have\n> > someone who has experience running a CF previously manage it.\n> >\n> > To that end, I've talked to David Steele, who has run this last CF for\n> > the past few years and we're in agreement that he's willing to run this\n> > CF again this year, assuming there's no objections. What we've thought\n> > to suggest is that you follow along with David as he runs this CF and\n> > then offer to run the July CF. Of course, we would encourage you and\n> > David to communicate and for you to ask David any questions you have\n> > about how he handles things as part of the CF. This is in line with how\n> > other CF managers have started out also.\n> >\n> > Open to your thoughts, as well as those of anyone else who wishes to\n> > comment.\n>\n> +1. This all sounds good to me!\n>\n> --\n> -David\n> david@pgmasters.net\n\nSounds good, I am happy to work with David.\n\n\n\n-- \nIbrar Ahmed\n", "msg_date": "Thu, 4 Feb 2021 15:07:22 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Next Commitfest Manager." }, { "msg_contents": "On Thu, Feb 4, 2021 at 1:13 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Ibrar Ahmed (ibrar.ahmad@gmail.com) wrote:\n> > Anyone else already volunteers that? 
It is my first time so need some\n> > access, if all agree.\n>\n> Thanks for volunteering!\n>\n> That said, our last commitfest tends to be the most difficult as it's\n> the last opportunity for features to land in time for the next major\n> release and, for my part at least, I think it'd be best to have\n> someone who has experience running a CF previously manage it.\n>\n> To that end, I've talked to David Steele, who has run this last CF for\n> the past few years and we're in agreement that he's willing to run this\n> CF again this year, assuming there's no objections. What we've thought\n> to suggest is that you follow along with David as he runs this CF and\n> then offer to run the July CF. Of course, we would encourage you and\n> David to communicate and for you to ask David any questions you have\n> about how he handles things as part of the CF. This is in line with how\n> other CF managers have started out also.\n>\n\nAs we know July commitfest is coming, I already volunteered to manage that.\nI\ndid small work with David in the last commitfest and now ready to work on\nthat\n\n\n>\n> Open to your thoughts, as well as those of anyone else who wishes to\n> comment.\n>\n> Thanks!\n>\n> Stephen\n>\n\n\n-- \nIbrar Ahmed\n", "msg_date": "Wed, 26 May 2021 19:52:17 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Next Commitfest Manager." } ]
[ { "msg_contents": "Hi,\n\nAs of b80e10638e3, there is a new API for validating the encoding of\nstrings, and one of the side effects is that we have a wider choice of\nalgorithms. For UTF-8, it has been demonstrated that SIMD is much faster at\ndecoding [1] and validation [2] than the standard approach we use.\n\nIt makes sense to start with the ascii subset of UTF-8 for a couple\nreasons. First, ascii is very widespread in database content, particularly\nin bulk loads. Second, ascii can be validated using the simple SSE2\nintrinsics that come with (I believe) any x64-64 chip, and I'm guessing we\ncan detect that at compile time and not mess with runtime checks. The\nexamples above using SSE for the general case are much more complicated and\ninvolve SSE 4.2 or AVX.\n\nHere are some numbers on my laptop (MacOS/clang 10 -- if the concept is\nokay, I'll do Linux/gcc and add more inputs). The test is the same as\nHeikki shared in [3], but I added a case with >95% Chinese characters just\nto show how that compares to the mixed ascii/multibyte case.\n\nmaster:\n\n chinese | mixed | ascii\n---------+-------+-------\n 1081 | 761 | 366\n\npatch:\n\n chinese | mixed | ascii\n---------+-------+-------\n 1103 | 498 | 51\n\nThe speedup in the pure ascii case is nice.\n\nIn the attached POC, I just have a pro forma portability stub, and left\nfull portability detection for later. The fast path is inlined inside\npg_utf8_verifystr(). I imagine the ascii fast path could be abstracted into\na separate function to which is passed a function pointer for full encoding\nvalidation. That would allow other encodings with strict ascii subsets to\nuse this as well, but coding that abstraction might be a little messy, and\nb80e10638e3 already gives a performance boost over PG13.\n\nI also gave a shot at doing full UTF-8 recognition using a DFA, but so far\nthat has made performance worse. 
If I ever have more success with that,\nI'll add that in the mix.\n\n[1] https://woboq.com/blog/utf-8-processing-using-simd.html\n[2]\nhttps://lemire.me/blog/2020/10/20/ridiculously-fast-unicode-utf-8-validation/\n[3]\nhttps://www.postgresql.org/message-id/06d45421-61b8-86dd-e765-f1ce527a5a2f@iki.fi\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 1 Feb 2021 13:32:23 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "[POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On 01/02/2021 19:32, John Naylor wrote:\n> It makes sense to start with the ascii subset of UTF-8 for a couple \n> reasons. First, ascii is very widespread in database content, \n> particularly in bulk loads. Second, ascii can be validated using the \n> simple SSE2 intrinsics that come with (I believe) any x64-64 chip, and \n> I'm guessing we can detect that at compile time and not mess with \n> runtime checks. The examples above using SSE for the general case are \n> much more complicated and involve SSE 4.2 or AVX.\n\nI wonder how using SSE compares with dealing with 64 or 32-bit words at \na time, using regular instructions? That would be more portable.\n\n> Here are some numbers on my laptop (MacOS/clang 10 -- if the concept is \n> okay, I'll do Linux/gcc and add more inputs). The test is the same as \n> Heikki shared in [3], but I added a case with >95% Chinese characters \n> just to show how that compares to the mixed ascii/multibyte case.\n> \n> master:\n> \n>  chinese | mixed | ascii\n> ---------+-------+-------\n>     1081 |   761 |   366\n> \n> patch:\n> \n>  chinese | mixed | ascii\n> ---------+-------+-------\n>     1103 |   498 |    51\n> \n> The speedup in the pure ascii case is nice.\n\nYep.\n\n> In the attached POC, I just have a pro forma portability stub, and left \n> full portability detection for later. The fast path is inlined inside \n> pg_utf8_verifystr(). 
I imagine the ascii fast path could be abstracted \n> into a separate function to which is passed a function pointer for full \n> encoding validation. That would allow other encodings with strict ascii \n> subsets to use this as well, but coding that abstraction might be a \n> little messy, and b80e10638e3 already gives a performance boost over PG13.\n\nAll supported encodings are ASCII subsets. Might be best to putt the \nASCII-check into a static inline function and use it in all the verify \nfunctions. I presume it's only a few instructions, and these functions \ncan be pretty performance sensitive.\n\n> I also gave a shot at doing full UTF-8 recognition using a DFA, but so \n> far that has made performance worse. If I ever have more success with \n> that, I'll add that in the mix.\n\nThat's disappointing. Perhaps the SIMD algorithms have higher startup \ncosts, so that you need longer inputs to benefit? In that case, it might \nmake sense to check the length of the input and only use the SIMD \nalgorithm if the input is long enough.\n\n- Heikki\n\n\n", "msg_date": "Mon, 1 Feb 2021 20:01:50 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On Mon, Feb 1, 2021 at 2:01 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 01/02/2021 19:32, John Naylor wrote:\n> > It makes sense to start with the ascii subset of UTF-8 for a couple\n> > reasons. First, ascii is very widespread in database content,\n> > particularly in bulk loads. Second, ascii can be validated using the\n> > simple SSE2 intrinsics that come with (I believe) any x64-64 chip, and\n> > I'm guessing we can detect that at compile time and not mess with\n> > runtime checks. 
The examples above using SSE for the general case are\n> > much more complicated and involve SSE 4.2 or AVX.\n>\n> I wonder how using SSE compares with dealing with 64 or 32-bit words at\n> a time, using regular instructions? That would be more portable.\n\nI gave that a shot, and it's actually pretty good. According to this paper,\n[1], 16 bytes was best and gives a good apples-to-apples comparison to SSE\nregisters, so I tried both 16 and 8 bytes.\n\n> All supported encodings are ASCII subsets. Might be best to putt the\n> ASCII-check into a static inline function and use it in all the verify\n> functions. I presume it's only a few instructions, and these functions\n> can be pretty performance sensitive.\n\nI tried both the static inline function and also putting the whole\noptimized utf-8 loop in a separate function to which the caller passes a\npointer to the appropriate pg_*_verifychar().\n\nIn the table below, \"inline\" refers to coding directly inside\npg_utf8_verifystr(). Both C and SSE are in the same patch, with an #ifdef.\nI didn't bother splitting them out because for other encodings, we want one\nof the other approaches above. For those, \"C retail\" refers to a static\ninline function to code the contents of the inner loop, if I understood\nyour suggestion correctly. This needs more boilerplate in each function, so\nI don't prefer this. \"C func pointer\" refers to the pointer approach I just\nmentioned. 
That is the cleanest looking way to generalize it, so I only\ntested that version with different strides -- 8- and 16-bytes\n\nThis is the same test I used earlier, which is the test in [2] but adding\nan almost-pure multibyte Chinese text of about the same size.\n\nx64-64 Linux gcc 8.4.0:\n\n build | chinese | mixed | ascii\n------------------+---------+-------+-------\n master | 1480 | 848 | 428\n inline SSE | 1617 | 634 | 63\n inline C | 1481 | 843 | 50\n C retail | 1493 | 838 | 49\n C func pointer | 1467 | 851 | 49\n C func pointer 8 | 1518 | 757 | 56\n\nx64-64 MacOS clang 10.0.0:\n\n build | chinese | mixed | ascii\n------------------+---------+-------+-------\n master | 1086 | 760 | 374\n inline SSE | 1081 | 529 | 70\n inline C | 1093 | 649 | 49\n C retail | 1132 | 695 | 152\n C func pointer | 1085 | 609 | 59\n C func pointer 8 | 1099 | 571 | 71\n\nPowerPC-LE Linux gcc 4.8.5:\n\n build | chinese | mixed | ascii\n------------------+---------+-------+-------\n master | 2961 | 1525 | 871\n inline SSE | (n/a) | (n/a) | (n/a)\n inline C | 2911 | 1329 | 80\n C retail | 2838 | 1311 | 102\n C func pointer | 2828 | 1314 | 80\n C func pointer 8 | 3143 | 1249 | 133\n\nLooking at the results, the main advantage of SSE here is it's more robust\nfor mixed inputs. If a 16-byte chunk is not ascii-only but contains a block\nof ascii at the front, we can skip those with a single CPU instruction, but\nin C, we have to verify the whole chunk using the slow path.\n\nThe \"C func pointer approach\" seems to win out over the \"C retail\" approach\n(static inline function).\n\nUsing an 8-byte stride is slightly better for mixed inputs on all platforms\ntested, but regresses on pure ascii and also seems to regress on pure\nmultibyte. The difference in the multibyte caes is small enough that it\ncould be random, but it happens on two platforms, so I'd say it's real. 
On\nthe other hand, pure multibyte is not as common as mixed text.\n\nOverall, I think the function pointer approach with an 8-byte stride is the\nbest balance. If that's agreeable, next I plan to test with short inputs,\nbecause I think we'll want a guard if-statement to only loop through the\nfast path if the string is long enough to justify that.\n\n> > I also gave a shot at doing full UTF-8 recognition using a DFA, but so\n> > far that has made performance worse. If I ever have more success with\n> > that, I'll add that in the mix.\n>\n> That's disappointing. Perhaps the SIMD algorithms have higher startup\n> costs, so that you need longer inputs to benefit? In that case, it might\n> make sense to check the length of the input and only use the SIMD\n> algorithm if the input is long enough.\n\nI changed topics a bit quickly, but here I'm talking about using a\ntable-driven state machine to verify the multibyte case. It's possible I\ndid something wrong, since my model implementation decodes, and having to\nkeep track of how many bytes got verified might be the culprit. I'd like to\ntry again to speed up multibyte, but that might be a PG15 project.\n\n[1] https://arxiv.org/abs/2010.03090\n[2]\nhttps://www.postgresql.org/message-id/06d45421-61b8-86dd-e765-f1ce527a5a2f@iki.fi\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 4 Feb 2021 17:48:35 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "Here is a more polished version of the function pointer approach, now\nadapted to all multibyte encodings. Using the not-yet-committed tests from\n[1], I found a thinko bug that resulted in the test for nul bytes to not\nonly be wrong, but probably also elided by the compiler. 
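To make that concrete, a correct chunked check has to reject nul bytes as well as bytes with the high bit set -- something like this hypothetical sketch (names and constants are illustrative, not the actual patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/*
 * Hypothetical sketch of a 16-byte chunked ascii check: every byte must
 * have its high bit clear, and no byte may be nul.  memcpy() is used so
 * the loads are safe on strict-alignment platforms.
 */
static bool
check_ascii(const unsigned char *s, int len)
{
	uint64_t	half1,
				half2;

	if (len < 16)
		return false;			/* caller takes the slow path */

	memcpy(&half1, s, sizeof(half1));
	memcpy(&half2, s + 8, sizeof(half2));

	/* any high bit set means a non-ascii byte somewhere in the chunk */
	if ((half1 | half2) & UINT64_C(0x8080808080808080))
		return false;

	/*
	 * With all high bits known clear, subtracting 1 from each byte can
	 * only produce a high bit where the byte was nul.
	 */
	if (((half1 - UINT64_C(0x0101010101010101)) |
		 (half2 - UINT64_C(0x0101010101010101))) &
		UINT64_C(0x8080808080808080))
		return false;

	return true;
}
```

The nul test works because, once every byte is known to be less than 0x80, the subtraction cannot set a high bit anywhere except at a zero byte.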
Doing it correctly\nis noticeably slower on pure ascii, but still several times faster than\nbefore, so the conclusions haven't changed any. I'll run full measurements\nlater this week, but I'll share the patch now for review.\n\n[1]\nhttps://www.postgresql.org/message-id/11d39e63-b80a-5f8d-8043-fff04201fadc@iki.fi\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Sun, 7 Feb 2021 16:24:16 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On 07/02/2021 22:24, John Naylor wrote:\n> Here is a more polished version of the function pointer approach, now \n> adapted to all multibyte encodings. Using the not-yet-committed tests \n> from [1], I found a thinko bug that resulted in the test for nul bytes \n> to not only be wrong, but probably also elided by the compiler. Doing it \n> correctly is noticeably slower on pure ascii, but still several times \n> faster than before, so the conclusions haven't changed any. I'll run \n> full measurements later this week, but I'll share the patch now for review.\n\nAs a quick test, I hacked up pg_utf8_verifystr() to use Lemire's \nalgorithm from the simdjson library [1], see attached patch. I \nmicrobenchmarked it using the same test I used before [2].\n\nThese results are with \"gcc -O2\" using \"gcc (Debian 10.2.1-6) 10.2.1 \n20210110\"\n\nunpatched master:\n\npostgres=# \\i mbverifystr-speed.sql\nCREATE FUNCTION\n mixed | ascii\n-------+-------\n 728 | 393\n(1 row)\n\nv1-0001-Add-an-ASCII-fast-path-to-multibyte-encoding-veri.patch:\n\n mixed | ascii\n-------+-------\n 759 | 98\n(1 row)\n\nsimdjson-utf8-hack.patch:\n\n mixed | ascii\n-------+-------\n 53 | 31\n(1 row)\n\nSo clearly that algorithm is fast. Not sure if it has a high startup \ncost, or large code size, or other tradeoffs that we don't want. 
At \nleast it depends on SIMD instructions, so it requires more code for the \narchitecture-specific implementations and autoconf logic and all that. \nNevertheless I think it deserves a closer look, I'm a bit reluctant to \nput in half-way measures, when there's a clearly superior algorithm out \nthere.\n\nI also tested the fallback implementation from the simdjson library \n(included in the patch, if you uncomment it in simdjson-glue.c):\n\n mixed | ascii\n-------+-------\n 447 | 46\n(1 row)\n\nI think we should at least try to adopt that. At a high level, it looks \npretty similar your patch: you load the data 8 bytes at a time, check if \nthere are all ASCII. If there are any non-ASCII chars, you check the \nbytes one by one, otherwise you load the next 8 bytes. Your patch should \nbe able to achieve the same performance, if done right. I don't think \nthe simdjson code forbids \\0 bytes, so that will add a few cycles, but \nstill.\n\n[1] https://github.com/simdjson/simdjson\n[2] \nhttps://www.postgresql.org/message-id/06d45421-61b8-86dd-e765-f1ce527a5a2f@iki.fi\n\n- Heikki\n\nPS. Your patch as it stands isn't safe on systems with strict alignment, \nthe string passed to the verify function isn't guaranteed to be 8 bytes \naligned. Use memcpy to fetch the next 8-byte chunk to fix.", "msg_date": "Mon, 8 Feb 2021 12:17:11 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On Mon, Feb 8, 2021 at 6:17 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> As a quick test, I hacked up pg_utf8_verifystr() to use Lemire's\n> algorithm from the simdjson library [1], see attached patch. 
I\n> microbenchmarked it using the the same test I used before [2].\n\nI've been looking at various iterations of Lemire's utf8 code, and trying\nit out was next on my list, so thanks for doing that!\n\n> These results are with \"gcc -O2\" using \"gcc (Debian 10.2.1-6) 10.2.1\n> 20210110\"\n>\n> unpatched master:\n>\n> postgres=# \\i mbverifystr-speed.sql\n> CREATE FUNCTION\n> mixed | ascii\n> -------+-------\n> 728 | 393\n> (1 row)\n>\n> v1-0001-Add-an-ASCII-fast-path-to-multibyte-encoding-veri.patch:\n>\n> mixed | ascii\n> -------+-------\n> 759 | 98\n> (1 row)\n\nHmm, the mixed case got worse -- I haven't seen that in any of my tests.\n\n> simdjson-utf8-hack.patch:\n>\n> mixed | ascii\n> -------+-------\n> 53 | 31\n> (1 row)\n>\n> So clearly that algorithm is fast. Not sure if it has a high startup\n> cost, or large code size, or other tradeoffs that we don't want.\n\nThe simdjson lib uses everything up through AVX512 depending on what\nhardware is available. I seem to remember reading that high start-up cost\nis more relevant to floating point than to integer ops, but I could be\nwrong. Just the utf8 portion is surely tiny also.\n\n> At\n> least it depends on SIMD instructions, so it requires more code for the\n> architecture-specific implementations and autoconf logic and all that.\n\nOne of his earlier demos [1] (in simdutf8check.h) had a version that used\nmostly SSE2 with just three intrinsics from SSSE3. That's widely available\nby now. He measured that at 0.7 cycles per byte, which is still good\ncompared to AVX2 0.45 cycles per byte [2].\n\nTesting for three SSSE3 intrinsics in autoconf is pretty easy. I would\nassume that if that check (and the corresponding runtime check) passes, we\ncan assume SSE2. That code has three licenses to choose from -- Apache 2,\nBoost, and MIT. Something like that might be straightforward to start\nfrom. I think the only obstacles to worry about are license and getting it\nto fit into our codebase. 
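For a sense of scale, the ascii half of such a check is tiny -- a hypothetical SSE2-only sketch (the function name is made up):

```c
#include <assert.h>
#include <emmintrin.h>		/* SSE2 */
#include <stdbool.h>

/*
 * Hypothetical sketch: _mm_movemask_epi8() collects the high bit of each
 * of the 16 bytes, so a nonzero mask means the chunk is not pure ascii.
 */
static bool
is_ascii_sse2(const unsigned char *s)
{
	__m128i		chunk = _mm_loadu_si128((const __m128i *) s);

	return _mm_movemask_epi8(chunk) == 0;
}
```

This only tests the high bits, so a real verifier would still need a separate check for nul bytes; the multibyte side is where the SSSE3 lookup intrinsics come in.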
Adding more than zero high-level comments with a\ngood description of how it works in detail is also a bit of a challenge.\n\n> I also tested the fallback implementation from the simdjson library\n> (included in the patch, if you uncomment it in simdjson-glue.c):\n>\n>   mixed | ascii\n> -------+-------\n>     447 |    46\n> (1 row)\n>\n> I think we should at least try to adopt that. At a high level, it looks\n> pretty similar your patch: you load the data 8 bytes at a time, check if\n> there are all ASCII. If there are any non-ASCII chars, you check the\n> bytes one by one, otherwise you load the next 8 bytes. Your patch should\n> be able to achieve the same performance, if done right. I don't think\n> the simdjson code forbids \\0 bytes, so that will add a few cycles, but\n> still.\n\nOkay, I'll look into that.\n\n> PS. Your patch as it stands isn't safe on systems with strict alignment,\n> the string passed to the verify function isn't guaranteed to be 8 bytes\n> aligned. Use memcpy to fetch the next 8-byte chunk to fix.\n\nWill do.\n\n[1] https://github.com/lemire/fastvalidate-utf-8/tree/master/include\n[2]\nhttps://lemire.me/blog/2018/10/19/validating-utf-8-bytes-using-only-0-45-cycles-per-byte-avx-edition/\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 8 Feb 2021 09:14:44 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On Mon, Feb 8, 2021 at 6:17 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> I also tested the fallback implementation from the simdjson library\n> (included in the patch, if you uncomment it in simdjson-glue.c):\n>\n>   mixed | ascii\n> -------+-------\n>     447 |    46\n> (1 row)\n>\n> I think we should at least try to adopt that. At a high level, it looks\n> pretty similar your patch: you load the data 8 bytes at a time, check if\n> there are all ASCII. If there are any non-ASCII chars, you check the\n> bytes one by one, otherwise you load the next 8 bytes. 
Your patch should\n> be able to achieve the same performance, if done right. I don't think\n> the simdjson code forbids \\0 bytes, so that will add a few cycles, but\n> still.\n\nThat fallback is very similar to my \"inline C\" case upthread, and they both\nactually check 16 bytes at a time (the comment is wrong in the patch you\nshared). I can work back and show how the performance changes with each\ndifference (just MacOS, clang 10 here):\n\nmaster\n\n mixed | ascii\n-------+-------\n 757 | 366\n\nv1, but using memcpy()\n\n mixed | ascii\n-------+-------\n 601 | 129\n\nremove zero-byte check:\n\n mixed | ascii\n-------+-------\n 588 | 93\n\ninline ascii fastpath into pg_utf8_verifystr()\n\n mixed | ascii\n-------+-------\n 595 | 71\n\nuse 16-byte stride\n\n mixed | ascii\n-------+-------\n 652 | 49\n\nWith this cpu/compiler, v1 is fastest on the mixed input all else being\nequal.\n\nMaybe there's a smarter way to check for zeros in C. Or maybe be more\ncareful about cache -- running memchr() on the whole input first might not\nbe the best thing to do.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 9 Feb 2021 16:08:21 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On 09/02/2021 22:08, John Naylor wrote:\n> Maybe there's a smarter way to check for zeros in C. Or maybe be more \n> careful about cache -- running memchr() on the whole input first might \n> not be the best thing to do.\n\nThe usual trick is the haszero() macro here: \nhttps://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord. 
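For reference, the 64-bit form of that macro, wrapped in a hypothetical helper:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* the "haszero" trick, 64-bit form: nonzero iff some byte of v is zero */
#define haszero(v) \
	(((v) - UINT64_C(0x0101010101010101)) & ~(v) & UINT64_C(0x8080808080808080))

/* hypothetical helper: does this 8-byte chunk contain a nul byte? */
static bool
chunk_has_zero(const unsigned char *s)
{
	uint64_t	chunk;

	memcpy(&chunk, s, sizeof(chunk));
	return haszero(chunk) != 0;
}
```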
That's \nhow memchr() is typically implemented, too.\n\n- Heikki\n\n\n", "msg_date": "Tue, 9 Feb 2021 22:22:02 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "I wrote:\n>\n> On Mon, Feb 8, 2021 at 6:17 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> One of his earlier demos [1] (in simdutf8check.h) had a version that used\nmostly SSE2 with just three intrinsics from SSSE3. That's widely available\nby now. He measured that at 0.7 cycles per byte, which is still good\ncompared to AVX2 0.45 cycles per byte [2].\n>\n> Testing for three SSSE3 intrinsics in autoconf is pretty easy. I would\nassume that if that check (and the corresponding runtime check) passes, we\ncan assume SSE2. That code has three licenses to choose from -- Apache 2,\nBoost, and MIT. Something like that might be straightforward to start from.\nI think the only obstacles to worry about are license and getting it to fit\ninto our codebase. Adding more than zero high-level comments with a good\ndescription of how it works in detail is also a bit of a challenge.\n\nI double checked, and it's actually two SSSE3 intrinsics and one SSE4.1,\nbut the 4.1 one can be emulated with a few SSE2 intrinsics. But we could\nprobably fold all three into the SSE4.2 CRC check and have a single symbol\nto save on boilerplate.\n\nI hacked that demo [1] into wchar.c (very ugly patch attached), and got the\nfollowing:\n\nmaster\n\n mixed | ascii\n-------+-------\n 757 | 366\n\nLemire demo:\n\n mixed | ascii\n-------+-------\n 172 | 168\n\nThis one lacks an ascii fast path, but the AVX2 version in the same file\nhas one that could probably be easily adapted. With that, I think this\nwould be worth adapting to our codebase and license. 
Thoughts?\n\nThe advantage of this demo is that it's not buried in a mountain of modern\nC++.\n\nSimdjson can use AVX -- do you happen to know which target it got compiled\nto? AVX vectors are 256-bits wide and that requires OS support. The OS's we\ncare most about were updated 8-12 years ago, but that would still be\nsomething to check, in addition to more configure checks.\n\n[1] https://github.com/lemire/fastvalidate-utf-8/tree/master/include\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 9 Feb 2021 17:12:22 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On Tue, Feb 9, 2021 at 4:22 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 09/02/2021 22:08, John Naylor wrote:\n> > Maybe there's a smarter way to check for zeros in C. Or maybe be more\n> > careful about cache -- running memchr() on the whole input first might\n> > not be the best thing to do.\n>\n> The usual trick is the haszero() macro here:\n> https://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord. That's\n> how memchr() is typically implemented, too.\n\nThanks for that. Checking with that macro each loop iteration gives a small\nboost:\n\nv1, but using memcpy()\n\n mixed | ascii\n-------+-------\n 601 | 129\n\nwith haszero()\n\n mixed | ascii\n-------+-------\n 583 | 105\n\nremove zero-byte check:\n\n mixed | ascii\n-------+-------\n 588 | 93\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 10 Feb 2021 00:00:53 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On Mon, Feb 8, 2021 at 6:17 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> I also tested the fallback implementation from the simdjson library\n> (included in the patch, if you uncomment it in simdjson-glue.c):\n>\n>   mixed | ascii\n> -------+-------\n>     447 |    46\n> (1 row)\n>\n> I think we should at least try to adopt that. At a high level, it looks\n> pretty similar your patch: you load the data 8 bytes at a time, check if\n> there are all ASCII. If there are any non-ASCII chars, you check the\n> bytes one by one, otherwise you load the next 8 bytes. Your patch should\n> be able to achieve the same performance, if done right. I don't think\n> the simdjson code forbids \\0 bytes, so that will add a few cycles, but\n> still.\n\nAttached is a patch that does roughly what simdjson fallback did, except I\nuse straight tests on the bytes and only calculate code points in assertion\nbuilds. In the course of doing this, I found that my earlier concerns about\nputting the ascii check in a static inline function were due to my\nsuboptimal loop implementation. I had assumed that if the\nchunked ascii check failed, it had to check all those bytes one at a\ntime. As it turns out, that's a waste of the branch predictor. In the v2\npatch, we do the chunked ascii check every time we loop. 
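The reworked loop might be sketched like so, with a stand-in for pg_utf8_verifychar() that only rejects nul bytes (everything here is illustrative, not the actual v2 patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* 8-byte fast path: all ascii and no nul bytes (names are illustrative) */
static bool
chunk_is_ascii(const unsigned char *s)
{
	uint64_t	chunk;

	memcpy(&chunk, s, sizeof(chunk));
	if (chunk & UINT64_C(0x8080808080808080))
		return false;
	/* high bits are clear here, so this flags exactly the nul bytes */
	return ((chunk - UINT64_C(0x0101010101010101)) &
			UINT64_C(0x8080808080808080)) == 0;
}

static bool
verify_simplified(const unsigned char *s, int len)
{
	while (len > 0)
	{
		/* try the chunked ascii check on every iteration */
		if (len >= 8 && chunk_is_ascii(s))
		{
			s += 8;
			len -= 8;
			continue;
		}

		/*
		 * Slow path: verify a single byte, standing in for
		 * pg_utf8_verifychar(); here it only rejects nul.
		 */
		if (*s == '\0')
			return false;
		s++;
		len--;
	}
	return true;
}
```

The point of the shape is that a chunk failing the fast path sends only one character through the slow path before retrying the chunked check, instead of falling into a byte-at-a-time loop that wastes the branch predictor.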
With that, I can also confirm the\nclaim in the Lemire paper that it's better to do the check on 16-byte\nchunks:\n\n(MacOS, Clang 10)\n\nmaster:\n\n chinese | mixed | ascii\n---------+-------+-------\n 1081 | 761 | 366\n\nv2 patch, with 16-byte stride:\n\n chinese | mixed | ascii\n---------+-------+-------\n 806 | 474 | 83\n\npatch but with 8-byte stride:\n\n chinese | mixed | ascii\n---------+-------+-------\n 792 | 490 | 105\n\nI also included the fast path in all other multibyte encodings, and that is\nalso pretty good performance-wise. It regresses from master on pure\nmultibyte input, but that case is still faster than PG13, which I simulated\nby reverting 6c5576075b0f9 and b80e10638e3:\n\n~PG13:\n\n chinese | mixed | ascii\n---------+-------+-------\n 1565 | 848 | 365\n\nascii fast-path plus pg_*_verifychar():\n\n chinese | mixed | ascii\n---------+-------+-------\n 1279 | 656 | 94\n\n\nv2 has a rough start to having multiple implementations in\nsrc/backend/port. Next steps are:\n\n1. Add more tests for utf-8 coverage (in addition to the ones to be added\nby the noError argument patch)\n2. Add SSE4 validator -- it turns out the demo I referred to earlier\ndoesn't match the algorithm in the paper. I plan to only copy the lookup\ntables from simdjson verbatim, but the code will basically be written from\nscratch, using simdjson as a hint.\n3. 
Adjust configure.ac\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 12 Feb 2021 21:31:33 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On 13/02/2021 03:31, John Naylor wrote:\n> On Mon, Feb 8, 2021 at 6:17 AM Heikki Linnakangas <hlinnaka@iki.fi \n> <mailto:hlinnaka@iki.fi>> wrote:\n> >\n> > I also tested the fallback implementation from the simdjson library\n> > (included in the patch, if you uncomment it in simdjson-glue.c):\n> >\n> >   mixed | ascii\n> > -------+-------\n> >     447 |    46\n> > (1 row)\n> >\n> > I think we should at least try to adopt that. At a high level, it looks\n> > pretty similar your patch: you load the data 8 bytes at a time, check if\n> > there are all ASCII. If there are any non-ASCII chars, you check the\n> > bytes one by one, otherwise you load the next 8 bytes. Your patch should\n> > be able to achieve the same performance, if done right. I don't think\n> > the simdjson code forbids \\0 bytes, so that will add a few cycles, but\n> > still.\n> \n> Attached is a patch that does roughly what simdjson fallback did, except \n> I use straight tests on the bytes and only calculate code points in \n> assertion builds. In the course of doing this, I found that my earlier \n> concerns about putting the ascii check in a static inline function were \n> due to my suboptimal loop implementation. I had assumed that if the \n> chunked ascii check failed, it had to check all those bytes one at a \n> time. As it turns out, that's a waste of the branch predictor. In the v2 \n> patch, we do the chunked ascii check every time we loop. 
With that, I \n> can also confirm the claim in the Lemire paper that it's better to do \n> the check on 16-byte chunks:\n> \n> (MacOS, Clang 10)\n> \n> master:\n> \n>  chinese | mixed | ascii\n> ---------+-------+-------\n>     1081 |   761 |   366\n> \n> v2 patch, with 16-byte stride:\n> \n>  chinese | mixed | ascii\n> ---------+-------+-------\n>      806 |   474 |    83\n> \n> patch but with 8-byte stride:\n> \n>  chinese | mixed | ascii\n> ---------+-------+-------\n>      792 |   490 |   105\n> \n> I also included the fast path in all other multibyte encodings, and that \n> is also pretty good performance-wise.\n\nCool.\n\n> It regresses from master on pure \n> multibyte input, but that case is still faster than PG13, which I \n> simulated by reverting 6c5576075b0f9 and b80e10638e3:\n\nI thought the \"chinese\" numbers above are pure multibyte input, and it \nseems to do well on that. Where does it regress? In multibyte encodings \nother than UTF-8? How bad is the regression?\n\nI tested this on my first generation Raspberry Pi (chipmunk). I had to \ntweak it a bit to make it compile, since the SSE autodetection code was \nnot finished yet. 
And I used generate_series(1, 1000) instead of \ngenerate_series(1, 10000) in the test script (mbverifystr-speed.sql) \nbecause this system is so slow.\n\nmaster:\n\n mixed | ascii\n-------+-------\n 1310 | 1041\n(1 row)\n\nv2-add-portability-stub-and-new-fallback.patch:\n\n mixed | ascii\n-------+-------\n 2979 | 910\n(1 row)\n\nI'm guessing that's because the unaligned access in check_ascii() is \nexpensive on this platform.\n\n- Heikki\n\n\n", "msg_date": "Mon, 15 Feb 2021 15:18:09 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On Mon, Feb 15, 2021 at 9:18 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n\nAttached is the first attempt at using SSE4 to do the validation, but first\nI'll answer your questions about the fallback.\n\nI should mention that v2 had a correctness bug for 4-byte characters that I\nfound when I was writing regression tests. It shouldn't materially affect\nperformance, however.\n\n> I thought the \"chinese\" numbers above are pure multibyte input, and it\n> seems to do well on that. Where does it regress? In multibyte encodings\n> other than UTF-8?\n\nYes, the second set of measurements was intended to represent multibyte\nencodings other than UTF-8. But instead of using one of those encodings, I\nsimulated non-UTF-8 by copying the pattern used for those: in the loop,\ncheck for ascii then either advance or verify one character. 
It was a quick\nway to use the same test.\n\n> How bad is the regression?\n\nI'll copy the measurements here together with master so it's easier to\ncompare:\n\n~= PG13 (revert 6c5576075b0f9 and b80e10638e3):\n\n chinese | mixed | ascii\n---------+-------+-------\n 1565 | 848 | 365\n\nmaster:\n\n chinese | mixed | ascii\n---------+-------+-------\n 1081 | 761 | 366\n\nascii fast-path plus pg_*_verifychar():\n\n chinese | mixed | ascii\n---------+-------+-------\n 1279 | 656 | 94\n\nAs I mentioned upthread, pure multibyte is still faster than PG13. Reducing\nthe ascii check to 8-bytes at time might alleviate the regression.\n\n> I tested this on my first generation Raspberry Pi (chipmunk). I had to\n> tweak it a bit to make it compile, since the SSE autodetection code was\n> not finished yet. And I used generate_series(1, 1000) instead of\n> generate_series(1, 10000) in the test script (mbverifystr-speed.sql)\n> because this system is so slow.\n>\n> master:\n>\n> mixed | ascii\n> -------+-------\n> 1310 | 1041\n> (1 row)\n>\n> v2-add-portability-stub-and-new-fallback.patch:\n>\n> mixed | ascii\n> -------+-------\n> 2979 | 910\n> (1 row)\n>\n> I'm guessing that's because the unaligned access in check_ascii() is\n> expensive on this platform.\n\nHmm, I used memcpy() as suggested. Is that still slow on that platform?\nThat's 32-bit, right? Some possible remedies:\n\n1) For the COPY FROM case, we should align the allocation on a cacheline --\nwe already have examples of that idiom elsewhere. I was actually going to\nsuggest doing this anyway, since unaligned SIMD loads are often slower, too.\n\n2) As the simdjson fallback was based on Fuchsia (the Lemire paper implies\nit was tested carefully on Arm and I have no reason to doubt that), I could\ntry to follow that example more faithfully by computing the actual\ncodepoints. It's more computation and just as many branches as far as I can\ntell, but it's not a lot of work. 
I can add that alternative fallback to\nthe patch set. I have no Arm machines, but I can test on a POWER8 machine.\n\n3) #ifdef out the ascii check for 32-bit platforms.\n\n4) Same as the non-UTF8 case -- only check for ascii 8 bytes at a time.\nI'll probably try this first.\n\nNow, I'm pleased to report that I got SSE4 working, and it seems to work.\nIt still needs some stress testing to find any corner case bugs, but it\nshouldn't be too early to share some numbers on Clang 10 / MacOS:\n\nmaster:\n\n chinese | mixed | ascii\n---------+-------+-------\n 1082 | 751 | 364\n\nv3 with SSE4.1:\n\n chinese | mixed | ascii\n---------+-------+-------\n 127 | 128 | 126\n\nSome caveats and notes:\n\n- It takes almost no recognizable code from simdjson, but it does take the\nmagic constants lookup tables almost verbatim. The main body of the code\nhas no intrinsics at all (I think). They're all hidden inside static inline\nhelper functions. I reused some cryptic variable names from simdjson. It's\na bit messy but not terrible.\n\n- It diffs against the noError conversion patch and adds additional tests.\n\n- It's not smart enough to stop at the last valid character boundary --\nit's either all-valid or it must start over with the fallback. That will\nhave to change in order to work with the proposed noError conversions. It\nshouldn't be very hard, but needs thought as to the clearest and safest way\nto code it.\n\n- There is no ascii fast-path yet. With this algorithm we have to be a bit\nmore careful since a valid ascii chunk could be preceded by an incomplete\nsequence at the end of the previous chunk. Not too hard, just a bit more\nwork.\n\n- This is my first time hacking autoconf, and it still seems slightly\nbroken, yet functional on my machine at least.\n\n- It only needs SSE4.1, but I didn't want to create a whole new CFLAGS, so\nit just reuses SSE4.2 for the runtime check and the macro names. 
Also, it\ndoesn't test for SSE2, it just insists on 64-bit for the runtime check. I\nimagine it would refuse to build on 32-bit machines if you passed it -msse42\n\n- There is a placeholder for Windows support, but it's not developed.\n\n- I had to add a large number of casts to get rid of warnings in the magic\nconstants macros. That needs some polish.\n\nI also attached a C file that visually demonstrates every step of the\nalgorithm following the example found in Table 9 in the paper. That\ncontains the skeleton coding I started with and got abandoned early, so it\nmight differ from the actual patch.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 15 Feb 2021 21:32:52 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "I wrote:\n\n> [v3]\n> - It's not smart enough to stop at the last valid character boundary --\nit's either all-valid or it must start over with the fallback. That will\nhave to change in order to work with the proposed noError conversions. It\nshouldn't be very hard, but needs thought as to the clearest and safest way\nto code it.\n\nIn v4, it should be able to return an accurate count of valid bytes even\nwhen the end crosses a character boundary.\n\n> - This is my first time hacking autoconf, and it still seems slightly\nbroken, yet functional on my machine at least.\n\nIt was actually completely broken if you tried to pass the special flags to\nconfigure. 
I redesigned this part and it seems to work now.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Feb 2021 01:40:32 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On Mon, Feb 15, 2021 at 9:32 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n> On Mon, Feb 15, 2021 at 9:18 AM Heikki Linnakangas <hlinnaka@iki.fi>\nwrote:\n> >\n> > I'm guessing that's because the unaligned access in check_ascii() is\n> > expensive on this platform.\n\n> Some possible remedies:\n\n> 3) #ifdef out the ascii check for 32-bit platforms.\n\n> 4) Same as the non-UTF8 case -- only check for ascii 8 bytes at a time.\nI'll probably try this first.\n\nI've attached a couple patches to try on top of v4; maybe they'll help the\nArm32 regression. 01 reduces the stride to 8 bytes, and 02 applies on top\nof v1 to disable the fallback fast path entirely on 32-bit platforms. A bit\nof a heavy hammer, but it'll confirm (or not) your theory about unaligned\nloads.\n\nAlso, I've included patches to explain more fully how I modeled non-UTF-8\nperformance while still using the UTF-8 tests. I think it was a useful\nthing to do, and I have a theory that might predict how a non-UTF8 encoding\nwill perform with the fast path.\n\n03A and 03B are independent of each other and conflict, but both apply on\ntop of v4 (don't need 02). Both replace the v4 fallback with the ascii\nfastpath + pg_utf8_verifychar() in the loop, similar to utf-8 on master.\n03A has a local static copy of pg_utf8_islegal(), and 03B uses the existing\nglobal function. (On x86, you can disable SSE4 by passing\nUSE_FALLBACK_UTF8=1 to configure.)\n\nWhile Clang 10 regressed for me on pure multibyte in a similar test\nupthread, on Linux gcc 8.4 there isn't a regression at all. 
IIRC, gcc\nwasn't as good as Clang when the API changed a few weeks ago, so its\nregression from v4 is still faster than master. Clang only regressed with\nmy changes because it somehow handled master much better to begin with.\n\nx86-64 Linux gcc 8.4\n\nmaster\n\n chinese | mixed | ascii\n---------+-------+-------\n 1453 | 857 | 428\n\nv4 (fallback verifier written as a single function)\n\n chinese | mixed | ascii\n---------+-------+-------\n 815 | 514 | 82\n\nv4 plus addendum 03A -- emulate non-utf-8 using a copy of\npg_utf8_is_legal() as a static function\n\n chinese | mixed | ascii\n---------+-------+-------\n 1115 | 547 | 87\n\nv4 plus addendum 03B -- emulate non-utf-8 using pg_utf8_is_legal() as a\nglobal function\n\n chinese | mixed | ascii\n---------+-------+-------\n 1279 | 604 | 82\n\n(I also tried the same on ppc64le Linux, gcc 4.8.5 and while not great, it\nnever got worse than master either on pure multibyte.)\n\nThis is supposed to model the performance of a non-utf8 encoding, where we\ndon't have a bespoke function written from scratch. Here's my theory: If an\nencoding has pg_*_mblen(), a global function, inside pg_*_verifychar(), it\nseems it won't benefit as much from an ascii fast path as one whose\npg_*_verifychar() has no function calls. I'm not sure whether a compiler\ncan inline a global function's body into call sites in the unit where it's\ndefined. (I haven't looked at the assembly.) 
But recall that you didn't\ncommit 0002 from the earlier encoding change, because it wasn't performing.\nI looked at that patch again, and while it inlined the pg_utf8_verifychar()\ncall, it still called the global function pg_utf8_islegal().\n\nIf the above is anything to go by, on gcc at least, I don't think we need\nto worry about a regression when adding an ascii fast path to non-utf-8\nmultibyte encodings.\n\nRegarding SSE, I've added an ascii fast path in my local branch, but it's\nnot going to be as big a difference because 1) the check is more expensive\nin terms of branches than the C case, and 2) because the general case is so\nfast already, it's hard to improve upon. I just need to do some testing and\ncleanup on the whole thing, and that'll be ready to share.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 18 Feb 2021 20:43:04 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "I made some substantial improvements in v5, and I've taken care of all my\nTODOs below. I separated out the non-UTF-8 ascii fast path into a separate\npatch, since it's kind of off-topic, and it's not yet clear it's always the\nbest thing to do.\n\n> - It takes almost no recognizable code from simdjson, but it does take\nthe magic constants lookup tables almost verbatim. The main body of the\ncode has no intrinsics at all (I think). They're all hidden inside static\ninline helper functions. I reused some cryptic variable names from\nsimdjson. It's a bit messy but not terrible.\n\nIn v5, the lookup tables and their comments are cleaned up and modified to\nplay nice with pgindent.\n\n> - It diffs against the noError conversion patch and adds additional tests.\n\nI wanted to get some cfbot testing, so I went ahead and prepended v4 of\nHeikki's noError patch so it would apply against master.\n\n> - There is no ascii fast-path yet. 
With this algorithm we have to be a\nbit more careful since a valid ascii chunk could be preceded by an\nincomplete sequence at the end of the previous chunk. Not too hard, just a\nbit more work.\n\nv5 adds an ascii fast path.\n\n> - I had to add a large number of casts to get rid of warnings in the\nmagic constants macros. That needs some polish.\n\nThis is much nicer now, only one cast really necessary.\n\nI'm pretty pleased with how it is now, but it could use some thorough\ntesting for correctness. I'll work on that a bit later.\n\nOn my laptop, Clang 10:\n\nmaster:\n\n chinese | mixed | ascii\n---------+-------+-------\n 1081 | 761 | 366\n\nv5:\n\n chinese | mixed | ascii\n---------+-------+-------\n 136 | 93 | 54\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 20 Feb 2021 17:10:58 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "The cfbot reported a build failure on Windows because of the use of binary\nliterals. I've turned those into hex for v6, so let's see how far it gets\nnow.\n\nI also decided to leave out the patch that adds an ascii fast path to\nnon-UTF-8 encodings. That would really require more testing than I have\ntime for.\n\nAs before, 0001 is v4 of Heikk's noError conversion patch, whose\nregressions tests I build upon.\n\n0002 has no ascii fast path in the fallback implementation. 0003 and 0004\nadd it back in using 8- and 16-byte strides, respectively. That will make\nit easier to test on non-Intel platforms, so we can decide which way to go\nhere. Also did a round of editing the comments in the SSE4.2 file.\n\nI ran the multibyte conversion regression test found in the message below,\nand it passed. 
That doesn't test UTF-8 explicitly, but all conversions\nround-trip through UTF-8, so it does get some coverage.\n\nhttps://www.postgresql.org/message-id/b9e3167f-f84b-7aa4-5738-be578a4db924%40iki.fi\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 24 Feb 2021 12:25:49 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "v7 fixes an obvious mistake in Solution.pm\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 24 Feb 2021 17:50:50 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "Hi,\n\nJust a quick question before I move on to review the patch ... The\nimprovement looks like it is only meant for x86 platforms. Can this be\ndone in a portable way by arranging for auto-vectorization ? Something\nlike commit 88709176236caf. This way it would benefit other platforms\nas well.\n\nI tried to compile the following code using -O3, and the assembly does\nhave vectorized instructions.\n\n#include <stdio.h>\nint main()\n{\n int i;\n char s1[200] = \"abcdewhruerhetr\";\n char s2[200] = \"oweurietiureuhtrethre\";\n char s3[200] = {0};\n\n for (i = 0; i < sizeof(s1); i++)\n {\n s3[i] = s1[i] ^ s2[i];\n }\n\n printf(\"%s\\n\", s3);\n}\n\n\n", "msg_date": "Tue, 9 Mar 2021 14:30:21 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On Tue, Mar 9, 2021 at 5:00 AM Amit Khandekar <amitdkhan.pg@gmail.com>\nwrote:\n>\n> Hi,\n>\n> Just a quick question before I move on to review the patch ... The\n> improvement looks like it is only meant for x86 platforms.\n\nActually it's meant to be faster for all platforms, since the C fallback is\nquite a bit different from HEAD. 
I've found it to be faster on ppc64le. An\nearlier version of the patch was a loser on 32-bit Arm because of alignment\nissues, but if you could run the test script attached to [1] on 64-bit Arm,\nI'd be curious to see how it does on 0002, and whether 0003 and 0004 make\nthings better or worse. If there is trouble building on non-x86 platforms,\nI'd want to fix that also.\n\n(Note: 0001 is not my patch, and I just include it for the tests)\n\n> Can this be\n> done in a portable way by arranging for auto-vectorization ? Something\n> like commit 88709176236caf. This way it would benefit other platforms\n> as well.\n\nI'm fairly certain that the author of a compiler capable of doing that in\nthis case would be eligible for some kind of AI prize. :-)\n\n[1]\nhttps://www.postgresql.org/message-id/06d45421-61b8-86dd-e765-f1ce527a5a2f@iki.fi\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com
", "msg_date": "Tue, 9 Mar 2021 07:43:52 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On Tue, 9 Mar 2021 at 17:14, John Naylor <john.naylor@enterprisedb.com> wrote:\n> On Tue, Mar 9, 2021 at 5:00 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > Just a quick question before I move on to review the patch ... The\n> > improvement looks like it is only meant for x86 platforms.\n>\n> Actually it's meant to be faster for all platforms, since the C fallback is quite a bit different from HEAD. I've found it to be faster on ppc64le. An earlier version of the patch was a loser on 32-bit Arm because of alignment issues, but if you could run the test script attached to [1] on 64-bit Arm, I'd be curious to see how it does on 0002, and whether 0003 and 0004 make things better or worse. If there is trouble building on non-x86 platforms, I'd want to fix that also.\n\nOn my Arm64 VM :\n\nHEAD :\n mixed | ascii\n-------+-------\n 1091 | 628\n(1 row)\n\nPATCHED :\n mixed | ascii\n-------+-------\n 681 | 119\n\nSo the fallback function does show improvements on Arm64.\n\nI guess, if at all we use the equivalent Arm NEON intrinsics, the\n\"mixed\" figures will be close to the \"ascii\" figures, going by your\nfigures on x86.\n\n> > Can this be\n> > done in a portable way by arranging for auto-vectorization ? Something\n> > like commit 88709176236caf. This way it would benefit other platforms\n> > as well.\n>\n> I'm fairly certain that the author of a compiler capable of doing that in this case would be eligible for some kind of AI prize. :-)\n\n:)\n\nI was not thinking about auto-vectorizing the code in\npg_validate_utf8_sse42(). 
Rather, I was considering auto-vectorization\ninside the individual helper functions that you wrote, such as\n_mm_setr_epi8(), shift_right(), bitwise_and(), prev1(), splat(),\nsaturating_sub() etc. I myself am not sure whether it is feasible to\nwrite code that auto-vectorizes all these function definitions.\nsaturating_sub() seems hard, but I could see the gcc docs mentioning\nsupport for generating such instructions for a particular code loop.\nBut for the index lookup function() it seems impossible to generate\nthe needed index lookup intrinsics. We can have platform-specific\nfunction definitions for such exceptional cases.\n\nI am considering this only because that would make the exact code work\non other platforms like arm64 and ppc, and won't have to have\nplatform-specific files. But I understand that it is easier said than\ndone. We will have to process the loop in pg_validate_utf8_sse42() in\n128-bit chunks, and pass each chunk to individual functions, which\ncould mean extra work and extra copy in extracting the chunk data and\npassing it around, which may make things drastically slow. You are\npassing around the chunks using __m128i type, so perhaps it means\npassing around just a reference to the simd registers. Not sure.\n\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies\n\n\n", "msg_date": "Fri, 12 Mar 2021 18:44:13 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On Fri, Mar 12, 2021 at 9:14 AM Amit Khandekar <amitdkhan.pg@gmail.com>\nwrote:\n>\n> On my Arm64 VM :\n>\n> HEAD :\n> mixed | ascii\n> -------+-------\n> 1091 | 628\n> (1 row)\n>\n> PATCHED :\n> mixed | ascii\n> -------+-------\n> 681 | 119\n\nThanks for testing! Good, the speedup is about as much as I can hope for\nusing plain C. In the next patch I'll go ahead and squash in the ascii fast\npath, using 16-byte stride, unless there are objections. 
I claim we can\nlive with the regression Heikki found on an old 32-bit Arm platform since\nit doesn't seem to be true of Arm in general.\n\n> I guess, if at all we use the equivalent Arm NEON intrinsics, the\n> \"mixed\" figures will be close to the \"ascii\" figures, going by your\n> figures on x86.\n\nI would assume so.\n\n> I was not thinking about auto-vectorizing the code in\n> pg_validate_utf8_sse42(). Rather, I was considering auto-vectorization\n> inside the individual helper functions that you wrote, such as\n> _mm_setr_epi8(), shift_right(), bitwise_and(), prev1(), splat(),\n\nIf the PhD holders who came up with this algorithm thought it possible to\ndo it that way, I'm sure they would have. In reality, simdjson has\ndifferent files for SSE4, AVX, AVX512, NEON, and Altivec. We can\nincorporate any of those as needed. That's a PG15 project, though, and I'm\nnot volunteering.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com
Rather, I was considering auto-vectorization> inside the individual helper functions that you wrote, such as> _mm_setr_epi8(), shift_right(), bitwise_and(), prev1(), splat(),If the PhD holders who came up with this algorithm thought it possible to do it that way, I'm sure they would have. In reality, simdjson has different files for SSE4, AVX, AVX512, NEON, and Altivec. We can incorporate any of those as needed. That's a PG15 project, though, and I'm not volunteering.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Fri, 12 Mar 2021 11:36:51 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "I wrote:\n\n> Thanks for testing! Good, the speedup is about as much as I can hope for\nusing plain C. In the next patch I'll go ahead and squash in the ascii fast\npath, using 16-byte stride, unless there are objections. I claim we can\nlive with the regression Heikki found on an old 32-bit Arm platform since\nit doesn't seem to be true of Arm in general.\n\nIn v8, I've squashed the 16-byte stride into 0002. I also removed the sole\nholdout of hard-coded intrinsics, by putting _mm_setr_epi8 inside a\nvariadic macro, and also did some reordering of the one-line function\ndefinitions. (As before, 0001 is not my patch, but parts of it are a\nprerequisite to my regressions tests).\n\nOver in [1] , I tested in-situ in a COPY FROM test and found a 10% speedup\nwith mixed ascii and multibyte in the copy code, i.e. 
with buffer and\nstorage taken completely out of the picture.\n\n[1]\nhttps://www.postgresql.org/message-id/CAFBsxsEybzagsrmuoLsKYx417Sce9cgnM91nf8f9HKGLadixPg%40mail.gmail.com\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 19 Mar 2021 15:24:06 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "v9 is just a rebase.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 1 Apr 2021 10:22:06 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "For v10, I've split the patch up into two parts. 0001 uses pure C\neverywhere. This is much smaller and easier to review, and gets us the most\nbang for the buck.\n\nOne concern Heikki raised upthread is that platforms with poor\nunaligned-memory access will see a regression. We could easily add an\n#ifdef to take care of that, but I haven't done so here.\n\nTo recap: On ascii-only input with storage taken out of the picture,\nprofiles of COPY FROM show a reduction from nearly 10% down to just over 1%.\nIn microbenchmarks found earlier in this thread, this works out to about 7\ntimes faster. On multibyte/mixed input, 0001 is a bit faster, but not\nreally enough to make a difference in copy performance.\n\n0002 adds the SSE4 implementation on x86-64, and is equally fast on all\ninput, at the cost of greater complexity.\n\nTo reflect the split, I've changed the thread subject and the commitfest\ntitle.\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com
0001 uses pure C \n> everywhere. This is much smaller and easier to review, and gets us the \n> most bang for the buck.\n> \n> One concern Heikki raised upthread is that platforms with poor \n> unaligned-memory access will see a regression. We could easily add an \n> #ifdef to take care of that, but I haven't done so here.\n> \n> To recap: On ascii-only input with storage taken out of the picture, \n> profiles of COPY FROM show a reduction from nearly 10% down to just over \n> 1%. In microbenchmarks found earlier in this thread, this works out to \n> about 7 times faster. On multibyte/mixed input, 0001 is a bit faster, \n> but not really enough to make a difference in copy performance.\n\nNice!\n\nThis kind of bit-twiddling is fun, so I couldn't resist tinkering with \nit, to see if we can shave some more instructions from it:\n\n> +/* from https://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord */\n> +#define HAS_ZERO(chunk) ( \\\n> +\t((chunk) - UINT64CONST(0x0101010101010101)) & \\\n> +\t ~(chunk) & \\\n> +\t UINT64CONST(0x8080808080808080))\n> +\n> +/* Verify a chunk of bytes for valid ASCII including a zero-byte check. */\n> +static inline int\n> +check_ascii(const unsigned char *s, int len)\n> +{\n> +\tuint64\t\thalf1,\n> +\t\t\t\thalf2,\n> +\t\t\t\thighbits_set;\n> +\n> +\tif (len >= 2 * sizeof(uint64))\n> +\t{\n> +\t\tmemcpy(&half1, s, sizeof(uint64));\n> +\t\tmemcpy(&half2, s + sizeof(uint64), sizeof(uint64));\n> +\n> +\t\t/* If there are zero bytes, bail and let the slow path handle it. */\n> +\t\tif (HAS_ZERO(half1) || HAS_ZERO(half2))\n> +\t\t\treturn 0;\n> +\n> +\t\t/* Check if any bytes in this chunk have the high bit set. */\n> +\t\thighbits_set = ((half1 | half2) & UINT64CONST(0x8080808080808080));\n> +\n> +\t\tif (!highbits_set)\n> +\t\t\treturn 2 * sizeof(uint64);\n> +\t\telse\n> +\t\t\treturn 0;\n> +\t}\n> +\telse\n> +\t\treturn 0;\n> +}\n\nSome ideas:\n\n1. Better to check if any high bits are set first. 
We care more about \nthe speed of that than of detecting zero bytes, because input with high \nbits is valid but zeros are an error.\n\n2. Since we check that there are no high bits, we can do the zero-checks \nwith fewer instructions like this:\n\n/* NB: this is only correct if 'chunk' doesn't have any high bits set */\n#define HAS_ZERO(chunk) ( \\\n (((chunk) + \\\n UINT64CONST(0x7f7f7f7f7f7f7f7f)) & \\\n UINT64CONST(0x8080808080808080)) != UINT64CONST(0x8080808080808080))\n\n3. It's probably cheaper to perform the HAS_ZERO check just once on (half1 \n| half2). We have to compute (half1 | half2) anyway.\n\n\nPutting all that together:\n\n/* Verify a chunk of bytes for valid ASCII including a zero-byte check. */\nstatic inline int\ncheck_ascii(const unsigned char *s, int len)\n{\n\tuint64\t\thalf1,\n\t\t\t\thalf2,\n\t\t\t\thighbits_set;\n\tuint64\t\tx;\n\n\tif (len >= 2 * sizeof(uint64))\n\t{\n\t\tmemcpy(&half1, s, sizeof(uint64));\n\t\tmemcpy(&half2, s + sizeof(uint64), sizeof(uint64));\n\n\t\t/* Check if any bytes in this chunk have the high bit set. */\n\t\thighbits_set = ((half1 | half2) & UINT64CONST(0x8080808080808080));\n\t\tif (highbits_set)\n\t\t\treturn 0;\n\n\t\t/*\n\t\t * Check if there are any zero bytes in this chunk. This is only correct\n\t\t * if there are no high bits set, but we checked that already.\n\t\t */\n\t\tx = (half1 | half2) + UINT64CONST(0x7f7f7f7f7f7f7f7f);\n\t\tx &= UINT64CONST(0x8080808080808080);\n\t\tif (x != UINT64CONST(0x8080808080808080))\n\t\t\treturn 0;\n\n\t\treturn 2 * sizeof(uint64);\n\t}\n\telse\n\t\treturn 0;\n}\n\nIn quick testing, that indeed compiles into fewer instructions. With \nGCC, there's no measurable difference in performance. But with clang, \nthis version is much faster than the original, because the original \nversion is much slower than when compiled with GCC. In other words, this \nversion seems to avoid some clang misoptimization. 
I tested only with \nASCII input, I haven't tried other cases.\n\nWhat test set have you been using for performance testing this? I'd like \nto know how this version compares, and I could also try running it on my \nold raspberry pi, which is more strict about alignment.\n\n> 0002 adds the SSE4 implementation on x86-64, and is equally fast on all \n> input, at the cost of greater complexity.\n\nDidn't look closely, but seems reasonable at a quick glance.\n\n- Heikki", "msg_date": "Thu, 3 Jun 2021 16:15:59 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "> 3. It's probably cheaper to perform the HAS_ZERO check just once on (half1\n| half2). We have to compute (half1 | half2) anyway.\n\nWouldn't you have to check (half1 & half2) ?", "msg_date": "Thu, 3 Jun 2021 10:33:59 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "I haven't looked at the surrounding code. Are we processing all the\n
Are we processing all the\n> COPY data in one long stream or processing each field individually?\n\nIt happens on 64kB chunks.\n\n> If\n> we're processing much more than 128 bits and happy to detect NUL\n> errors only at the end after wasting some work then you could hoist\n> that has_zero check entirely out of the loop (removing the branch\n> though it's probably a correctly predicted branch anyways).\n>\n> Do something like:\n>\n> zero_accumulator = zero_accumulator & next_chunk\n>\n> in the loop and then only at the very end check for zeros in that.\n\nThat's the approach taken in the SSE4 patch, and in fact that's the logical\nway to do it there. I hadn't considered doing it that way in the pure C\ncase, but I think it's worth trying.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Jun 3, 2021 at 10:42 AM Greg Stark <stark@mit.edu> wrote:>> I haven't looked at the surrounding code. Are we processing all the> COPY data in one long stream or processing each field individually? It happens on 64kB chunks.> If> we're processing much more than 128 bits and happy to detect NUL> errors only at the end after wasting some work then you could hoist> that has_zero check entirely out of the loop (removing the branch> though it's probably a correctly predicted branch anyways).>> Do something like:>> zero_accumulator = zero_accumulator & next_chunk>> in the loop and then only at the very end check for zeros in that.That's the approach taken in the SSE4 patch, and in fact that's the logical way to do it there. 
", "msg_date": "Thu, 3 Jun 2021 11:33:21 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "I wrote:\n\n> On Thu, Jun 3, 2021 at 10:42 AM Greg Stark <stark@mit.edu> wrote:\n> >\n\n> > If\n> > we're processing much more than 128 bits and happy to detect NUL\n> > errors only at the end after wasting some work then you could hoist\n> > that has_zero check entirely out of the loop (removing the branch\n> > though it's probably a correctly predicted branch anyways).\n> >\n> > Do something like:\n> >\n> > zero_accumulator = zero_accumulator & next_chunk\n> >\n> > in the loop and then only at the very end check for zeros in that.\n>\n> That's the approach taken in the SSE4 patch, and in fact that's the\nlogical way to do it there. I hadn't considered doing it that way in the\npure C case, but I think it's worth trying.\n\nActually, I spoke too quickly. We can't have an error accumulator in the C\ncase because we need to return how many bytes were valid. In fact, in the\nSSE case, it checks the error vector at the end and then reruns with the\nfallback case to count the valid bytes.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com
", "msg_date": "Thu, 3 Jun 2021 11:42:51 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Thu, Jun 3, 2021 at 9:16 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> Some ideas:\n>\n> 1. Better to check if any high bits are set first. We care more about\n> the speed of that than of detecting zero bytes, because input with high\n> bits is valid but zeros are an error.\n>\n> 2. Since we check that there are no high bits, we can do the zero-checks\n> with fewer instructions like this:\n\nBoth ideas make sense, and I like the shortcut we can take with the zero\ncheck. I think Greg is right that the zero check needs “half1 & half2”, so\nI tested with that (updated patches attached).\n\n> What test set have you been using for performance testing this? I'd like\n\nThe microbenchmark is the same one you attached to [1], which I extended\nwith a 95% multibyte case. 
With the new zero check:\n\nclang 12.0.5 / MacOS:\n\nmaster:\n\n chinese | mixed | ascii\n---------+-------+-------\n 981 | 688 | 371\n\n0001:\n\n chinese | mixed | ascii\n---------+-------+-------\n 932 | 548 | 110\n\nplus optimized zero check:\n\n chinese | mixed | ascii\n---------+-------+-------\n 689 | 573 | 59\n\nIt makes sense that the Chinese text case is faster since the zero check is\nskipped.\n\ngcc 4.8.5 / Linux:\n\nmaster:\n\n chinese | mixed | ascii\n---------+-------+-------\n 2561 | 1493 | 825\n\n0001:\n\n chinese | mixed | ascii\n---------+-------+-------\n 2968 | 1035 | 158\n\nplus optimized zero check:\n\n chinese | mixed | ascii\n---------+-------+-------\n 2413 | 1078 | 137\n\nThe second machine is a bit older and has an old compiler, but there is\nstill a small speed increase. In fact, without Heikki's tweaks, 0001\nregresses on multibyte.\n\n(Note: I'm not seeing the 7x improvement I claimed for 0001 here, but that\nwas from memory and I think that was a different machine and newer gcc. We\ncan report a range of results as we proceed.)\n\n[1]\nhttps://www.postgresql.org/message-id/06d45421-61b8-86dd-e765-f1ce527a5a2f@iki.fi\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Jun 3, 2021 at 9:16 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:> Some ideas:>> 1. Better to check if any high bits are set first. We care more about> the speed of that than of detecting zero bytes, because input with high> bits is valid but zeros are an error.>> 2. Since we check that there are no high bits, we can do the zero-checks> with fewer instructions like this:Both ideas make sense, and I like the shortcut we can take with the zero check. I think Greg is right that the zero check needs “half1 & half2”, so I tested with that (updated patches attached).> What test set have you been using for performance testing this? I'd likeThe microbenchmark is the same one you attached to [1], which I extended with a 95% multibyte case. 
With the new zero check:clang 12.0.5 / MacOS:master: chinese | mixed | ascii---------+-------+-------     981 |   688 |   3710001: chinese | mixed | ascii---------+-------+-------     932 |   548 |   110plus optimized zero check: chinese | mixed | ascii---------+-------+-------     689 |   573 |    59It makes sense that the Chinese text case is faster since the zero check is skipped.gcc 4.8.5 / Linux:master: chinese | mixed | ascii---------+-------+-------    2561 |  1493 |   8250001: chinese | mixed | ascii---------+-------+-------    2968 |  1035 |   158plus optimized zero check: chinese | mixed | ascii---------+-------+-------    2413 |  1078 |   137The second machine is a bit older and has an old compiler, but there is still a small speed increase. In fact, without Heikki's tweaks, 0001 regresses on multibyte.(Note: I'm not seeing the 7x improvement I claimed for 0001 here, but that was from memory and I think that was a different machine and newer gcc. We can report a range of results as we proceed.)[1] https://www.postgresql.org/message-id/06d45421-61b8-86dd-e765-f1ce527a5a2f@iki.fi--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Thu, 3 Jun 2021 14:58:57 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On 03/06/2021 17:33, Greg Stark wrote:\n>> 3. It's probably cheaper perform the HAS_ZERO check just once on (half1\n> | half2). We have to compute (half1 | half2) anyway.\n> \n> Wouldn't you have to check (half1 & half2) ?\n\nAh, you're right of course. But & is not quite right either, it will \ngive false positives. That's ok from a correctness point of view here, \nbecause we then fall back to checking byte by byte, but I don't think \nit's a good tradeoff.\n\nI think this works, however:\n\n/* Verify a chunk of bytes for valid ASCII including a zero-byte check. 
*/\nstatic inline int\ncheck_ascii(const unsigned char *s, int len)\n{\n\tuint64\t\thalf1,\n\t\t\t\thalf2,\n\t\t\t\thighbits_set;\n\tuint64\t\tx1,\n\t\t\t\tx2;\n\tuint64\t\tx;\n\n\tif (len >= 2 * sizeof(uint64))\n\t{\n\t\tmemcpy(&half1, s, sizeof(uint64));\n\t\tmemcpy(&half2, s + sizeof(uint64), sizeof(uint64));\n\n\t\t/* Check if any bytes in this chunk have the high bit set. */\n\t\thighbits_set = ((half1 | half2) & UINT64CONST(0x8080808080808080));\n\t\tif (highbits_set)\n\t\t\treturn 0;\n\n\t\t/*\n\t\t * Check if there are any zero bytes in this chunk.\n\t\t *\n\t\t * First, add 0x7f to each byte. This sets the high bit in each byte,\n\t\t * unless it was a zero. We already checked that none of the bytes had\n\t\t * the high bit set previously, so the max value each byte can have\n\t\t * after the addition is 0x7f + 0x7f = 0xfe, and we don't need to\n\t\t * worry about carrying over to the next byte.\n\t\t */\n\t\tx1 = half1 + UINT64CONST(0x7f7f7f7f7f7f7f7f);\n\t\tx2 = half2 + UINT64CONST(0x7f7f7f7f7f7f7f7f);\n\n\t\t/* then check that the high bit is set in each byte. */\n\t\tx = (x1 | x2);\n\t\tx &= UINT64CONST(0x8080808080808080);\n\t\tif (x != UINT64CONST(0x8080808080808080))\n\t\t\treturn 0;\n\n\t\treturn 2 * sizeof(uint64);\n\t}\n\telse\n\t\treturn 0;\n}\n\n- Heikki\n\n\n", "msg_date": "Thu, 3 Jun 2021 22:08:57 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Thu, Jun 3, 2021 at 3:08 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 03/06/2021 17:33, Greg Stark wrote:\n> >> 3. It's probably cheaper perform the HAS_ZERO check just once on (half1\n> > | half2). We have to compute (half1 | half2) anyway.\n> >\n> > Wouldn't you have to check (half1 & half2) ?\n>\n> Ah, you're right of course. But & is not quite right either, it will\n> give false positives. 
That's ok from a correctness point of view here,\n> because we then fall back to checking byte by byte, but I don't think\n> it's a good tradeoff.\n\nAh, of course.\n\n> /*\n> * Check if there are any zero bytes in this chunk.\n> *\n> * First, add 0x7f to each byte. This sets the high bit\nin each byte,\n> * unless it was a zero. We already checked that none of\nthe bytes had\n> * the high bit set previously, so the max value each\nbyte can have\n> * after the addition is 0x7f + 0x7f = 0xfe, and we don't\nneed to\n> * worry about carrying over to the next byte.\n> */\n> x1 = half1 + UINT64CONST(0x7f7f7f7f7f7f7f7f);\n> x2 = half2 + UINT64CONST(0x7f7f7f7f7f7f7f7f);\n>\n> /* then check that the high bit is set in each byte. */\n> x = (x1 | x2);\n> x &= UINT64CONST(0x8080808080808080);\n> if (x != UINT64CONST(0x8080808080808080))\n> return 0;\n\nThat seems right, I'll try that and update the patch. (Forgot to attach\nearlier anyway)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Jun 3, 2021 at 3:08 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:>> On 03/06/2021 17:33, Greg Stark wrote:> >> 3. It's probably cheaper perform the HAS_ZERO check just once on (half1> > | half2). We have to compute (half1 | half2) anyway.> >> > Wouldn't you have to check (half1 & half2) ?>> Ah, you're right of course. But & is not quite right either, it will> give false positives. That's ok from a correctness point of view here,> because we then fall back to checking byte by byte, but I don't think> it's a good tradeoff.Ah, of course.>                 /*>                  * Check if there are any zero bytes in this chunk.>                  *>                  * First, add 0x7f to each byte. This sets the high bit in each byte,>                  * unless it was a zero. 
We already checked that none of the bytes had>                  * the high bit set previously, so the max value each byte can have>                  * after the addition is 0x7f + 0x7f = 0xfe, and we don't need to>                  * worry about carrying over to the next byte.>                  */>                 x1 = half1 + UINT64CONST(0x7f7f7f7f7f7f7f7f);>                 x2 = half2 + UINT64CONST(0x7f7f7f7f7f7f7f7f);>>                 /* then check that the high bit is set in each byte. */>                 x = (x1 | x2);>                 x &= UINT64CONST(0x8080808080808080);>                 if (x != UINT64CONST(0x8080808080808080))>                         return 0;That seems right, I'll try that and update the patch. (Forgot to attach earlier anyway)--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Thu, 3 Jun 2021 15:10:35 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On 03/06/2021 22:10, John Naylor wrote:\n> On Thu, Jun 3, 2021 at 3:08 PM Heikki Linnakangas <hlinnaka@iki.fi \n> <mailto:hlinnaka@iki.fi>> wrote:\n> >                 x1 = half1 + UINT64CONST(0x7f7f7f7f7f7f7f7f);\n> >                 x2 = half2 + UINT64CONST(0x7f7f7f7f7f7f7f7f);\n> >\n> >                 /* then check that the high bit is set in each byte. */\n> >                 x = (x1 | x2);\n> >                 x &= UINT64CONST(0x8080808080808080);\n> >                 if (x != UINT64CONST(0x8080808080808080))\n> >                         return 0;\n> \n> That seems right, I'll try that and update the patch. (Forgot to attach \n> earlier anyway)\n\nUgh, actually that has the same issue as before. If one of the bytes is \nin one half is zero, but not in the other half, this fail to detect it. 
\nSorry for the noise..\n\n- Heikki\n\n\n", "msg_date": "Thu, 3 Jun 2021 22:16:04 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On 03/06/2021 22:16, Heikki Linnakangas wrote:\n> On 03/06/2021 22:10, John Naylor wrote:\n>> On Thu, Jun 3, 2021 at 3:08 PM Heikki Linnakangas <hlinnaka@iki.fi\n>> <mailto:hlinnaka@iki.fi>> wrote:\n>> >                 x1 = half1 + UINT64CONST(0x7f7f7f7f7f7f7f7f);\n>> >                 x2 = half2 + UINT64CONST(0x7f7f7f7f7f7f7f7f);\n>> >\n>> >                 /* then check that the high bit is set in each byte. */\n>> >                 x = (x1 | x2);\n>> >                 x &= UINT64CONST(0x8080808080808080);\n>> >                 if (x != UINT64CONST(0x8080808080808080))\n>> >                         return 0;\n>>\n>> That seems right, I'll try that and update the patch. (Forgot to attach\n>> earlier anyway)\n> \n> Ugh, actually that has the same issue as before. If one of the bytes is\n> in one half is zero, but not in the other half, this fail to detect it.\n> Sorry for the noise..\n\nIf you replace (x1 | x2) with (x1 & x2) above, I think it's correct.\n\n- Heikki\n\n\n", "msg_date": "Thu, 3 Jun 2021 22:22:15 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Thu, Jun 3, 2021 at 3:22 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 03/06/2021 22:16, Heikki Linnakangas wrote:\n> > On 03/06/2021 22:10, John Naylor wrote:\n> >> On Thu, Jun 3, 2021 at 3:08 PM Heikki Linnakangas <hlinnaka@iki.fi\n> >> <mailto:hlinnaka@iki.fi>> wrote:\n> >> > x1 = half1 + UINT64CONST(0x7f7f7f7f7f7f7f7f);\n> >> > x2 = half2 + UINT64CONST(0x7f7f7f7f7f7f7f7f);\n> >> >\n> >> > /* then check that the high bit is set in each\nbyte. 
*/\n> >> > x = (x1 | x2);\n> >> > x &= UINT64CONST(0x8080808080808080);\n> >> > if (x != UINT64CONST(0x8080808080808080))\n> >> > return 0;\n\n> If you replace (x1 | x2) with (x1 & x2) above, I think it's correct.\n\nAfter looking at it again with fresh eyes, I agree this is correct. I\nmodified the regression tests to pad the input bytes with ascii so that the\ncode path that works on 16-bytes at a time is tested. I use both UTF-8\ninput tables for some of the additional tests. There is a de facto\nrequirement that the descriptions are unique across both of the input\ntables. That could be done more elegantly, but I wanted to keep things\nsimple for now.\n\nv11-0001 is an improvement over v10:\n\nclang 12.0.5 / MacOS:\n\nmaster:\n\n chinese | mixed | ascii\n---------+-------+-------\n 975 | 686 | 369\n\nv10-0001:\n\n chinese | mixed | ascii\n---------+-------+-------\n 930 | 549 | 109\n\nv11-0001:\n\n chinese | mixed | ascii\n---------+-------+-------\n 687 | 440 | 64\n\n\ngcc 4.8.5 / Linux (older machine)\n\nmaster:\n\n chinese | mixed | ascii\n---------+-------+-------\n 2559 | 1495 | 825\n\nv10-0001:\n\n chinese | mixed | ascii\n---------+-------+-------\n 2966 | 1034 | 156\n\nv11-0001:\n\n chinese | mixed | ascii\n---------+-------+-------\n 2242 | 824 | 140\n\nPrevious testing on POWER8 and Arm64 leads me to expect similar results\nthere as well.\n\nI also looked again at 0002 and decided I wasn't quite happy with the test\ncoverage. Previously, the code padded out a short input with ascii so that\nthe 16-bytes-at-a-time code path was always exercised. However, that\nrequired some finicky complexity and still wasn't adequate. 
For v11, I\nripped that out and put the responsibility on the regression tests to make\nsure the various code paths are exercised.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Sun, 6 Jun 2021 15:21:51 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On 03/06/2021 21:58, John Naylor wrote:\n> \n> > What test set have you been using for performance testing this? I'd like\n> \n> The microbenchmark is the same one you attached to [1], which I extended \n> with a 95% multibyte case.\n\nCould you share the exact test you're using? I'd like to test this on my \nold raspberry pi, out of curiosity.\n\n- Heikki\n\n\n", "msg_date": "Mon, 7 Jun 2021 15:24:53 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Mon, Jun 7, 2021 at 8:24 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 03/06/2021 21:58, John Naylor wrote:\n> > The microbenchmark is the same one you attached to [1], which I extended\n> > with a 95% multibyte case.\n>\n> Could you share the exact test you're using? I'd like to test this on my\n> old raspberry pi, out of curiosity.\n\nSure, attached.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 7 Jun 2021 08:39:40 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On 07/06/2021 15:39, John Naylor wrote:\n> On Mon, Jun 7, 2021 at 8:24 AM Heikki Linnakangas <hlinnaka@iki.fi \n> <mailto:hlinnaka@iki.fi>> wrote:\n> >\n> > On 03/06/2021 21:58, John Naylor wrote:\n> > > The microbenchmark is the same one you attached to [1], which I \n> extended\n> > > with a 95% multibyte case.\n> >\n> > Could you share the exact test you're using? 
I'd like to test this on my\n> > old raspberry pi, out of curiosity.\n> \n> Sure, attached.\n> \n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n> \nResults from chipmunk, my first generation Raspberry Pi:\n\nMaster:\n\n chinese | mixed | ascii\n---------+-------+-------\n 25392 | 16287 | 10295\n(1 row)\n\nv11-0001-Rewrite-pg_utf8_verifystr-for-speed.patch:\n\n chinese | mixed | ascii\n---------+-------+-------\n 17739 | 10854 | 4121\n(1 row)\n\nSo that's good.\n\nWhat is the worst case scenario for this algorithm? Something where the \nnew fast ASCII check never helps, but is as fast as possible with the \nold code. For that, I added a repeating pattern of '123456789012345ä' to \nthe test set (these results are from my Intel laptop, not the raspberry pi):\n\nMaster:\n\n chinese | mixed | ascii | mixed2\n---------+-------+-------+--------\n 1333 | 757 | 410 | 573\n(1 row)\n\nv11-0001-Rewrite-pg_utf8_verifystr-for-speed.patch:\n\n chinese | mixed | ascii | mixed2\n---------+-------+-------+--------\n 942 | 470 | 66 | 1249\n(1 row)\n\nSo there's a regression with that input. Maybe that's acceptable, this \nis the worst case, after all. Or you could tweak check_ascii for a \ndifferent performance tradeoff, by checking the two 64-bit words \nseparately and returning \"8\" if the failure happens in the second word. \nAnd I haven't tried the SSE patch yet, maybe that compensates for this.\n\n- Heikki\n\n\n", "msg_date": "Wed, 9 Jun 2021 14:02:02 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Wed, Jun 9, 2021 at 7:02 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> What is the worst case scenario for this algorithm? Something where the\n> new fast ASCII check never helps, but is as fast as possible with the\n> old code. 
For that, I added a repeating pattern of '123456789012345ä' to\n> the test set (these results are from my Intel laptop, not the raspberry\npi):\n>\n> Master:\n>\n> chinese | mixed | ascii | mixed2\n> ---------+-------+-------+--------\n> 1333 | 757 | 410 | 573\n> (1 row)\n>\n> v11-0001-Rewrite-pg_utf8_verifystr-for-speed.patch:\n>\n> chinese | mixed | ascii | mixed2\n> ---------+-------+-------+--------\n> 942 | 470 | 66 | 1249\n> (1 row)\n\nI get a much smaller regression on my laptop with clang 12:\n\nmaster:\n\n chinese | mixed | ascii | mixed2\n---------+-------+-------+--------\n 978 | 685 | 370 | 452\n\nv11-0001:\n\n chinese | mixed | ascii | mixed2\n---------+-------+-------+--------\n 686 | 438 | 64 | 595\n\n> So there's a regression with that input. Maybe that's acceptable, this\n> is the worst case, after all. Or you could tweak check_ascii for a\n> different performance tradeoff, by checking the two 64-bit words\n> separately and returning \"8\" if the failure happens in the second word.\n\nFor v12 (unformatted and without 0002 rebased) I tried the following:\n--\nhighbits_set = (half1) & UINT64CONST(0x8080808080808080);\nif (highbits_set)\n return 0;\n\nx1 = half1 + UINT64CONST(0x7f7f7f7f7f7f7f7f);\nx1 &= UINT64CONST(0x8080808080808080);\nif (x1 != UINT64CONST(0x8080808080808080))\n return 0;\n\n/* now we know we have at least 8 bytes of valid ascii, so if any of these\ntests fails, return that */\n\nhighbits_set = (half2) & UINT64CONST(0x8080808080808080);\nif (highbits_set)\n return sizeof(uint64);\n\nx2 = half2 + UINT64CONST(0x7f7f7f7f7f7f7f7f);\nx2 &= UINT64CONST(0x8080808080808080);\nif (x2 != UINT64CONST(0x8080808080808080))\n return sizeof(uint64);\n\nreturn 2 * sizeof(uint64);\n--\nand got this:\n\n chinese | mixed | ascii | mixed2\n---------+-------+-------+--------\n 674 | 499 | 170 | 421\n\nPure ascii is significantly slower, but the regression is gone.\n\nI used the string repeat('123456789012345ä', 3647) to match the ~62000\nbytes in the 
other strings (62000 / 17 = 3647)\n\n> And I haven't tried the SSE patch yet, maybe that compensates for this.\n\nI would expect that this case is identical to all-multibyte. The worst case\nfor SSE might be alternating 16-byte chunks of ascii-only and chunks of\nmultibyte, since that's one of the few places it branches. In simdjson,\nthey check ascii on 64 byte blocks at a time ((c1 | c2) | (c3 | c4)) and\ncheck only the previous block's \"chunk 4\" for incomplete sequences at the\nend. It's a bit messier, so I haven't done it, but it's an option.\n\nAlso, if SSE is accepted into the tree, then the C fallback is only\nimportant on platforms like PowerPC64 and Arm64, so we can make\nthe tradeoff by testing those more carefully. I'll test on PowerPC soon.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 10 Jun 2021 08:45:01 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "I wrote:\n\n> Also, if SSE is accepted into the tree, then the C fallback is only\nimportant on platforms like PowerPC64 and Arm64, so we can make the\ntradeoff by testing those more carefully. I'll test on PowerPC soon.\n\nI got around to testing on POWER8 / Linux / gcc 4.8.5 and found a\nregression in the mixed2 case in v11. v12 improves that at the cost of some\nimprovement in the ascii case (5x vs. 8x).\n\nmaster:\n chinese | mixed | ascii | mixed2\n---------+-------+-------+--------\n 2966 | 1525 | 871 | 1474\n\nv11-0001:\n chinese | mixed | ascii | mixed2\n---------+-------+-------+--------\n 1030 | 644 | 102 | 1760\n\nv12-0001:\n chinese | mixed | ascii | mixed2\n---------+-------+-------+--------\n 977 | 632 | 168 | 1113\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nI wrote:> Also, if SSE is accepted into the tree, then the C fallback is only important on platforms like PowerPC64 and Arm64, so we can make the tradeoff by testing those more carefully. 
I'll test on PowerPC soon.I got around to testing on POWER8 / Linux / gcc 4.8.5 and found a regression in the mixed2 case in v11. v12 improves that at the cost of some improvement in the ascii case (5x vs. 8x).master: chinese | mixed | ascii | mixed2---------+-------+-------+--------    2966 |  1525 |   871 |   1474v11-0001: chinese | mixed | ascii | mixed2---------+-------+-------+--------    1030 |   644 |   102 |   1760v12-0001: chinese | mixed | ascii | mixed2---------+-------+-------+--------     977 |   632 |   168 |   1113--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Thu, 10 Jun 2021 20:36:14 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "I still wasn't quite happy with the churn in the regression tests, so for\nv13 I gave up on using both the existing utf8 table and my new one for the\n\"padded input\" tests, and instead just copied the NUL byte test into the\nnew table. Also added a primary key to make sure the padded test won't give\nweird results if a new entry has a duplicate description.\n\nI came up with \"highbit_carry\" as a more descriptive variable name than\n\"x\", but that doesn't matter a whole lot.\n\nIt also occurred to me that if we're going to check one 8-byte chunk at a\ntime (like v12 does), maybe it's only worth it to load 8 bytes at a time.\nAn earlier version did this, but without the recent tweaks. The worst-case\nscenario now might be different from the one with 16-bytes, but for now\njust tested the previous worst case (mixed2). 
Only tested on ppc64le, since\nI'm hoping x86 will get the SIMD algorithm (I'm holding off rebasing 0002\nuntil 0001 settles down).\n\nPower8, Linux, gcc 4.8\n\nmaster:\n chinese | mixed | ascii | mixed2\n---------+-------+-------+--------\n 2952 | 1520 | 871 | 1473\n\nv11:\n chinese | mixed | ascii | mixed2\n---------+-------+-------+--------\n 1015 | 641 | 102 | 1636\n\nv12:\n chinese | mixed | ascii | mixed2\n---------+-------+-------+--------\n 964 | 629 | 168 | 1069\n\nv13:\n chinese | mixed | ascii | mixed2\n---------+-------+-------+--------\n 954 | 643 | 202 | 1046\n\nv13 is not that much different from v12, but has the nice property of\nsimpler code. Both are not as nice as v11 for ascii, but don't regress for\nthe latter's worst case. I'm leaning towards v13 for the fallback.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 29 Jun 2021 07:20:38 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On 29/06/2021 14:20, John Naylor wrote:\n> I still wasn't quite happy with the churn in the regression tests, so \n> for v13 I gave up on using both the existing utf8 table and my new one \n> for the \"padded input\" tests, and instead just copied the NUL byte test \n> into the new table. Also added a primary key to make sure the padded \n> test won't give weird results if a new entry has a duplicate description.\n> \n> I came up with \"highbit_carry\" as a more descriptive variable name than \n> \"x\", but that doesn't matter a whole lot.\n> \n> It also occurred to me that if we're going to check one 8-byte chunk at \n> a time (like v12 does), maybe it's only worth it to load 8 bytes at a \n> time. An earlier version did this, but without the recent tweaks. 
The \n> worst-case scenario now might be different from the one with 16-bytes, \n> but for now just tested the previous worst case (mixed2).\n\nI tested the new worst case scenario on my laptop:\n\ngcc master:\n\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1311 | 758 | 405 | 583 | 725\n\n\ngcc v13:\n\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 956 | 472 | 160 | 572 | 939\n\n\nmixed16 is the same as \"mixed2\" in the previous rounds, with \n'123456789012345ä' as the repeating string, and mixed8 uses '1234567ä', \nwhich I believe is the worst case for patch v13. So v13 is somewhat \nslower than master in the worst case.\n\nHmm, there's one more simple trick we can do: We can have a separate \nfast-path version of the loop when there are at least 8 bytes of input \nleft, skipping all the length checks. With that:\n\ngcc v14:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 737 | 412 | 94 | 476 | 725\n\n\nAll the above numbers were with gcc 10.2.1. For completeness, with clang \n11.0.1-2 I got:\n\nclang master:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1044 | 724 | 403 | 930 | 603\n(1 row)\n\nclang v13:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 596 | 445 | 79 | 417 | 715\n(1 row)\n\n\nclang v14:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 600 | 337 | 93 | 318 | 511\n\nAttached is patch v14 with that optimization. 
It needs some cleanup, I \njust hacked it up quickly for performance testing.\n\n- Heikki", "msg_date": "Wed, 30 Jun 2021 14:18:32 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Wed, Jun 30, 2021 at 7:18 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> Hmm, there's one more simple trick we can do: We can have a separate\n> fast-path version of the loop when there are at least 8 bytes of input\n> left, skipping all the length checks. With that:\n\nGood idea, and the numbers look good on Power8 / gcc 4.8 as well:\n\nmaster:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 2951 | 1521 | 871 | 1473 | 1508\n\nv13:\n\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 949 | 642 | 203 | 1046 | 1818\n\nv14:\n\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 887 | 607 | 179 | 776 | 1325\n\n\nI don't think the new structuring will pose any challenges for rebasing\n0002, either. This might need some experimentation, though:\n\n+ * Subroutine of pg_utf8_verifystr() to check on char. Returns the length\nof the\n+ * character at *s in bytes, or 0 on invalid input or premature end of\ninput.\n+ *\n+ * XXX: could this be combined with pg_utf8_verifychar above?\n+ */\n+static inline int\n+pg_utf8_verify_one(const unsigned char *s, int len)\n\nIt seems like it would be easy to have pg_utf8_verify_one in my proposed\npg_utf8.h header and replace the body of pg_utf8_verifychar with it.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jun 30, 2021 at 7:18 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:> Hmm, there's one more simple trick we can do: We can have a separate> fast-path version of the loop when there are at least 8 bytes of input> left, skipping all the length checks. 
With that:Good idea, and the numbers look good on Power8 / gcc 4.8 as well:master: chinese | mixed | ascii | mixed16 | mixed8---------+-------+-------+---------+--------    2951 |  1521 |   871 |    1473 |   1508v13: chinese | mixed | ascii | mixed16 | mixed8---------+-------+-------+---------+--------     949 |   642 |   203 |    1046 |   1818v14: chinese | mixed | ascii | mixed16 | mixed8---------+-------+-------+---------+--------     887 |   607 |   179 |     776 |   1325I don't think the new structuring will pose any challenges for rebasing 0002, either. This might need some experimentation, though:+ * Subroutine of pg_utf8_verifystr() to check on char. Returns the length of the+ * character at *s in bytes, or 0 on invalid input or premature end of input.+ *+ * XXX: could this be combined with pg_utf8_verifychar above?+ */+static inline int+pg_utf8_verify_one(const unsigned char *s, int len)It seems like it would be easy to have pg_utf8_verify_one in my proposed pg_utf8.h header and replace the body of pg_utf8_verifychar with it.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Wed, 30 Jun 2021 12:54:23 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "I wrote:\n\n> I don't think the new structuring will pose any challenges for rebasing\n0002, either. This might need some experimentation, though:\n>\n> + * Subroutine of pg_utf8_verifystr() to check on char. 
Returns the\nlength of the\n> + * character at *s in bytes, or 0 on invalid input or premature end of\ninput.\n> + *\n> + * XXX: could this be combined with pg_utf8_verifychar above?\n> + */\n> +static inline int\n> +pg_utf8_verify_one(const unsigned char *s, int len)\n>\n> It seems like it would be easy to have pg_utf8_verify_one in my proposed\npg_utf8.h header and replace the body of pg_utf8_verifychar with it.\n\n0001: I went ahead and tried this for v15, and also attempted some clean-up:\n\n- Rename pg_utf8_verify_one to pg_utf8_verifychar_internal.\n- Have pg_utf8_verifychar_internal return -1 for invalid input to match\nother functions in the file. We could also do this for check_ascii, but\nit's not quite the same thing, because the string could still have valid\nbytes in it, just not enough to advance the pointer by the stride length.\n- Remove hard-coded numbers (not wedded to this).\n\n- Use a call to pg_utf8_verifychar in the slow path.\n- Reduce pg_utf8_verifychar to thin wrapper around\npg_utf8_verifychar_internal.\n\nThe last two aren't strictly necessary, but it prevents bloating the binary\nin the slow path, and aids readability. For 0002, this required putting\npg_utf8_verifychar* in src/port. (While writing this I noticed I neglected\nto explain that with a comment, though)\n\nFeedback welcome on any of the above.\n\nSince by now it hardly resembles the simdjson (or Fuchsia for that matter)\nfallback that it took inspiration from, I've removed that mention from the\ncommit message.\n\n0002: Just a rebase to work with the above. One possible review point: We\ndon't really need to have separate control over whether to use special\ninstructions for CRC and UTF-8. 
It should probably be just one configure\nknob, but having them separate is perhaps easier to review.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 12 Jul 2021 15:45:39 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Tue, 13 Jul 2021 at 01:15, John Naylor <john.naylor@enterprisedb.com> wrote:\n> > It seems like it would be easy to have pg_utf8_verify_one in my proposed pg_utf8.h header and replace the body of pg_utf8_verifychar with it.\n>\n> 0001: I went ahead and tried this for v15, and also attempted some clean-up:\n>\n> - Rename pg_utf8_verify_one to pg_utf8_verifychar_internal.\n> - Have pg_utf8_verifychar_internal return -1 for invalid input to match other functions in the file. We could also do this for check_ascii, but it's not quite the same thing, because the string could still have valid bytes in it, just not enough to advance the pointer by the stride length.\n> - Remove hard-coded numbers (not wedded to this).\n>\n> - Use a call to pg_utf8_verifychar in the slow path.\n> - Reduce pg_utf8_verifychar to thin wrapper around pg_utf8_verifychar_internal.\n\n- check_ascii() seems to be used only for 64-bit chunks. So why not\nremove the len argument and the len <= sizeof(int64) checks inside the\nfunction. We can rename it to check_ascii64() for clarity.\n\n- I was thinking, why not have a pg_utf8_verify64() that processes\n64-bit chunks (or a 32-bit version). In check_ascii(), we anyway\nextract a 64-bit chunk from the string. We can use the same chunk to\nextract the required bits from a two byte char or a 4 byte char. This\nway we can avoid extraction of separate bytes like b1 = *s; b2 = s[1]\netc. More importantly, we can avoid the separate continuation-char\nchecks for each individual byte. Additionally, we can try to simplify\nthe subsequent overlong or surrogate char checks. 
Something like this\n:\n\nint pg_utf8_verifychar_32(uint32 chunk)\n{\n int len, l;\n\n for (len = sizeof(chunk); len > 0; (len -= l), (chunk = chunk << l))\n {\n /* Is 2-byte lead */\n if ((chunk & 0xF0000000) == 0xC0000000)\n {\n l = 2;\n /* ....... ....... */\n }\n /* Is 3-byte lead */\n else if ((chunk & 0xF0000000) == 0xE0000000)\n {\n l = 3;\n if (len < l)\n break;\n\n /* b2 and b3 should be continuation bytes */\n if ((chunk & 0x00C0C000) != 0x00808000)\n return sizeof(chunk) - len;\n\n switch (chunk & 0xFF200000)\n {\n /* check 3-byte overlong: 1110.0000 1001.xxxx 10xx.xxxx\n * i.e. (b1 == 0xE0 && b2 < 0xA0). We already know b2\nis of the form\n * 10xx since it's a continuation char. Additionally\ncondition b2 <=\n * 0x9F means it is of the form 100x.xxxx. i.e.\neither 1000.xxxx\n * or 1001.xxxx. So just verify that it is xx0x.xxxx\n */\n case 0xE0000000:\n return sizeof(chunk) - len;\n\n /* check surrogate: 1110.1101 101x.xxxx 10xx.xxxx\n * i.e. (b1 == 0xED && b2 > 0x9F): Here, > 0x9F means either\n * 1010.xxxx, 1011.xxxx, 1100.xxxx, or 1110.xxxx. Last\ntwo are not\n * possible because b2 is a continuation char. So it has to be\n * first two. So just verify that it is xx1x.xxxx\n */\n case 0xED200000:\n return sizeof(chunk) - len;\n default:\n ;\n }\n\n }\n /* Is 4-byte lead */\n else if ((chunk & 0xF0000000) == 0xF0000000)\n {\n /* ......... */\n l = 4;\n }\n else\n return sizeof(chunk) - len;\n }\n return sizeof(chunk) - len;\n}\n\n\n", "msg_date": "Thu, 15 Jul 2021 10:39:48 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Thu, Jul 15, 2021 at 1:10 AM Amit Khandekar <amitdkhan.pg@gmail.com>\nwrote:\n\n> - check_ascii() seems to be used only for 64-bit chunks. So why not\n> remove the len argument and the len <= sizeof(int64) checks inside the\n> function. 
We can rename it to check_ascii64() for clarity.\n\nThanks for taking a look!\n\nWell yes, but there's nothing so intrinsic to 64 bits that the name needs\nto reflect that. Earlier versions worked on 16 bytes at time. The compiler\nwill optimize away the len check, but we could replace with an assert\ninstead.\n\n> - I was thinking, why not have a pg_utf8_verify64() that processes\n> 64-bit chunks (or a 32-bit version). In check_ascii(), we anyway\n> extract a 64-bit chunk from the string. We can use the same chunk to\n> extract the required bits from a two byte char or a 4 byte char. This\n> way we can avoid extraction of separate bytes like b1 = *s; b2 = s[1]\n> etc.\n\nLoading bytes from L1 is really fast -- I wouldn't even call it\n\"extraction\".\n\n> More importantly, we can avoid the separate continuation-char\n> checks for each individual byte.\n\nOn a pipelined superscalar CPU, I wouldn't expect it to matter in the\nslightest.\n\n> Additionally, we can try to simplify\n> the subsequent overlong or surrogate char checks. Something like this\n\nMy recent experience with itemptrs has made me skeptical of this kind of\nthing, but the idea was interesting enough that I couldn't resist trying it\nout. I have two attempts, which are attached as v16*.txt and apply\nindependently. They are rough, and some comments are now lies. To simplify\nthe constants, I do shift down to uint32, and I didn't bother working\naround that. v16alpha regressed on worst-case input, so for v16beta I went\nback to earlier coding for the one-byte ascii check. That helped, but it's\nstill slower than v14.\n\nThat was not unexpected, but I was mildly shocked to find out that v15 is\nalso slower than the v14 that Heikki posted. The only non-cosmetic\ndifference is using pg_utf8_verifychar_internal within pg_utf8_verifychar.\nI'm not sure why it would make such a big difference here. 
The numbers on\nPower8 / gcc 4.8 (little endian):\n\nHEAD:\n\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 2951 | 1521 | 871 | 1474 | 1508\n\nv14:\n\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 885 | 607 | 179 | 774 | 1325\n\nv15:\n\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1085 | 671 | 180 | 1032 | 1799\n\nv16alpha:\n\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1268 | 822 | 180 | 1410 | 2518\n\nv16beta:\n\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1096 | 654 | 182 | 814 | 1403\n\n\nAs it stands now, for v17 I'm inclined to go back to v15, but without the\nattempt at being clever that seems to have slowed it down from v14.\n\nAny interest in testing on 64-bit Arm?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 15 Jul 2021 14:12:43 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "I wrote:\n\n> To simplify the constants, I do shift down to uint32, and I didn't bother\nworking around that. v16alpha regressed on worst-case input, so for v16beta\nI went back to earlier coding for the one-byte ascii check. That helped,\nbut it's still slower than v14.\n\nIt occurred to me that I could rewrite the switch test into simple\ncomparisons, like I already had for the 2- and 4-byte lead cases. While at\nit, I folded the leading byte and continuation tests into a single\noperation, like this:\n\n/* 3-byte lead with two continuation bytes */\nelse if ((chunk & 0xF0C0C00000000000) == 0xE080800000000000)\n\n...and also tried using 64-bit constants to avoid shifting. 
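To make the fold described above concrete, here is a minimal standalone sketch (32-bit chunk for brevity; `load_be32` and the function name are illustrative, not taken from the patch — only the mask-and-compare idea is):

```c
#include <assert.h>
#include <stdint.h>

/* Load 4 bytes in big-endian order, so the first input byte lands in the
 * most significant byte and lines up with the mask constants below. */
static uint32_t
load_be32(const unsigned char *s)
{
	return ((uint32_t) s[0] << 24) | ((uint32_t) s[1] << 16) |
		((uint32_t) s[2] << 8) | (uint32_t) s[3];
}

/* One mask-and-compare tests the 3-byte lead pattern (1110.xxxx) and both
 * continuation bytes (10xx.xxxx) at once; the low byte is ignored. */
static int
is_3byte_seq(uint32_t chunk)
{
	return (chunk & 0xF0C0C000) == 0xE0808000;
}
```

The same trick extends to 64-bit constants (as in the quoted `0xF0C0C00000000000` / `0xE080800000000000` pair), avoiding a separate continuation check per byte.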
Still didn't\nquite beat v14, but got pretty close:\n\n> The numbers on Power8 / gcc 4.8 (little endian):\n>\n> HEAD:\n>\n> chinese | mixed | ascii | mixed16 | mixed8\n> ---------+-------+-------+---------+--------\n> 2951 | 1521 | 871 | 1474 | 1508\n>\n> v14:\n>\n> chinese | mixed | ascii | mixed16 | mixed8\n> ---------+-------+-------+---------+--------\n> 885 | 607 | 179 | 774 | 1325\n\nv16gamma:\n\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 952 | 632 | 180 | 800 | 1333\n\nA big-endian 64-bit platform just might shave enough cycles to beat v14\nthis way... or not.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 15 Jul 2021 18:00:05 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "Have you considered shift-based DFA for a portable implementation\nhttps://gist.github.com/pervognsen/218ea17743e1442e59bb60d29b1aa725 ?\n\nVladimir", "msg_date": "Fri, 16 Jul 2021 08:44:06 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Fri, Jul 16, 2021 at 1:44 AM Vladimir Sitnikov <\nsitnikov.vladimir@gmail.com> wrote:\n>\n> Have you considered shift-based DFA for a portable implementation\nhttps://gist.github.com/pervognsen/218ea17743e1442e59bb60d29b1aa725 ?\n\nI did consider some kind of DFA a while back and it was too slow.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 16 Jul 2021 06:02:49 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "My v16 experimental patches were a bit messy, so I've organized an\nexperimental series that applies cumulatively, to try to trace the effects\nof various things.\n\nv17-0001 is the same as v14. 0002 is a stripped-down implementation of\nAmit's chunk idea for multibyte, and it's pretty good on x86. On Power8,\nnot so much. 0003 and 0004 are shot-in-the-dark guesses to improve it on\nPower8, with some success, but end up making x86 weirdly slow, so I'm\nafraid that could happen on other platforms as well.\n\nv14 still looks like the safe bet for now. It also has the advantage of\nusing the same function both in and out of the fastpath, which will come in\nhandy when moving it to src/port as the fallback for SSE.\n\nPower8, gcc 4.8:\n\nHEAD:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 2944 | 1523 | 871 | 1473 | 1509\n\nv17-0001:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 888 | 607 | 179 | 777 | 1328\n\nv17-0002:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1017 | 718 | 156 | 1213 | 2138\n\nv17-0003:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1205 | 662 | 180 | 767 | 1256\n\nv17-0004:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1085 | 660 | 224 | 868 | 1369\n\n\nMacbook x86, clang 12:\n\nHEAD:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 974 | 691 | 370 | 456 | 526\n\nv17-0001:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 674 | 346 | 78 | 309 | 504\n\nv17-0002:\n chinese 
| mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 516 | 324 | 78 | 331 | 544\n\nv17-0003:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 621 | 537 | 323 | 413 | 602\n\nv17-0004:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 576 | 439 | 154 | 557 | 915\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 16 Jul 2021 19:18:33 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "Forgot the attachments...\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 16 Jul 2021 20:02:33 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "I wrote:\n\n> On Fri, Jul 16, 2021 at 1:44 AM Vladimir Sitnikov <\nsitnikov.vladimir@gmail.com> wrote:\n> >\n> > Have you considered shift-based DFA for a portable implementation\nhttps://gist.github.com/pervognsen/218ea17743e1442e59bb60d29b1aa725 ?\n>\n> I did consider some kind of DFA a while back and it was too slow.\n\nI took a closer look at this \"shift-based DFA\", and it seemed pretty\nstraightforward to implement this on top of my DFA attempt from some months\nago. The DFA technique is not a great fit with our API, since we need to\nreturn how many bytes we found valid. On x86 (not our target for the\nfallback, but convenient to test) all my attempts were either worse than\nHEAD in multiple cases, or showed no improvement for the important ASCII\ncase. 
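The transition trick behind the shift-based DFA under discussion — each state is encoded as a bit offset, so one table lookup plus one shift performs a transition — can be sketched in miniature. This toy automaton (which accepts strings where every 'b' is immediately preceded by an 'a') is illustrative only; it is not the UTF-8 state machine from the gist or the patch:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Shift-based DFA demo: each state is a bit offset (a multiple of 6), and
 * each per-byte table row packs the next-state offsets for every state, so
 * a transition is:  state = row[byte] >> (state & 63);
 */
enum
{
	S_START = 0,				/* start state; accepting */
	S_SEEN_A = 6,				/* just saw 'a'; accepting */
	S_ERR = 12					/* error state; absorbing */
};

/* Build one table row from the per-state successors; S_ERR maps to itself. */
static uint64_t
dfa_row(uint64_t from_start, uint64_t from_seen_a)
{
	return (from_start << S_START) |
		(from_seen_a << S_SEEN_A) |
		((uint64_t) S_ERR << S_ERR);
}

static int
dfa_accepts(const char *s)
{
	uint64_t	row_a = dfa_row(S_SEEN_A, S_SEEN_A);
	uint64_t	row_b = dfa_row(S_ERR, S_START);
	uint64_t	row_other = dfa_row(S_START, S_START);
	uint64_t	state = S_START;

	for (; *s; s++)
	{
		uint64_t	row = (*s == 'a') ? row_a : (*s == 'b') ? row_b : row_other;

		/* the AND is fused into the variable shift on most targets */
		state = row >> (state & 63);
	}
	return (state & 63) != S_ERR;
}
```

The real UTF-8 automaton has more states and a 256-entry category table, but the per-byte work is the same single shift.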
On Power8, it's more compelling, and competitive with v14, so I'll\ncharacterize it on that platform as I describe the patch series:\n\n0001 is a pure DFA, and has decent performance on multibyte, but terrible\non ascii.\n0002 dispatches on the leading byte category, unrolls the DFA loop\naccording to how many valid bytes we need, and only checks the DFA state\nafterwards. It's good on multibyte (3-byte, at least) but still terrible on\nascii.\n0003 adds a 1-byte ascii fast path -- while robust on all inputs, it still\nregresses a bit on ascii.\n0004 uses the same 8-byte ascii check as previous patches do.\n0005 and 0006 use combinations of 1- and 8-byte ascii checks similar to in\nv17.\n\n0005 seems the best on Power8, and is very close to v4. FWIW, v14's\nmeasurements seem lucky and fragile -- if I change any little thing, even\n\n- return -1;\n+ return 0;\n\nit easily loses 100-200ms on non-pure-ascii tests. That said, v14 still\nseems the logical choice, unless there is some further tweak on top of v17\nor v18 that gives some non-x86 platform a significant boost.\n\nPower8, gcc 4.8:\n\nHEAD:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 2944 | 1523 | 871 | 1473 | 1509\n\nv18-0001:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1257 | 1681 | 1385 | 1744 | 2018\n\nv18-0002:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 951 | 1381 | 1217 | 1469 | 1172\n\nv18-0003:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 911 | 1111 | 942 | 1112 | 865\n\nv18-0004:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 987 | 730 | 222 | 1325 | 2306\n\nv18-0005:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 962 | 664 | 180 | 928 | 1179\n\nv18-0006:\n chinese | mixed | ascii | mixed16 | 
mixed8\n---------+-------+-------+---------+--------\n 908 | 663 | 244 | 1026 | 1464\n\nand for comparison,\n\nv14:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 888 | 607 | 179 | 777 | 1328\n\nv17-0003:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1205 | 662 | 180 | 767 | 1256\n\n\nMacbook, clang 12:\n\nHEAD:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 974 | 691 | 370 | 456 | 526\n\nv18-0001:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1334 | 2713 | 2802 | 2665 | 2541\n\nv18-0002:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 733 | 1212 | 1064 | 1034 | 1007\n\nv18-0003:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 653 | 560 | 370 | 420 | 465\n\nv18-0004:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 574 | 402 | 88 | 584 | 1033\n\nv18-0005:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1345 | 730 | 334 | 578 | 909\n\nv18-0006:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 674 | 485 | 153 | 594 | 989\n\nand for comparison,\n\nv14:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 674 | 346 | 78 | 309 | 504\n\nv17-0002:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 516 | 324 | 78 | 331 | 544\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Sun, 18 Jul 2021 21:26:47 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Sat, 17 Jul 2021 at 04:48, John Naylor <john.naylor@enterprisedb.com> wrote:\n> v17-0001 is the same as v14. 
0002 is a stripped-down implementation of Amit's\n> chunk idea for multibyte, and it's pretty good on x86. On Power8, not so\n> much. 0003 and 0004 are shot-in-the-dark guesses to improve it on Power8,\n> with some success, but end up making x86 weirdly slow, so I'm afraid that\n> could happen on other platforms as well.\n\nThanks for trying the chunk approach. I tested your v17 versions on\nArm64. For the chinese characters, v17-0002 gave some improvement over\nv14. But for all the other character sets, there was around 10%\ndegradation w.r.t. v14. I thought maybe the hhton64 call and memcpy()\nfor each mb character might be the culprit, so I tried iterating over\nall the characters in the chunk within the same pg_utf8_verify_one()\nfunction by left-shifting the bits. But that worsened the figures. So\nI gave up that idea.\n\nHere are the numbers on Arm64 :\n\nHEAD:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1781 | 1095 | 628 | 944 | 1151\n\nv14:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 852 | 484 | 144 | 584 | 971\n\n\nv17-0001+2:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 731 | 520 | 152 | 645 | 1118\n\n\nHaven't looked at your v18 patch set yet.\n\n\n", "msg_date": "Mon, 19 Jul 2021 10:53:22 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "Thank you,\n\nIt looks like it is important to have shrx for x86 which appears only when\n-march=x86-64-v3 is used (see\nhttps://github.com/golang/go/issues/47120#issuecomment-877629712 ).\nJust in case: I know x86 would not use fallback implementation, however,\nthe sole purpose of shift-based DFA is to fold all the data-dependent ops\ninto a single instruction.\n\nAn alternative idea: should we optimize for validation of **valid** inputs\nrather than optimizing the worst 
case?\nIn other words, what if the implementation processes all characters always\nand uses a slower method in case of validation failure?\nI would guess it is more important to be faster with accepting valid input\nrather than \"faster to reject invalid input\".\n\nIn shift-DFA approach, it would mean the validation loop would be simpler\nwith fewer branches (see https://godbolt.org/z/hhMxhT6cf ):\n\nstatic inline int\npg_is_valid_utf8(const unsigned char *s, const unsigned char *end) {\n uint64 class;\n uint64 state = BGN;\n while (s < end) { // clang unrolls the loop\n class = ByteCategory[*s++];\n state = class >> (state & DFA_MASK); // <-- note that AND is fused\ninto the shift operation\n }\n return (state & DFA_MASK) != ERR;\n}\n\nNote: GCC does not seem to unroll \"while(s<end)\" loop by default, so manual\nunroll might be worth trying:\n\nstatic inline int\npg_is_valid_utf8(const unsigned char *s, const unsigned char *end) {\n uint64 class;\n uint64 state = BGN;\n while(s < end + 4) {\n for(int i = 0; i < 4; i++) {\n class = ByteCategory[*s++];\n state = class >> (state & DFA_MASK);\n }\n }\n while(s < end) {\n class = ByteCategory[*s++];\n state = class >> (state & DFA_MASK);\n }\n return (state & DFA_MASK) != ERR;\n}\n\n----\n\nstatic int pg_utf8_verifystr2(const unsigned char *s, int len) {\n if (pg_is_valid_utf8(s, s+len)) { // fast path: if string is valid,\nthen just accept it\n return s + len;\n }\n // slow path: the string is not valid, perform a slower analysis\n return s + ....;\n}\n\nVladimir", "msg_date": "Mon, 19 Jul 2021 16:42:57 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Mon, Jul 19, 2021 at 9:43 AM Vladimir Sitnikov <\nsitnikov.vladimir@gmail.com> wrote:\n\n> It looks like it is important to have shrx for x86 which appears only\nwhen -march=x86-64-v3 is used 
(see\nhttps://github.com/golang/go/issues/47120#issuecomment-877629712 ).\n> Just in case: I know x86 wound not use fallback implementation, however,\nthe sole purpose of shift-based DFA is to fold all the data-dependent ops\ninto a single instruction.\n\nI saw mention of that instruction, but didn't understand how important it\nwas, thanks.\n\n> An alternative idea: should we optimize for validation of **valid**\ninputs rather than optimizing the worst case?\n> In other words, what if the implementation processes all characters\nalways and uses a slower method in case of validation failure?\n> I would guess it is more important to be faster with accepting valid\ninput rather than \"faster to reject invalid input\".\n\n> static int pg_utf8_verifystr2(const unsigned char *s, int len) {\n> if (pg_is_valid_utf8(s, s+len)) { // fast path: if string is valid,\nthen just accept it\n> return s + len;\n> }\n> // slow path: the string is not valid, perform a slower analysis\n> return s + ....;\n> }\n\nThat might be workable. We have to be careful because in COPY FROM,\nvalidation is performed on 64kB chunks, and the boundary could fall in the\nmiddle of a multibyte sequence. 
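A minimal sketch of deferring a possibly-split character at a chunk boundary (an illustrative helper, not the logic actually used in the patch; it conservatively defers the trailing multibyte character even when it happens to be complete, since the next round sees it in full):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Continuation bytes look like 10xx.xxxx.  When validating a fixed-size
 * chunk whose end may split a multibyte character, walk back from the end
 * past any continuation bytes, then past the lead byte, and leave that
 * tail for the next round of validation.
 */
static size_t
chunk_split_point(const unsigned char *s, size_t len)
{
	/* skip trailing continuation bytes */
	while (len > 0 && (s[len - 1] & 0xC0) == 0x80)
		len--;
	/* if the last remaining byte is a multibyte lead, defer it too */
	if (len > 0 && (s[len - 1] & 0x80) != 0)
		len--;
	return len;
}
```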
In the SSE version, there is this comment:\n\n+ /*\n+ * NB: This check must be strictly greater-than, otherwise an invalid byte\n+ * at the end might not get detected.\n+ */\n+ while (len > sizeof(__m128i))\n\n...which should have more detail on this.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 19 Jul 2021 11:07:15 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "> On Mon, Jul 19, 2021 at 9:43 AM Vladimir Sitnikov <\nsitnikov.vladimir@gmail.com> wrote:\n\n> > An alternative idea: should we optimize for validation of **valid**\ninputs rather than optimizing the worst case?\n> > In other words, what if the implementation processes all characters\nalways and uses a slower method in case of validation failure?\n> > I would guess it is more important to be faster with accepting valid\ninput rather than \"faster to reject invalid input\".\n>\n> > static int pg_utf8_verifystr2(const unsigned char *s, int len) {\n> > if (pg_is_valid_utf8(s, s+len)) { // fast path: if string is valid,\nthen just accept it\n> > return s + len;\n> > }\n> > // slow path: the string is not valid, perform a slower analysis\n> > return s + ....;\n> > }\n\nThis turned out to be a really good idea (v19 attached):\n\nPower8, gcc 4.8:\n\nHEAD:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 2944 | 1523 | 871 | 1473 | 1509\n\nv14:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 888 | 607 | 179 | 777 | 1328\n\nv19:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 809 | 472 | 223 | 558 | 805\n\nx86 Macbook, clang 12:\n\nHEAD:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 974 | 691 | 370 | 456 | 526\n\nv14:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 674 | 346 | 78 | 
309 | 504\n\nv19:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 379 | 181 | 94 | 219 | 376\n\nNote that the branchy code's worst case (mixed8) is here the same speed as\nmultibyte. With Vladimir's idea * , we call check_ascii only every 8 bytes\nof input, not every time we verify one multibyte character. Also, we only\nhave to check the DFA state every time we loop over 8 bytes, not every time\nwe step through the DFA. That means we have to walk backwards at the end to\nfind the last leading byte, but the SSE code already knew how to do that,\nso I used that logic here in the caller, which will allow some\nsimplification of how the SSE code returns.\n\nThe state check is likely why the ascii case is slightly slower than v14.\nWe could go back to checking ascii 16 bytes at a time, since there's little\npenalty for doing so.\n\n* (Greg was thinking the same thing upthread, but I don't think the branchy\ncode I posted at the time could have taken advantage of this)\n\nI'm pretty confident this improvement is architecture-independent. Next\nmonth I'll clean this up and rebase the SSE patch over this.\n\nI wrote:\n\n> + /*\n> + * NB: This check must be strictly greater-than, otherwise an invalid\nbyte\n> + * at the end might not get detected.\n> + */\n> + while (len > sizeof(__m128i))\n\nNote to self: I actually think this isn't needed anymore since I changed\nhow the SSE code deals with remainder sequences at the end.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 20 Jul 2021 17:24:33 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Sat, Mar 13, 2021 at 4:37 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> On Fri, Mar 12, 2021 at 9:14 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > I was not thinking about auto-vectorizing the code in\n> > pg_validate_utf8_sse42(). 
Rather, I was considering auto-vectorization\n> > inside the individual helper functions that you wrote, such as\n> > _mm_setr_epi8(), shift_right(), bitwise_and(), prev1(), splat(),\n>\n> If the PhD holders who came up with this algorithm thought it possible to do it that way, I'm sure they would have. In reality, simdjson has different files for SSE4, AVX, AVX512, NEON, and Altivec. We can incorporate any of those as needed. That's a PG15 project, though, and I'm not volunteering.\n\nJust for fun/experimentation, here's a quick (and probably too naive)\ntranslation of those helper functions to NEON, on top of the v15\npatch.", "msg_date": "Thu, 22 Jul 2021 03:29:21 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": ">I'm pretty confident this improvement is architecture-independent.\n\nThanks for testing it with different architectures.\n\nIt looks like the same utf8_advance function is good for both fast-path and\nfor the slow path.\nThen pg_utf8_verifychar could be removed altogether along with the\ncorresponding IS_*_BYTE_LEAD macros.\n\nVladimir", "msg_date": "Wed, 21 Jul 2021 19:13:07 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Wed, Jul 21, 2021 at 12:13 PM Vladimir Sitnikov <\nsitnikov.vladimir@gmail.com> wrote:\n> It looks like the same utf8_advance function is good for both fast-path\nand for the slow path.\n> Then pg_utf8_verifychar could be removed altogether along with the\ncorresponding 
IS_*_BYTE_LEAD macros.\n\npg_utf8_verifychar() is a public function usually called\nthrough pg_wchar_table[], so it needs to remain in any case.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 21 Jul 2021 12:41:53 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Wed, Jul 21, 2021 at 11:29 AM Thomas Munro <thomas.munro@gmail.com>\nwrote:\n\n> Just for fun/experimentation, here's a quick (and probably too naive)\n> translation of those helper functions to NEON, on top of the v15\n> patch.\n\nNeat! It's good to make it more architecture-agnostic, and I'm sure we can\nuse quite a bit of this. 
I don't know enough about NEON to comment\nintelligently, but a quick glance through the simdjson source show a couple\ndifferences that might be worth a look:\n\n to_bool(const pg_u8x16_t v)\n {\n+#if defined(USE_NEON)\n+ return vmaxvq_u32((uint32x4_t) v) != 0;\n\n--> return vmaxvq_u8(*this) != 0;\n\n vzero()\n {\n+#if defined(USE_NEON)\n+ return vmovq_n_u8(0);\n\n--> return vdupq_n_u8(0); // or equivalently, splat(0)\n\nis_highbit_set(const pg_u8x16_t v)\n {\n+#if defined(USE_NEON)\n+ return to_bool(bitwise_and(v, vmovq_n_u8(0x80)));\n\n--> return vmaxq_u8(v) > 0x7F\n\n(Technically, their convention is: is_ascii(v) { return vmaxq_u8(v) < 0x80;\n} , but same effect)\n\n+#if defined(USE_NEON)\n+static pg_attribute_always_inline pg_u8x16_t\n+vset(uint8 v0, uint8 v1, uint8 v2, uint8 v3,\n+ uint8 v4, uint8 v5, uint8 v6, uint8 v7,\n+ uint8 v8, uint8 v9, uint8 v10, uint8 v11,\n+ uint8 v12, uint8 v13, uint8 v14, uint8 v15)\n+{\n+ uint8 pg_attribute_aligned(16) values[16] = {\n+ v0, v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15\n+ };\n+ return vld1q_u8(values);\n+}\n\n--> They have this strange beast instead:\n\n // Doing a load like so end ups generating worse code.\n // uint8_t array[16] = {x1, x2, x3, x4, x5, x6, x7, x8,\n // x9, x10,x11,x12,x13,x14,x15,x16};\n // return vld1q_u8(array);\n uint8x16_t x{};\n // incredibly, Visual Studio does not allow x[0] = x1\n x = vsetq_lane_u8(x1, x, 0);\n x = vsetq_lane_u8(x2, x, 1);\n x = vsetq_lane_u8(x3, x, 2);\n...\n x = vsetq_lane_u8(x15, x, 14);\n x = vsetq_lane_u8(x16, x, 15);\n return x;\n\nSince you aligned the array, that might not have the problem alluded to\nabove, and it looks nicer.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 21 Jul 2021 14:16:38 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On Thu, Jul 22, 2021 at 6:16 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> Neat! It's good to make it more architecture-agnostic, and I'm sure we can use quite a bit of this.\n\nOne question is whether this \"one size fits all\" approach will be\nextensible to wider SIMD.\n\n> to_bool(const pg_u8x16_t v)\n> {\n> +#if defined(USE_NEON)\n> + return vmaxvq_u32((uint32x4_t) v) != 0;\n>\n> --> return vmaxvq_u8(*this) != 0;\n\nI chose that lane width because I saw an unsubstantiated claim\nsomewhere that it might be faster, but I have no idea if it matters.\nThe u8 code looks more natural anyway. Changed.\n\n> vzero()\n> {\n> +#if defined(USE_NEON)\n> + return vmovq_n_u8(0);\n>\n> --> return vdupq_n_u8(0); // or equivalently, splat(0)\n\nI guess it doesn't make a difference which builtin you use here, but I\nwas influenced by the ARM manual which says the vdupq form is\ngenerated for immediate values.\n\n> is_highbit_set(const pg_u8x16_t v)\n> {\n> +#if defined(USE_NEON)\n> + return to_bool(bitwise_and(v, vmovq_n_u8(0x80)));\n>\n> --> return vmaxq_u8(v) > 0x7F\n\nAh, of course. 
Much nicer!\n\n> +#if defined(USE_NEON)\n> +static pg_attribute_always_inline pg_u8x16_t\n> +vset(uint8 v0, uint8 v1, uint8 v2, uint8 v3,\n> + uint8 v4, uint8 v5, uint8 v6, uint8 v7,\n> + uint8 v8, uint8 v9, uint8 v10, uint8 v11,\n> + uint8 v12, uint8 v13, uint8 v14, uint8 v15)\n> +{\n> + uint8 pg_attribute_aligned(16) values[16] = {\n> + v0, v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15\n> + };\n> + return vld1q_u8(values);\n> +}\n>\n> --> They have this strange beast instead:\n>\n> // Doing a load like so end ups generating worse code.\n> // uint8_t array[16] = {x1, x2, x3, x4, x5, x6, x7, x8,\n> // x9, x10,x11,x12,x13,x14,x15,x16};\n> // return vld1q_u8(array);\n> uint8x16_t x{};\n> // incredibly, Visual Studio does not allow x[0] = x1\n> x = vsetq_lane_u8(x1, x, 0);\n> x = vsetq_lane_u8(x2, x, 1);\n> x = vsetq_lane_u8(x3, x, 2);\n> ...\n> x = vsetq_lane_u8(x15, x, 14);\n> x = vsetq_lane_u8(x16, x, 15);\n> return x;\n>\n> Since you aligned the array, that might not have the problem alluded to above, and it looks nicer.\n\nStrange indeed. We should probably poke around in the assembler output and\nsee... it might be that MSVC doesn't like it, and I was just\ncargo-culting the alignment. 
I don't expect the generated code to\nreally \"load\" anything of course, it should ideally be some kind of\nimmediate mov...\n\nFWIW here are some performance results from my humble RPI4:\n\nmaster:\n\n chinese | mixed | ascii\n---------+-------+-------\n 4172 | 2763 | 1823\n(1 row)\n\nYour v15 patch:\n\n chinese | mixed | ascii\n---------+-------+-------\n 2267 | 1248 | 399\n(1 row)\n\nYour v15 patch set + the NEON patch, configured with USE_UTF8_SIMD=1:\n\n chinese | mixed | ascii\n---------+-------+-------\n 909 | 620 | 318\n(1 row)\n\nIt's so good I wonder if it's producing incorrect results :-)\n\nI also tried to do a quick and dirty AltiVec patch to see if it could\nfit into the same code \"shape\", with less immediate success: it works\nout slower than the fallback code on the POWER7 machine I scrounged an\naccount on. I'm not sure what's wrong there, but maybe it's a useful\nstart (I'm probably confused about endianness, or the encoding of\nboolean vectors which may be different (is true 0x01 or 0xff, does it\nmatter?), or something else, and it's falling back on errors all the\ntime?).", "msg_date": "Thu, 22 Jul 2021 12:07:26 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "On Wed, Jul 21, 2021 at 8:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Thu, Jul 22, 2021 at 6:16 AM John Naylor\n\n> One question is whether this \"one size fits all\" approach will be\n> extensible to wider SIMD.\n\nSure, it'll just take a little more work and complexity. 
For one, 16-byte\nSIMD can operate on 32-byte chunks with a bit of repetition:\n\n- __m128i input;\n+ __m128i input1;\n+ __m128i input2;\n\n-#define SIMD_STRIDE_LENGTH (sizeof(__m128i))\n+#define SIMD_STRIDE_LENGTH 32\n\n while (len >= SIMD_STRIDE_LENGTH)\n {\n- input = vload(s);\n+ input1 = vload(s);\n+ input2 = vload(s + sizeof(input1));\n\n- check_for_zeros(input, &error);\n+ check_for_zeros(input1, &error);\n+ check_for_zeros(input2, &error);\n\n /*\n * If the chunk is all ASCII, we can skip the full UTF-8\ncheck, but we\n@@ -460,17 +463,18 @@ pg_validate_utf8_sse42(const unsigned char *s, int\nlen)\n * sequences at the end. We only update prev_incomplete if\nthe chunk\n * contains non-ASCII, since the error is cumulative.\n */\n- if (is_highbit_set(input))\n+ if (is_highbit_set(bitwise_or(input1, input2)))\n {\n- check_utf8_bytes(prev, input, &error);\n- prev_incomplete = is_incomplete(input);\n+ check_utf8_bytes(prev, input1, &error);\n+ check_utf8_bytes(input1, input2, &error);\n+ prev_incomplete = is_incomplete(input2);\n }\n else\n {\n error = bitwise_or(error, prev_incomplete);\n }\n\n- prev = input;\n+ prev = input2;\n s += SIMD_STRIDE_LENGTH;\n len -= SIMD_STRIDE_LENGTH;\n }\n\nSo with a few #ifdefs, we can accommodate two sizes if we like.\n\nFor another, the prevN() functions would need to change, at least on x86 --\nthat would require replacing _mm_alignr_epi8() with _mm256_alignr_epi8()\nplus _mm256_permute2x128_si256(). Also, we might have to do something with\nthe vector typedef.\n\nThat said, I think we can punt on that until we have an application that's\nmuch more compute-intensive. 
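[Editorial aside: a sketch of the idea in that hunk, in plain Python rather than intrinsics (names here are illustrative, not the patch's). OR-ing the two 16-byte chunks together first means a single high-bit test covers both, since any byte >= 0x80 in either chunk survives the OR.]

```python
def pair_is_ascii(chunk1: bytes, chunk2: bytes) -> bool:
    # Combine the two chunks byte-wise, as
    # is_highbit_set(bitwise_or(input1, input2)) does in the diff above,
    # then test all the high bits in one step.
    combined = (int.from_bytes(chunk1, "little")
                | int.from_bytes(chunk2, "little"))
    high_bits = int.from_bytes(b"\x80" * max(len(chunk1), len(chunk2)),
                               "little")
    return (combined & high_bits) == 0
```

If this returns true, the full multibyte check can be skipped for both chunks, at the cost of a single extra OR.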
As it is with SSE4, COPY FROM WHERE <selective\npredicate> already pushes the utf8 validation way down in profiles.\n\n> FWIW here are some performance results from my humble RPI4:\n>\n> master:\n>\n> chinese | mixed | ascii\n> ---------+-------+-------\n> 4172 | 2763 | 1823\n> (1 row)\n>\n> Your v15 patch:\n>\n> chinese | mixed | ascii\n> ---------+-------+-------\n> 2267 | 1248 | 399\n> (1 row)\n>\n> Your v15 patch set + the NEON patch, configured with USE_UTF8_SIMD=1:\n>\n> chinese | mixed | ascii\n> ---------+-------+-------\n> 909 | 620 | 318\n> (1 row)\n>\n> It's so good I wonder if it's producing incorrect results :-)\n\nNice! If it passes regression tests, it *should* be fine, but stress\ntesting would be welcome on any platform.\n\n> I also tried to do a quick and dirty AltiVec patch to see if it could\n> fit into the same code \"shape\", with less immediate success: it works\n> out slower than the fallback code on the POWER7 machine I scrounged an\n> account on. I'm not sure what's wrong there, but maybe it's a uesful\n> start (I'm probably confused about endianness, or the encoding of\n> boolean vectors which may be different (is true 0x01or 0xff, does it\n> matter?), or something else, and it's falling back on errors all the\n> time?).\n\nHmm, I have access to a power8 machine to play with, but I also don't mind\nhaving some type of server-class hardware that relies on the recent nifty\nDFA fallback, which performs even better on powerpc64le than v15.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 22 Jul 2021 07:38:50 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [POC] verifying UTF-8 using SIMD instructions" }, { "msg_contents": "Attached is v20, which has a number of improvements:\n\n1. Cleaned up and explained DFA coding.\n2. 
Adjusted check_ascii to return bool (now called is_valid_ascii) and to\nproduce an optimized loop, using branch-free accumulators. That way, it\ndoesn't need to be rewritten for different input lengths. I also think it's\na bit easier to understand this way.\n3. Put SSE helper functions in their own file.\n4. Mostly-cosmetic edits to the configure detection.\n5. Draft commit message.\n\nWith #2 above in place, I wanted to try different strides for the DFA, so\nmore measurements (hopefully not much more of these):\n\nPower8, gcc 4.8\n\nHEAD:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 2944 | 1523 | 871 | 1473 | 1509\n\nv20, 8-byte stride:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1189 | 550 | 246 | 600 | 936\n\nv20, 16-byte stride (in the actual patch):\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 981 | 440 | 134 | 791 | 820\n\nv20, 32-byte stride:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 857 | 481 | 141 | 834 | 839\n\nBased on the above, I decided that 16 bytes had the best overall balance.\nOther platforms may differ, but I don't expect it to make a huge amount of\ndifference.\n\nJust for fun, I was also a bit curious about what Vladimir mentioned\nupthread about x86-64-v3 offering a different shift instruction. 
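[Editorial aside: for illustration, the branch-free accumulation behind item 2's is_valid_ascii might look like this in Python on 64-bit words. This is a sketch only — the actual C helper may differ in detail, and I'm assuming it must also reject zero bytes, since NUL is not valid in a Postgres text string.]

```python
def is_valid_ascii(chunk: bytes) -> bool:
    # Fold the chunk eight bytes at a time into two accumulators and
    # branch only once, at the end of the loop.
    assert len(chunk) % 8 == 0
    ONES = 0x0101010101010101
    HIGH = 0x8080808080808080
    highbits = 0
    zero_seen = 0
    for i in range(0, len(chunk), 8):
        word = int.from_bytes(chunk[i:i + 8], "little")
        highbits |= word            # any byte >= 0x80 sets a high bit
        # classic zero-byte detector: nonzero iff some byte of word is 0x00
        zero_seen |= (word - ONES) & ~word & HIGH
    return (highbits & HIGH) == 0 and zero_seen == 0
```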
Somehow,\nclang 12 refused to build with that target, even though the release notes\nsay it can, but gcc 11 was fine:\n\nx86 Macbook, gcc 11, USE_FALLBACK_UTF8=1:\n\nHEAD:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 1200 | 728 | 370 | 544 | 637\n\nv20:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 459 | 243 | 77 | 424 | 440\n\nv20, CFLAGS=\"-march=x86-64-v3 -O2\" :\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 390 | 215 | 77 | 303 | 323\n\nAnd, gcc does generate the desired shift here:\n\nobjdump -S src/port/pg_utf8_fallback.o | grep shrx\n 53: c4 e2 eb f7 d1 shrxq %rdx, %rcx, %rdx\n\nWhile it looks good, clang can do about as good by simply unrolling all 16\nshifts in the loop, which gcc won't do. To be clear, it's irrelevant, since\nx86-64-v3 includes AVX2, and if we had that we would just use it with the\nSIMD algorithm.\n\nMacbook x86, clang 12:\n\nHEAD:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 974 | 691 | 370 | 456 | 526\n\nv20, USE_FALLBACK_UTF8=1:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 351 | 172 | 88 | 349 | 350\n\nv20, with SSE4:\n chinese | mixed | ascii | mixed16 | mixed8\n---------+-------+-------+---------+--------\n 142 | 92 | 59 | 141 | 141\n\nI'm pretty happy with the patch at this point.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 26 Jul 2021 07:09:00 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "Just wondering, do you have the code in a GitHub/Gitlab branch?\n\n>+ utf8_advance(s, state, len);\n>+\n>+ /*\n>+ * If we saw an error during the loop, let the caller handle it. 
We treat\n>+ * all other states as success.\n>+ */\n>+ if (state == ERR)\n>+ return 0;\n\nDid you mean state = utf8_advance(s, state, len); there? (reassign state\nvariable)\n\n>I wanted to try different strides for the DFA\n\nDoes that (and \"len >= 32\" condition) mean the patch does not improve\nvalidation of the shorter strings (the ones less than 32 bytes)?\nIt would probably be nice to cover them as well (e.g. with 4 or 8-byte\nstrides)\n\nVladimir", "msg_date": "Mon, 26 Jul 2021 14:55:29 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Mon, Jul 26, 2021 at 7:55 AM Vladimir Sitnikov <\nsitnikov.vladimir@gmail.com> wrote:\n>\n> Just wondering, do you have the code in a GitHub/Gitlab branch?\n>\n> >+ utf8_advance(s, state, len);\n> >+\n> >+ /*\n> >+ * If we saw an error during the loop, let the caller handle it. We\ntreat\n> >+ * all other states as success.\n> >+ */\n> >+ if (state == ERR)\n> >+ return 0;\n>\n> Did you mean state = utf8_advance(s, state, len); there? (reassign state\nvariable)\n\nYep, that's a bug, thanks for catching!\n\n> >I wanted to try different strides for the DFA\n>\n> Does that (and \"len >= 32\" condition) mean the patch does not improve\nvalidation of the shorter strings (the ones less than 32 bytes)?\n\nRight. 
Also, the 32 byte threshold was just a temporary need for testing\n32-byte stride -- testing different thresholds wouldn't hurt. I'm not\nterribly concerned about short strings, though, as long as we don't\nregress. That said, Heikki had something in his v14 [1] that we could use:\n\n+/*\n+ * Subroutine of pg_utf8_verifystr() to check on char. Returns the length\nof the\n+ * character at *s in bytes, or 0 on invalid input or premature end of\ninput.\n+ *\n+ * XXX: could this be combined with pg_utf8_verifychar above?\n+ */\n+static inline int\n+pg_utf8_verify_one(const unsigned char *s, int len)\n\nIt would be easy to replace pg_utf8_verifychar with this. It might even\nspeed up the SQL function length_in_encoding() -- that would be a better\nreason to do it.\n\n[1]\nhttps://www.postgresql.org/message-id/2f95e70d-4623-87d4-9f24-ca534155f179%40iki.fi\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 26 Jul 2021 08:56:52 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Mon, Jul 26, 2021 at 7:55 AM Vladimir Sitnikov <\nsitnikov.vladimir@gmail.com> wrote:\n>\n> Just wondering, do you have the code in a GitHub/Gitlab branch?\n\nSorry, I didn't see this earlier. No, I don't.\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 26 Jul 2021 08:58:37 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "I wrote:\n\n> On Mon, Jul 26, 2021 at 7:55 AM Vladimir Sitnikov <\nsitnikov.vladimir@gmail.com> wrote:\n> >\n> > >+ utf8_advance(s, state, len);\n> > >+\n> > >+ /*\n> > >+ * If we saw an error during the loop, let the caller handle it. We\ntreat\n> > >+ * all other states as success.\n> > >+ */\n> > >+ if (state == ERR)\n> > >+ return 0;\n> >\n> > Did you mean state = utf8_advance(s, state, len); there? (reassign\nstate variable)\n>\n> Yep, that's a bug, thanks for catching!\n\nFixed in v21, with a regression test added. 
Also, utf8_advance() now\ndirectly changes state by a passed pointer rather than returning a value.\nSome cosmetic changes:\n\ns/valid_bytes/non_error_bytes/ since the former is kind of misleading now.\n\nSome other var name and symbol changes. In my first DFA experiment, ASC\nconflicted with the parser or scanner somehow, but it doesn't here, so it's\nclearer to use this.\n\nRewrote a lot of comments about the state machine and regression tests.\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 28 Jul 2021 14:12:11 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Mon, Jul 26, 2021 at 8:56 AM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n> >\n> > Does that (and \"len >= 32\" condition) mean the patch does not improve\nvalidation of the shorter strings (the ones less than 32 bytes)?\n>\n> Right. Also, the 32 byte threshold was just a temporary need for testing\n32-byte stride -- testing different thresholds wouldn't hurt. I'm not\nterribly concerned about short strings, though, as long as we don't\nregress.\n\nI put together the attached quick test to try to rationalize the fast-path\nthreshold. (In case it isn't obvious, it must be at least 16 on all builds,\nsince wchar.c doesn't know which implementation it's calling, and SSE\nregister width sets the lower bound.) I changed the threshold first to 16,\nand then 100000, which will force using the byte-at-a-time code.\n\nIf we have only 16 bytes in the input, it still seems to be faster to use\nSSE, even though it's called through a function pointer on x86. 
I didn't\ntest the DFA path, but I don't think the conclusion would be different.\nI'll include the 16 threshold next time I need to update the patch.\n\nMacbook x86, clang 12:\n\nmaster + use 16:\n asc16 | asc32 | asc64 | mb16 | mb32 | mb64\n-------+-------+-------+------+------+------\n 270 | 279 | 282 | 291 | 296 | 304\n\nforce byte-at-a-time:\n asc16 | asc32 | asc64 | mb16 | mb32 | mb64\n-------+-------+-------+------+------+------\n 277 | 292 | 310 | 296 | 317 | 362\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 29 Jul 2021 21:12:33 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "I wrote:\n> If we have only 16 bytes in the input, it still seems to be faster to use\nSSE, even though it's called through a function pointer on x86. I didn't\ntest the DFA path, but I don't think the conclusion would be different.\nI'll include the 16 threshold next time I need to update the patch.\n\nv22 attached, which changes the threshold to 16, with a few other cosmetic\nadjustments, mostly in the comments.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 4 Aug 2021 07:22:57 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "Naively, the shift-based DFA requires 64-bit integers to encode the\ntransitions, but I recently came across an idea from Dougall Johnson of\nusing the Z3 SMT solver to pack the transitions into 32-bit integers [1].\nThat halves the size of the transition table for free. I adapted that\neffort to the existing conventions in v22 and arrived at the attached\npython script. 
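[Editorial aside: to make the shift-based mechanism concrete before the real packed table below, here is a deliberately tiny version — three states and four byte classes, accepting only ASCII plus two-byte C2-DF sequences, nothing like the full state machine the script packs.]

```python
# Toy shift-based DFA: each state is a bit offset, each byte class has one
# transition word, and a step is just a shift and a mask.
ERR, START, CS1 = 0, 6, 12              # state offsets, six bits apart

def row(b: int) -> int:
    if b < 0x80:                        # ASCII: START -> START
        return START << START
    if 0xC2 <= b <= 0xDF:               # two-byte lead: START -> CS1
        return CS1 << START
    if 0x80 <= b <= 0xBF:               # continuation: CS1 -> START
        return START << CS1
    return 0                            # everything else -> ERR

def toy_validate(data: bytes) -> bool:
    state = START
    for b in data:
        state = (row(b) >> state) & 63
    return state == START               # must not end mid-sequence
```

Note how the error state is absorbing for free: every row has zero bits at offset 0, so once state reaches 0 it stays 0 with no per-byte branch — the property the packed 64-bit (and now 32-bit) rows rely on.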
Running the script outputs the following:\n\n$ python dfa-pack-pg.py\noffsets: [0, 11, 16, 1, 5, 6, 20, 25, 30]\ntransitions:\n00000000000000000000000000000000 0x0\n00000000000000000101100000000000 0x5800\n00000000000000001000000000000000 0x8000\n00000000000000000000100000000000 0x800\n00000000000000000010100000000000 0x2800\n00000000000000000011000000000000 0x3000\n00000000000000001010000000000000 0xa000\n00000000000000001100100000000000 0xc800\n00000000000000001111000000000000 0xf000\n01000001000010110000000000100000 0x410b0020\n00000011000010110000000000100000 0x30b0020\n00000010000010110000010000100000 0x20b0420\n\nI'll include something like the attached text file diff in the next patch.\nSome comments are now outdated, but this is good enough for demonstration.\n\n[1] https://gist.github.com/dougallj/166e326de6ad4cf2c94be97a204c025f\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 24 Aug 2021 12:00:28 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "I wrote:\n\n> Naively, the shift-based DFA requires 64-bit integers to encode the\ntransitions, but I recently came across an idea from Dougall Johnson of\nusing the Z3 SMT solver to pack the transitions into 32-bit integers [1].\nThat halves the size of the transition table for free. I adapted that\neffort to the existing conventions in v22 and arrived at the attached\npython script.\n> [...]\n> I'll include something like the attached text file diff in the next\npatch. 
Some comments are now outdated, but this is good enough for\ndemonstration.\n\nAttached is v23 incorporating the 32-bit transition table, with the\nnecessary comment adjustments.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 26 Aug 2021 11:35:54 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": ">Attached is v23 incorporating the 32-bit transition table, with the\nnecessary comment adjustments\n\n32bit table is nice.\n\n\nWould you please replace\nhttps://github.com/BobSteagall/utf_utils/blob/master/src/utf_utils.cpp URL\nwith\nhttps://github.com/BobSteagall/utf_utils/blob/6b7a465265de2f5fa6133d653df0c9bdd73bbcf8/src/utf_utils.cpp\nin the header of src/port/pg_utf8_fallback.c?\n\nIt would make the URL more stable in case the file gets renamed.\n\nVladimir", "msg_date": "Thu, 26 Aug 2021 19:08:54 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "I've decided I'm not quite comfortable with the additional complexity in\nthe build system introduced by the SIMD portion of the previous patches. It\nwould make more sense if the pure C portion were unchanged, but with the\nshift-based DFA plus the bitwise ASCII check, we have a portable\nimplementation that's still a substantial improvement over the current\nvalidator. 
In v24, I've included only that much, and the diff is only about\n1/3 as many lines. If future improvements to COPY FROM put additional\npressure on this path, we can always add SIMD support later.\n\nOne thing not in this patch is a possible improvement to\npg_utf8_verifychar() that Heikki and I worked on upthread as part of\nearlier attempts to rewrite pg_utf8_verifystr(). That's worth looking into\nseparately.\n\nOn Thu, Aug 26, 2021 at 12:09 PM Vladimir Sitnikov <\nsitnikov.vladimir@gmail.com> wrote:\n>\n> >Attached is v23 incorporating the 32-bit transition table, with the\nnecessary comment adjustments\n>\n> 32bit table is nice.\n\nThanks for taking a look!\n\n> Would you please replace\nhttps://github.com/BobSteagall/utf_utils/blob/master/src/utf_utils.cpp URL\nwith\n>\nhttps://github.com/BobSteagall/utf_utils/blob/6b7a465265de2f5fa6133d653df0c9bdd73bbcf8/src/utf_utils.cpp\n> in the header of src/port/pg_utf8_fallback.c?\n>\n> It would make the URL more stable in case the file gets renamed.\n>\n> Vladimir\n>\n\nMakes sense, so done that way.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 19 Oct 2021 17:42:40 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "It occurred to me that the DFA + ascii quick check approach could also\nbe adapted to speed up some cases where we currently walk a string\ncounting characters, like this snippet in\ntext_position_get_match_pos():\n\n/* Convert the byte position to char position. */\nwhile (state->refpoint < state->last_match)\n{\n state->refpoint += pg_mblen(state->refpoint);\n state->refpos++;\n}\n\nThis coding changed in 9556aa01c69 (Use single-byte\nBoyer-Moore-Horspool search even with multibyte encodings), in which I\nfound the majority of cases were faster, but some were slower. 
It\nwould be nice to regain the speed lost and do even better.\n\nIn the case of UTF-8, we could just run it through the DFA,\nincrementing a count of the states found. The number of END states\nshould be the number of characters. The ascii quick check would still\nbe applicable as well. I think all that is needed is to export some\nsymbols and add the counting function. That wouldn't materially affect\nthe current patch for input verification, and would be separate, but\nit would be nice to get the symbol visibility right up front. I've set\nthis to waiting on author while I experiment with that.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Dec 2021 14:11:46 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On 20/10/2021 00:42, John Naylor wrote:\n> I've decided I'm not quite comfortable with the additional complexity in \n> the build system introduced by the SIMD portion of the previous patches. \n> It would make more sense if the pure C portion were unchanged, but with \n> the shift-based DFA plus the bitwise ASCII check, we have a portable \n> implementation that's still a substantial improvement over the current \n> validator. In v24, I've included only that much, and the diff is only \n> about 1/3 as many lines. If future improvements to COPY FROM put \n> additional pressure on this path, we can always add SIMD support later.\n\n+1.\n\nI had another look at this now. 
Looks good, just a few minor comments below:\n\n> +/*\n> + * Verify a chunk of bytes for valid ASCII, including a zero-byte check.\n> + */\n> +static inline bool\n> +is_valid_ascii(const unsigned char *s, int len)\n> +{\n> +\tuint64\t\tchunk,\n> +\t\t\t\thighbit_cum = UINT64CONST(0),\n> +\t\t\t\tzero_cum = UINT64CONST(0x8080808080808080);\n> +\n> +\tAssert(len % sizeof(chunk) == 0);\n> +\n> +\twhile (len >= sizeof(chunk))\n> +\t{\n> +\t\tmemcpy(&chunk, s, sizeof(chunk));\n> +\n> +\t\t/*\n> +\t\t * Capture any zero bytes in this chunk.\n> +\t\t *\n> +\t\t * First, add 0x7f to each byte. This sets the high bit in each byte,\n> +\t\t * unless it was a zero. We will check later that none of the bytes in\n> +\t\t * the chunk had the high bit set, in which case the max value each\n> +\t\t * byte can have after the addition is 0x7f + 0x7f = 0xfe, and we\n> +\t\t * don't need to worry about carrying over to the next byte.\n> +\t\t *\n> +\t\t * If any resulting high bits are zero, the corresponding high bits in\n> +\t\t * the zero accumulator will be cleared.\n> +\t\t */\n> +\t\tzero_cum &= (chunk + UINT64CONST(0x7f7f7f7f7f7f7f7f));\n> +\n> +\t\t/* Capture any set bits in this chunk. */\n> +\t\thighbit_cum |= chunk;\n> +\n> +\t\ts += sizeof(chunk);\n> +\t\tlen -= sizeof(chunk);\n> +\t}\n\nThis function assumes that the input len is a multiple of 8. There's an \nassertion for that, but it would be good to also mention it in the \nfunction comment. I took me a moment to realize that.\n\nGiven that assumption, I wonder if \"while (len >= 0)\" would marginally \nfaster. Or compute \"s_end = s + len\" first, and check for \"while (s < \ns_end)\", so that you don't need to update 'len' in the loop.\n\nAlso would be good to mention what exactly the return value means. 
I.e \n\"returns false if the input contains any bytes with the high-bit set, or \nzeros\".\n\n> +\t/*\n> +\t * Check if any high bits in the zero accumulator got cleared.\n> +\t *\n> +\t * XXX: As noted above, the zero check is only valid if the chunk had no\n> +\t * high bits set. However, the compiler may perform these two checks in\n> +\t * any order. That's okay because if any high bits were set, we would\n> +\t * return false regardless, so invalid results from the zero check don't\n> +\t * matter.\n> +\t */\n> +\tif (zero_cum != UINT64CONST(0x8080808080808080))\n> +\t\treturn false;\n\nI don't understand the \"the compiler may perform these checks in any \norder\" comment. We trust the compiler to do the right thing, and only \nreorder things when it's safe to do so. What is special here, why is it \nworth mentioning here?\n\n> @@ -1721,7 +1777,7 @@ pg_gb18030_verifystr(const unsigned char *s, int len)\n> \treturn s - start;\n> }\n> \n> -static int\n> +static pg_noinline int\n> pg_utf8_verifychar(const unsigned char *s, int len)\n> {\n> \tint\t\t\tl;\n\nWhy force it to not be inlined?\n\n> + * In a shift-based DFA, the input byte is an index into array of integers\n> + * whose bit pattern encodes the state transitions. To compute the current\n> + * state, we simply right-shift the integer by the current state and apply a\n> + * mask. In this scheme, the address of the transition only depends on the\n> + * input byte, so there is better pipelining.\n\nShould be \"To compute the *next* state, ...\", I think.\n\nThe way the state transition table works is pretty inscrutable. 
That's \nunderstandable, because the values were found by an SMT solver, so I'm \nnot sure if anything can be done about it.\n\n- Heikki\n\n\n", "msg_date": "Fri, 10 Dec 2021 20:33:48 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": ">-----Original Message-----\r\n>From: Heikki Linnakangas <hlinnaka@iki.fi> \r\n>Sent: Friday, December 10, 2021 12:34 PM\r\n>To: John Naylor <john.naylor@enterprisedb.com>; Vladimir Sitnikov <sitnikov.vladimir@gmail.com>\r\n>Cc: pgsql-hackers <pgsql-hackers@postgresql.org>; Amit Khandekar <amitdkhan.pg@gmail.com>; Thomas Munro <thomas.munro@gmail.com>; Greg Stark <stark@mit.edu>\r\n>Subject: [EXTERNAL] Re: speed up verifying UTF-8\r\n>\r\n>On 20/10/2021 00:42, John Naylor wrote:\r\n>> I've decided I'm not quite comfortable with the additional complexity \r\n>> in the build system introduced by the SIMD portion of the previous patches.\r\n>> It would make more sense if the pure C portion were unchanged, but \r\n>> with the shift-based DFA plus the bitwise ASCII check, we have a \r\n>> portable implementation that's still a substantial improvement over \r\n>> the current validator. In v24, I've included only that much, and the \r\n>> diff is only about 1/3 as many lines. If future improvements to COPY \r\n>> FROM put additional pressure on this path, we can always add SIMD support later.\r\n>\r\n>+1.\r\n>\r\n>I had another look at this now. 
Looks good, just a few minor comments below:\r\n>\r\n>> +/*\r\n>> + * Verify a chunk of bytes for valid ASCII, including a zero-byte check.\r\n>> + */\r\n>> +static inline bool\r\n>> +is_valid_ascii(const unsigned char *s, int len) {\r\n>> +\tuint64\t\tchunk,\r\n>> +\t\t\t\thighbit_cum = UINT64CONST(0),\r\n>> +\t\t\t\tzero_cum = UINT64CONST(0x8080808080808080);\r\n>> +\r\n>> +\tAssert(len % sizeof(chunk) == 0);\r\n>> +\r\n>> +\twhile (len >= sizeof(chunk))\r\n>> +\t{\r\n>> +\t\tmemcpy(&chunk, s, sizeof(chunk));\r\n>> +\r\n>> +\t\t/*\r\n>> +\t\t * Capture any zero bytes in this chunk.\r\n>> +\t\t *\r\n>> +\t\t * First, add 0x7f to each byte. This sets the high bit in each byte,\r\n>> +\t\t * unless it was a zero. We will check later that none of the bytes in\r\n>> +\t\t * the chunk had the high bit set, in which case the max value each\r\n>> +\t\t * byte can have after the addition is 0x7f + 0x7f = 0xfe, and we\r\n>> +\t\t * don't need to worry about carrying over to the next byte.\r\n>> +\t\t *\r\n>> +\t\t * If any resulting high bits are zero, the corresponding high bits in\r\n>> +\t\t * the zero accumulator will be cleared.\r\n>> +\t\t */\r\n>> +\t\tzero_cum &= (chunk + UINT64CONST(0x7f7f7f7f7f7f7f7f));\r\n>> +\r\n>> +\t\t/* Capture any set bits in this chunk. */\r\n>> +\t\thighbit_cum |= chunk;\r\n>> +\r\n>> +\t\ts += sizeof(chunk);\r\n>> +\t\tlen -= sizeof(chunk);\r\n>> +\t}\r\n>\r\n>This function assumes that the input len is a multiple of 8. There's an assertion for that, but it would be good to also mention it in the function comment. I took me a moment to realize that.\r\n>\r\n>Given that assumption, I wonder if \"while (len >= 0)\" would marginally faster. Or compute \"s_end = s + len\" first, and check for \"while (s < s_end)\", so that you don't need to update 'len' in the loop.\r\n>\r\n>Also would be good to mention what exactly the return value means. 
I.e \"returns false if the input contains any bytes with the high-bit set, or zeros\".\r\n>\r\n>> +\t/*\r\n>> +\t * Check if any high bits in the zero accumulator got cleared.\r\n>> +\t *\r\n>> +\t * XXX: As noted above, the zero check is only valid if the chunk had no\r\n>> +\t * high bits set. However, the compiler may perform these two checks in\r\n>> +\t * any order. That's okay because if any high bits were set, we would\r\n>> +\t * return false regardless, so invalid results from the zero check don't\r\n>> +\t * matter.\r\n>> +\t */\r\n>> +\tif (zero_cum != UINT64CONST(0x8080808080808080))\r\n>> +\t\treturn false;\r\n>\r\n>I don't understand the \"the compiler may perform these checks in any order\" comment. We trust the compiler to do the right thing, and only reorder things when it's safe to do so. What is special here, why is it worth mentioning here?\r\n>\r\n>> @@ -1721,7 +1777,7 @@ pg_gb18030_verifystr(const unsigned char *s, int len)\r\n>> \treturn s - start;\r\n>> }\r\n>> \r\n>> -static int\r\n>> +static pg_noinline int\r\n>> pg_utf8_verifychar(const unsigned char *s, int len) {\r\n>> \tint\t\t\tl;\r\n>\r\n>Why force it to not be inlined?\r\n>\r\n>> + * In a shift-based DFA, the input byte is an index into array of \r\n>> + integers\r\n>> + * whose bit pattern encodes the state transitions. To compute the \r\n>> + current\r\n>> + * state, we simply right-shift the integer by the current state and \r\n>> + apply a\r\n>> + * mask. In this scheme, the address of the transition only depends \r\n>> + on the\r\n>> + * input byte, so there is better pipelining.\r\n>\r\n>Should be \"To compute the *next* state, ...\", I think.\r\n>\r\n>The way the state transition table works is pretty inscrutable. 
That's understandable, because the values were found by an SMT solver, so I'm not sure if anything can be done about it.\r\n>\r\n>- Heikki\r\n>\r\n\r\nIf I remember correctly the shift instruction is very fast...\r\n", "msg_date": "Fri, 10 Dec 2021 18:49:05 +0000", "msg_from": "\"Godfrin, Philippe E\" <Philippe.Godfrin@nov.com>", "msg_from_op": false, "msg_subject": "RE: [EXTERNAL] Re: speed up verifying UTF-8" }, { "msg_contents": "On Fri, Dec 10, 2021 at 2:33 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> I had another look at this now. Looks good, just a few minor comments below:\n\nThanks for reviewing! I've attached v25 to address your points.\n\n> This function assumes that the input len is a multiple of 8. There's an\n> assertion for that, but it would be good to also mention it in the\n> function comment. I took me a moment to realize that.\n\nDone.\n\n> Given that assumption, I wonder if \"while (len >= 0)\" would marginally\n> faster. Or compute \"s_end = s + len\" first, and check for \"while (s <\n> s_end)\", so that you don't need to update 'len' in the loop.\n\nWith two chunks, gcc 4.8.5/11.2 and clang 12 will unroll the inner\nloop, so it doesn't matter:\n\nL51:\n mov rdx, QWORD PTR [rdi]\n mov rsi, QWORD PTR [rdi+8]\n lea rax, [rdx+rbx]\n lea rbp, [rsi+rbx]\n and rax, rbp\n and rax, r11\n cmp rax, r11\n jne .L66\n or rdx, rsi\n test rdx, r11\n jne .L66\n sub r8d, 16 ; refers to \"len\" in the caller\npg_utf8_verifystr()\n add rdi, 16\n cmp r8d, 15\n jg .L51\n\nI *think* these are the same instructions as from your version from\nsome time ago that handled two integers explicitly -- I rewrote it\nlike this to test different chunk sizes.\n\n(Aside on 32-byte strides: Four chunks was within the noise level of\ntwo chunks on the platform I tested. 
With 32 bytes, that increases the\nchance that a mixed input would have non-ascii and defeat this\noptimization, so should be significantly faster to make up for that.\nAlong those lines, in the future we could consider SSE2 (unrolled 2 x\n16 bytes) for this path. Since it's part of the spec for x86-64, we\nwouldn't need a runtime check -- just #ifdef it inline. And we could\npiggy-back on the CRC SSE4.2 configure test for intrinsic support, so\nthat would avoid adding a bunch of complexity.)\n\nThat said, I think your suggestions are better on code clarity\ngrounds. I'm on the fence about \"while(s < s_end)\", so I went with\n\"while (len > 0)\" because it matches the style in wchar.c.\n\n> Also would be good to mention what exactly the return value means. I.e\n> \"returns false if the input contains any bytes with the high-bit set, or\n> zeros\".\n\nDone.\n\n> > + /*\n> > + * Check if any high bits in the zero accumulator got cleared.\n> > + *\n> > + * XXX: As noted above, the zero check is only valid if the chunk had no\n> > + * high bits set. However, the compiler may perform these two checks in\n> > + * any order. That's okay because if any high bits were set, we would\n> > + * return false regardless, so invalid results from the zero check don't\n> > + * matter.\n> > + */\n> > + if (zero_cum != UINT64CONST(0x8080808080808080))\n> > + return false;\n\n> I don't understand the \"the compiler may perform these checks in any\n> order\" comment. We trust the compiler to do the right thing, and only\n> reorder things when it's safe to do so. What is special here, why is it\n> worth mentioning here?\n\nAh, that's a good question, and now that you mention it, the comment\nis silly. When looking at the assembly output a while back, I was a\nbit astonished that it didn't match my mental model of what was\nhappening, so I made this note. 
I've removed the whole XXX comment\nhere and expanded the first comment in the loop to:\n\n/*\n * Capture any zero bytes in this chunk.\n *\n * First, add 0x7f to each byte. This sets the high bit in each byte,\n * unless it was a zero. If any resulting high bits are zero, the\n * corresponding high bits in the zero accumulator will be cleared.\n *\n * If none of the bytes in the chunk had the high bit set, the max\n * value each byte can have after the addition is 0x7f + 0x7f = 0xfe,\n * and we don't need to worry about carrying over to the next byte. If\n * any input bytes did have the high bit set, it doesn't matter\n * because we check for those separately.\n */\n\n> > @@ -1721,7 +1777,7 @@ pg_gb18030_verifystr(const unsigned char *s, int len)\n> > return s - start;\n> > }\n> >\n> > -static int\n> > +static pg_noinline int\n> > pg_utf8_verifychar(const unsigned char *s, int len)\n> > {\n> > int l;\n>\n> Why force it to not be inlined?\n\nSince the only direct caller is now only using it for small inputs, I\nthought about saving space, but it's not enough to matter, so I'll go\nahead and leave it out. While at it, I removed the unnecessary\n\"inline\" declaration for utf8_advance(), since the compiler can do\nthat anyway.\n\n> > + * In a shift-based DFA, the input byte is an index into array of integers\n> > + * whose bit pattern encodes the state transitions. To compute the current\n> > + * state, we simply right-shift the integer by the current state and apply a\n> > + * mask. In this scheme, the address of the transition only depends on the\n> > + * input byte, so there is better pipelining.\n>\n> Should be \"To compute the *next* state, ...\", I think.\n\nFixed.\n\n> The way the state transition table works is pretty inscrutable. 
That's\n> understandable, because the values were found by an SMT solver, so I'm\n> not sure if anything can be done about it.\n\nDo you mean in general, or just the state values?\n\nLike any state machine, the code is simple, and the complexity is\nhidden in the data. Hopefully the first link I included in the comment\nis helpful.\n\nThe SMT solver was only needed to allow 32-bit (instead of 64-bit)\nentries in the transition table, so it's not strictly necessary. A\nlookup table that fits in 1kB is nice from a cache perspective,\nhowever.\n\nWith 64-bit, the state values are less weird-looking but they're still\njust arbitrary numbers. As long as ERR = 0 and the largest is at most\n9, it doesn't matter what they are, so I'm not sure it's much less\nmysterious. You can see the difference between 32-bit and 64-bit in\n[1].\n\n--\nIn addition to Heikki's. review points, I've made a couple small\nadditional changes from v24: I rewrote this part, so we don't need\nthese macros anymore:\n\n- if (!IS_HIGHBIT_SET(*s) ||\n- IS_UTF8_2B_LEAD(*s) ||\n- IS_UTF8_3B_LEAD(*s) ||\n- IS_UTF8_4B_LEAD(*s))\n+ if (!IS_HIGHBIT_SET(*s) || pg_utf_mblen(s) > 1)\n\nAnd I moved is_valid_ascii() to pg_wchar.h so it can be used\nelsewhere. I'm not sure there's a better place to put it. 
I tried\nusing this for text_position(), for which I'll start a new thread.\n\n[1] https://www.postgresql.org/message-id/attachment/125672/v22-addendum-32-bit-transitions.txt\n\n\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Dec 2021 11:39:37 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "I plan to push v25 early next week, unless there are further comments.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 Dec 2021 09:29:48 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" }, { "msg_contents": "On Fri, Dec 17, 2021 at 9:29 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> I plan to push v25 early next week, unless there are further comments.\n\nPushed, thanks everyone!\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Dec 2021 10:24:40 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up verifying UTF-8" } ]
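For reference, the two ingredients that were committed in this thread — a SWAR ASCII fast path and a shift-based DFA for multibyte sequences — can be sketched in Python. This is an illustrative model, not the committed C code: the state names, the 6-bit packing of the transition words, and the coarse fallback to the DFA are choices made here for readability (the C version packs transitions into 32-bit entries found with an SMT solver and resumes the fast path more finely), and NUL bytes are rejected to match PostgreSQL's text semantics.

```python
# Toy model of the committed validator: SWAR ASCII check plus a
# shift-based DFA.  State numbering and packing are illustrative only.
ERR, END, CS1, CS2, CS3, P3A, P3B, P4A, P4B = range(9)
OFFSET = [s * 6 for s in range(9)]   # 9 states x 6 bits = 54 bits per entry
MASK = 0x3F

# Accepted byte ranges per state: (lo, hi, next_state).  Anything not
# listed decays to ERR, because ERR sits at bit offset 0 and unset table
# bits read back as zero.  NUL is excluded on purpose, as in PostgreSQL.
TRANSITIONS = {
    END: [(0x01, 0x7F, END),             # ASCII
          (0xC2, 0xDF, CS1),             # 2-byte lead
          (0xE0, 0xE0, P3A),             # 3-byte lead, overlong-restricted
          (0xE1, 0xEC, CS2), (0xEE, 0xEF, CS2),
          (0xED, 0xED, P3B),             # 3-byte lead, surrogate-restricted
          (0xF0, 0xF0, P4A),             # 4-byte lead, overlong-restricted
          (0xF1, 0xF3, CS3),
          (0xF4, 0xF4, P4B)],            # 4-byte lead, capped at U+10FFFF
    CS1: [(0x80, 0xBF, END)],
    CS2: [(0x80, 0xBF, CS1)],
    CS3: [(0x80, 0xBF, CS2)],
    P3A: [(0xA0, 0xBF, CS1)],
    P3B: [(0x80, 0x9F, CS1)],
    P4A: [(0x90, 0xBF, CS2)],
    P4B: [(0x80, 0x8F, CS2)],
}

TABLE = [0] * 256
for state, ranges in TRANSITIONS.items():
    for lo, hi, nxt in ranges:
        for byte in range(lo, hi + 1):
            TABLE[byte] |= OFFSET[nxt] << OFFSET[state]

def dfa_verify(data: bytes) -> bool:
    """Walk the DFA: the next state is a shift and a mask, so the table
    load depends only on the input byte, not on the current state."""
    state = OFFSET[END]
    for b in data:
        state = (TABLE[b] >> state) & MASK
    return state == OFFSET[END]

def chunk_is_ascii(chunk: bytes) -> bool:
    """SWAR check on an 8-byte chunk: no zero bytes, no high bits set.
    Adding 0x7f to each byte sets its high bit unless the byte was zero;
    carries from high-bit bytes don't matter because such chunks are
    rejected by the high-bit test anyway."""
    word = int.from_bytes(chunk, "little")
    zeros = (word + 0x7F7F7F7F7F7F7F7F) & 0x8080808080808080
    return (word & 0x8080808080808080) == 0 and zeros == 0x8080808080808080

def utf8_verify(data: bytes) -> bool:
    i, n = 0, len(data)
    while i < n:
        if n - i >= 8 and chunk_is_ascii(data[i:i + 8]):
            i += 8                        # ASCII fast path
            continue
        return dfa_verify(data[i:])       # the C code is finer-grained here
    return True
```

Because ERR sits at bit offset 0 and unset table bits read back as zero, any undefined byte/state pair falls into ERR and stays there — the same absorbing-error property the thread relies on.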
[ { "msg_contents": "Hello,\r\n I recently looked at what it would take to make a running autovacuum pick-up a change to either cost_delay or cost_limit. Users frequently will have a conservative value set, and then wish to change it when autovacuum initiates a freeze on a relation. Most users end up finding out they are in ‘to prevent wraparound’ after it has happened, this means that if they want the vacuum to take advantage of more I/O, they need to stop and then restart the currently running vacuum (after reloading the GUCs).\r\n\r\n Initially, my goal was to determine feasibility for making this dynamic. I added debug code to vacuum.c:vacuum_delay_point(void) and found that changes to cost_delay and cost_limit are already processed by a running vacuum. There was a bug preventing the cost_delay or cost_limit from being configured to allow higher throughput however.\r\n\r\n The current behavior is for vacuum to limit the maximum throughput of currently running vacuum processes to the cost_limit that was set when the vacuum process began.\r\n\r\nI changed this (see attached) to allow the cost_limit to be re-calculated up to the maximum allowable (currently 10,000). This has the effect of allowing users to reload a configuration change and an in-progress vacuum can be ‘sped-up’ by setting either the cost_limit or cost_delay.\r\n\r\nThe problematic piece is:\r\n\r\ndiff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c\r\nindex c6ec657a93..d3c6b0d805 100644\r\n--- a/src/backend/postmaster/autovacuum.c\r\n+++ b/src/backend/postmaster/autovacuum.c\r\n@@ -1834,7 +1834,7 @@ autovac_balance_cost(void)\r\n * cost_limit to more than the base value.\r\n */\r\n worker->wi_cost_limit = Max(Min(limit,\r\n- worker->wi_cost_limit_base),\r\n+ MAXVACUUMCOSTLIMIT),\r\n 1);\r\n }\r\n\r\nWe limit the worker to the max cost_limit that was set at the beginning of the vacuum. 
I introduced the MAXVACUUMCOSTLIMIT constant (currently defined to 10000, 10000 is the currently max limit already defined) in miscadmin.h so that vacuum will now be able to adjust the cost_limit up to 10000 as the upper limit in a currently running vacuum.\r\n\r\n\r\nThe test’s that I’ve run show that the performance of an existing vacuum can be increased commensurate with the parameter change. Interestingly, autovac_balance_cost(void) is only updating the cost_limit, even if the cost_delay is modified. This is done correctly, it was just a surprise to see the behavior. A restart of autovacuum will pick up the new settings.\r\n\r\n\r\n2021-02-01 13:36:52.346 EST [37891] DEBUG: VACUUM Sleep: Delay: 20.000000, CostBalance: 207, CostLimit: 200, msec: 20.700000\r\n2021-02-01 13:36:52.346 EST [37891] CONTEXT: while scanning block 1824 of relation \"public.blah\"\r\n2021-02-01 13:36:52.362 EST [36460] LOG: received SIGHUP, reloading configuration files\r\n\r\n2021-02-01 13:36:52.364 EST [36460] LOG: parameter \"autovacuum_vacuum_cost_delay\" changed to \"2\"\r\n\\\r\n2021-02-01 13:36:52.365 EST [36463] DEBUG: checkpointer updated shared memory configuration values\r\n2021-02-01 13:36:52.366 EST [36466] DEBUG: autovac_balance_cost(pid=37891 db=13207, rel=16384, dobalance=yes cost_limit=2000, cost_limit_base=200, cost_delay=20)\r\n\r\n2021-02-01 13:36:52.366 EST [36467] DEBUG: received inquiry for database 0\r\n2021-02-01 13:36:52.366 EST [36467] DEBUG: writing stats file \"pg_stat_tmp/global.stat\"\r\n2021-02-01 13:36:52.366 EST [36467] DEBUG: writing stats file \"pg_stat_tmp/db_0.stat\"\r\n2021-02-01 13:36:52.388 EST [37891] DEBUG: VACUUM Sleep: Delay: 20.000000, CostBalance: 2001, CostLimit: 2000, msec: 20.010000", "msg_date": "Mon, 1 Feb 2021 18:46:11 +0000", "msg_from": "\"Mead, Scott\" <meads@amazon.com>", "msg_from_op": true, "msg_subject": "Running autovacuum dynamic update to cost_limit and delay" } ]
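The arithmetic behind those DEBUG lines is easy to model: vacuum_delay_point() accumulates per-page costs into a balance and, once the balance reaches the limit, naps for a time proportional to the overshoot. A rough Python model follows — a sketch of the behavior only, with a function name of our own choosing; the 4x cap on the nap mirrors the one in vacuum.c:

```python
def vacuum_sleep_msec(cost_delay_ms: float, cost_balance: int,
                      cost_limit: int) -> float:
    """Nap length chosen once the accumulated cost balance reaches the
    limit: proportional to the overshoot, capped at four times the base
    delay.  Zero or disabled delay means no throttling at all."""
    if cost_delay_ms <= 0 or cost_balance < cost_limit:
        return 0.0
    msec = cost_delay_ms * cost_balance / cost_limit
    return min(msec, cost_delay_ms * 4)

# The two DEBUG lines above:
#   20.0 * 207 / 200   -> 20.70 msec
#   20.0 * 2001 / 2000 -> 20.01 msec
```

Raising cost_limit (or lowering cost_delay) mid-run shrinks the nap per unit of work, which is why a reload can speed up an in-progress vacuum once the recalculated limit is allowed to exceed the original base.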
[ { "msg_contents": "\nAt long last I have just pushed Release 12 of the PostgreSQL Buildfarm\nclient. It's been about 16 months since the last release.\n\nApart from some minor fixes and code tidy up. this includes the\nfollowing more notable changes:\n\n * the TextUpgradeXVersion module is brought up to date with various\n core code changes\n * a module-neutral, animal-based save mechanism replaces the bespoke\n mechanism that was implemented in TestUpgradeXVersion. This will\n enable future development like testing openssl builds against NSS\n builds etc.\n * a standardized way of accumulating log files is implemented. This\n will enable some planned server side improvements.\n * requests are now signed using SHA256 nstead of SHA1.\n * there is a separate config section for settings to be used with\n valgrind. This makes it easier to turn valgrind on or off.\n * typedefs detection is improved for OSX\n * there is a setting for additional docs build targets, in addition to\n the standard html target.\n * use of the configure \"accache\" is made substantially more robust\n * timed out processes are more verbose\n\nDownloads are available at\n<https://github.com/PGBuildFarm/client-code/releases> and\n<https://buildfarm.postgresql.org/downloads>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 1 Feb 2021 17:20:09 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Announcing Release 12 of the PostgreSQL Buildfarm client" } ]
[ { "msg_contents": "I accidentally tried to populate a test case\nwhile auto_explain.log_min_duration was set to\nzero. auto_explain.log_nested_statements was also on.\n\ncreate or replace function gibberish(int) returns text language SQL as $_$\nselect left(string_agg(md5(random()::text),$$$$),$1) from\ngenerate_series(0,$1/32) $_$;\n\ncreate table j1 as select x, md5(random()::text) as t11, gibberish(1500) as\nt12 from generate_series(1,20e6) f(x);\n\nI got logorrhea of course, but I also got a memory leak into the SQL\nfunction context:\n\n TopPortalContext: 8192 total in 1 blocks; 7656 free (0 chunks); 536 used\n PortalContext: 16384 total in 5 blocks; 5328 free (1 chunks); 11056\nused: <unnamed>\n ExecutorState: 4810120 total in 13 blocks; 4167160 free (74922\nchunks); 642960 used\n SQL function: 411058232 total in 60 blocks; 4916568 free (4\nchunks); 406141664 used: gibberish\n\nThe memory usage grew until OOM killer stepped in.\n\nCheers,\n\nJeff", "msg_date": "Mon, 1 Feb 2021 18:09:16 -0500", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "memory leak in auto_explain" }, { "msg_contents": "On Mon, Feb 1, 2021 at 6:09 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n\n>\n>\n> create or replace function gibberish(int) returns text language SQL as $_$\n> select left(string_agg(md5(random()::text),$$$$),$1) from\n> generate_series(0,$1/32) $_$;\n>\n> create table j1 as select x, md5(random()::text) as t11, gibberish(1500)\n> as t12 from generate_series(1,20e6) f(x);\n>\n\nI should have added, found it on HEAD, verified it also in 12.5.\n\nCheers,\n\nJeff\n\n>\n", "msg_date": "Mon, 1 Feb 2021 18:12:38 -0500", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "Re: 
memory leak in auto_explain" }, { "msg_contents": "On Tue, 02 Feb 2021 at 07:12, Jeff Janes <jeff.janes@gmail.com> wrote:\n> On Mon, Feb 1, 2021 at 6:09 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n>\n>>\n>>\n>> create or replace function gibberish(int) returns text language SQL as $_$\n>> select left(string_agg(md5(random()::text),$$$$),$1) from\n>> generate_series(0,$1/32) $_$;\n>>\n>> create table j1 as select x, md5(random()::text) as t11, gibberish(1500)\n>> as t12 from generate_series(1,20e6) f(x);\n>>\n>\n> I should have added, found it on HEAD, verified it also in 12.5.\n>\n\nHere's my analysis:\n1) In the explain_ExecutorEnd(), it will create a ExplainState on SQL function\nmemory context, which is a long-lived, cause the memory grow up.\n\n /*\n * Switch to context in which the fcache lives. This ensures that our\n * tuplestore etc will have sufficient lifetime. The sub-executor is\n * responsible for deleting per-tuple information. (XXX in the case of a\n * long-lived FmgrInfo, this policy represents more memory leakage, but\n * it's not entirely clear where to keep stuff instead.)\n */\n oldcontext = MemoryContextSwitchTo(fcache->fcontext);\n\n2) I try to call pfree() to release ExplainState memory, however, it does not\nmake sence, I do not know why this does not work? So I try to create it in\nqueryDesc->estate->es_query_cxt memory context like queryDesc->totaltime, and\nit works.\n\nAttached fix the memory leakage in auto_explain. Any thoughts?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Tue, 02 Feb 2021 17:31:23 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: memory leak in auto_explain" }, { "msg_contents": "japin <japinli@hotmail.com> writes:\n> Here's my analysis:\n> 1) In the explain_ExecutorEnd(), it will create a ExplainState on SQL function\n> memory context, which is a long-lived, cause the memory grow up.\n\nYeah, agreed. 
I think people looking at this have assumed that the\nExecutorEnd hook would automatically be running in the executor's\nper-query context, but that's not so; we haven't yet entered\nstandard_ExecutorEnd where the context switch is. The caller's\ncontext is likely to be much longer-lived than the executor's.\n\nI think we should put the switch one level further out than you have\nit here, just to be sure that InstrEndLoop is covered too (that doesn't\nallocate memory, but auto_explain shouldn't assume that). Otherwise\nseems like a good fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Feb 2021 13:13:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: memory leak in auto_explain" }, { "msg_contents": "\nOn Wed, 03 Feb 2021 at 02:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> japin <japinli@hotmail.com> writes:\n>> Here's my analysis:\n>> 1) In the explain_ExecutorEnd(), it will create a ExplainState on SQL function\n>> memory context, which is a long-lived, cause the memory grow up.\n>\n> Yeah, agreed. I think people looking at this have assumed that the\n> ExecutorEnd hook would automatically be running in the executor's\n> per-query context, but that's not so; we haven't yet entered\n> standard_ExecutorEnd where the context switch is. The caller's\n> context is likely to be much longer-lived than the executor's.\n>\n> I think we should put the switch one level further out than you have\n> it here, just to be sure that InstrEndLoop is covered too (that doesn't\n> allocate memory, but auto_explain shouldn't assume that). Otherwise\n> seems like a good fix.\n>\n\nThanks for your review and clarification.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 03 Feb 2021 12:30:56 +0800", "msg_from": "japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: memory leak in auto_explain" } ]
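The lifetime mismatch resolved above can be shown in miniature with a toy region allocator — a stand-in invented for this illustration, not PostgreSQL's MemoryContext API: an ExplainState built in the caller's long-lived SQL-function context survives every nested statement, while one built in the executor's per-query context is freed at each ExecutorEnd.

```python
class ToyContext:
    """Minimal stand-in for a memory context: every allocation is owned
    by its context and released en masse when the context is reset."""
    def __init__(self, name: str):
        self.name = name
        self._chunks = []

    def alloc(self, nbytes: int) -> bytearray:
        chunk = bytearray(nbytes)
        self._chunks.append(chunk)
        return chunk

    def reset(self) -> None:
        self._chunks.clear()

    def bytes_used(self) -> int:
        return sum(len(c) for c in self._chunks)

def run_nested_statement(caller_ctx: ToyContext, leaky: bool) -> None:
    """One nested statement.  The ExecutorEnd hook builds an ExplainState:
    before the fix it allocated wherever the caller happened to be (the
    long-lived SQL-function context); the fix switches to the executor's
    per-query context first, so the allocation dies with the query."""
    per_query_ctx = ToyContext("ExecutorState")
    target = caller_ctx if leaky else per_query_ctx
    target.alloc(1000)             # ExplainState plus its output buffer
    per_query_ctx.reset()          # standard_ExecutorEnd frees per-query memory

fcontext = ToyContext("SQL function")   # lives as long as the function cache
for _ in range(1000):
    run_nested_statement(fcontext, leaky=True)
leaked = fcontext.bytes_used()          # grows with every nested statement

fcontext.reset()
for _ in range(1000):
    run_nested_statement(fcontext, leaky=False)
fixed = fcontext.bytes_used()           # stays at zero after the fix
```

The sizes here are arbitrary; the point is only that the leaky variant accumulates one allocation per nested statement in the long-lived context, which is the OOM pattern reported at the top of the thread.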
[ { "msg_contents": "PSA a trivial patch to correct what seems like a typo in the tablesync comment.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 2 Feb 2021 10:38:31 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Typo in tablesync comment" }, { "msg_contents": "On Tue, Feb 02, 2021 at 10:38:31AM +1100, Peter Smith wrote:\n> PSA a trivial patch to correct what seems like a typo in the tablesync comment.\n\n- * subscribed tables and their state. Some transient state during data\n- * synchronization is kept in shared memory. The states SYNCWAIT and\n+ * subscribed tables and their state. Some transient states during data\n+ * synchronization are kept in shared memory. The states SYNCWAIT and\n\nThis stuff refers to SUBREL_STATE_* in pg_subscription_rel.h, and FWIW\nI find confusing the term \"transient\" in this context as a state may\nlast for a rather long time, depending on the time it takes to\nsynchronize the relation, no? I am wondering if we could do better\nhere, say:\n\"The state tracking the progress of the relation synchronization is\nadditionally stored in shared memory, with SYNCWAIT and CATCHUP only\nappearing in memory.\"\n--\nMichael", "msg_date": "Tue, 2 Feb 2021 14:19:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Typo in tablesync comment" }, { "msg_contents": "On Tue, Feb 2, 2021, at 2:19 AM, Michael Paquier wrote:\n> On Tue, Feb 02, 2021 at 10:38:31AM +1100, Peter Smith wrote:\n> > PSA a trivial patch to correct what seems like a typo in the tablesync comment.\n> \n> - * subscribed tables and their state. Some transient state during data\n> - * synchronization is kept in shared memory. The states SYNCWAIT and\n> + * subscribed tables and their state. Some transient states during data\n> + * synchronization are kept in shared memory. 
The states SYNCWAIT and\n> \n> This stuff refers to SUBREL_STATE_* in pg_subscription_rel.h, and FWIW\n> I find confusing the term \"transient\" in this context as a state may\n> last for a rather long time, depending on the time it takes to\n> synchronize the relation, no? I am wondering if we could do better\n> here, say:\n> \"The state tracking the progress of the relation synchronization is\n> additionally stored in shared memory, with SYNCWAIT and CATCHUP only\n> appearing in memory.\"\nWFM.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Tue, 02 Feb 2021 08:27:54 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Typo in tablesync comment" }, { "msg_contents": "On Tue, Feb 2, 2021 at 10:49 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Feb 02, 2021 at 10:38:31AM +1100, Peter Smith wrote:\n> > PSA a trivial patch to correct what seems like a typo in the tablesync comment.\n>\n> - * subscribed tables and their state. Some transient state during data\n> - * synchronization is kept in shared memory. The states SYNCWAIT and\n> + * subscribed tables and their state. Some transient states during data\n> + * synchronization are kept in shared memory. The states SYNCWAIT and\n>\n> This stuff refers to SUBREL_STATE_* in pg_subscription_rel.h, and FWIW\n> I find confusing the term \"transient\" in this context as a state may\n> last for a rather long time, depending on the time it takes to\n> synchronize the relation, no?\n>\n\nThese in-memory states are used after the initial copy is done. So,\nthese are just for the time the tablesync worker is synced-up with\napply worker. 
In some cases, they could be for a longer period of time\nwhen apply worker is quite ahead of tablesync worker then we will be\nin the CATCHUP state for a long time but SYNCWAIT will still be for a\nshorter period of time.\n\n> I am wondering if we could do better\n> here, say:\n> \"The state tracking the progress of the relation synchronization is\n> additionally stored in shared memory, with SYNCWAIT and CATCHUP only\n> appearing in memory.\"\n>\n\nI don't mind changing to your proposed text but I think the current\nwording is also okay and seems clear to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 2 Feb 2021 19:23:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Typo in tablesync comment" }, { "msg_contents": "On Tue, Feb 02, 2021 at 07:23:37PM +0530, Amit Kapila wrote:\n> I don't mind changing to your proposed text but I think the current\n> wording is also okay and seems clear to me.\n\nReading that again, I still find the word \"transient\" to be misleading\nin this context. Any extra opinions?\n--\nMichael", "msg_date": "Wed, 3 Feb 2021 16:13:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Typo in tablesync comment" }, { "msg_contents": "On Wed, Feb 3, 2021 at 6:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Feb 02, 2021 at 07:23:37PM +0530, Amit Kapila wrote:\n> > I don't mind changing to your proposed text but I think the current\n> > wording is also okay and seems clear to me.\n>\n> Reading that again, I still find the word \"transient\" to be misleading\n> in this context. Any extra opinions?\n\nOTOH I thought \"additionally stored\" made it seem like those states\nwere in the catalog and \"additionally\" in shared memory.\n\nMaybe better to rewrite it more drastically?\n\ne.g\n-----\n * The catalog pg_subscription_rel is used to keep information about\n * subscribed tables and their state. 
The catalog holds all states\n * except SYNCWAIT and CATCHUP which are only in shared memory.\n-----\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 3 Feb 2021 18:52:56 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Typo in tablesync comment" }, { "msg_contents": "On Wed, Feb 03, 2021 at 06:52:56PM +1100, Peter Smith wrote:\n> OTOH I thought \"additionally stored\" made it seem like those states\n> were in the catalog and \"additionally\" in shared memory.\n\nGood point.\n\n> Maybe better to rewrite it more drastically?\n> \n> e.g\n> -----\n> * The catalog pg_subscription_rel is used to keep information about\n> * subscribed tables and their state. The catalog holds all states\n> * except SYNCWAIT and CATCHUP which are only in shared memory.\n> -----\n\nFine by me.\n--\nMichael", "msg_date": "Wed, 3 Feb 2021 17:04:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Typo in tablesync comment" }, { "msg_contents": "On Wed, Feb 3, 2021 at 1:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Feb 03, 2021 at 06:52:56PM +1100, Peter Smith wrote:\n>\n> > Maybe better to rewrite it more drastically?\n> >\n> > e.g\n> > -----\n> > * The catalog pg_subscription_rel is used to keep information about\n> > * subscribed tables and their state. 
The catalog holds all states\n> > * except SYNCWAIT and CATCHUP which are only in shared memory.\n> > -----\n>\n> Fine by me.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Feb 2021 18:25:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Typo in tablesync comment" }, { "msg_contents": "On Wed, Feb 3, 2021 at 11:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Feb 3, 2021 at 1:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Feb 03, 2021 at 06:52:56PM +1100, Peter Smith wrote:\n> >\n> > > Maybe better to rewrite it more drastically?\n> > >\n> > > e.g\n> > > -----\n> > > * The catalog pg_subscription_rel is used to keep information about\n> > > * subscribed tables and their state. The catalog holds all states\n> > > * except SYNCWAIT and CATCHUP which are only in shared memory.\n> > > -----\n> >\n> > Fine by me.\n> >\n>\n> +1.\n>\n\nOK. I attached an updated patch using this new text.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 4 Feb 2021 10:50:11 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Typo in tablesync comment" }, { "msg_contents": "On Thu, Feb 04, 2021 at 10:50:11AM +1100, Peter Smith wrote:\n> OK. I attached an updated patch using this new text.\n\nThanks, applied.\n--\nMichael", "msg_date": "Thu, 4 Feb 2021 16:08:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Typo in tablesync comment" } ]
[ { "msg_contents": "Hi,\n\nAt a customer we came across a curious plan (see attached testcase).\n\nGiven the testcase we see that the outer semi join tries to join the \nouter with the inner table id columns, even though the middle table id \ncolumn is also there. Is this expected behavior?\n\nThe reason I'm asking is two-fold:\n- the inner hash table now is bigger than I'd expect and has columns \nthat you would normally not select on.\n- the middle join now projects the inner as result, which is quite \nsurprising and seems invalid from a SQL standpoint.\n\nPlan:\n Finalize Aggregate\n Output: count(*)\n -> Gather\n Output: (PARTIAL count(*))\n Workers Planned: 4\n -> Partial Aggregate\n Output: PARTIAL count(*)\n -> Parallel Hash Semi Join\n Hash Cond: (_outer.id3 = _inner.id2)\n -> Parallel Seq Scan on public._outer\n Output: _outer.id3, _outer.extra1\n -> Parallel Hash\n Output: middle.id1, _inner.id2\n -> Parallel Hash Semi Join\n Output: middle.id1, _inner.id2\n Hash Cond: (middle.id1 = _inner.id2)\n -> Parallel Seq Scan on public.middle\n Output: middle.id1\n -> Parallel Hash\n Output: _inner.id2\n -> Parallel Seq Scan on \npublic._inner\n Output: _inner.id2\n\nKind regards,\nLuc\nSwarm64", "msg_date": "Tue, 2 Feb 2021 09:51:58 +0100", "msg_from": "Luc Vlaming <luc@swarm64.com>", "msg_from_op": true, "msg_subject": "join plan with unexpected var clauses" }, { "msg_contents": "Luc Vlaming <luc@swarm64.com> writes:\n> Given the testcase we see that the outer semi join tries to join the \n> outer with the inner table id columns, even though the middle table id \n> column is also there. Is this expected behavior?\n\nI don't see anything greatly wrong with it. 
The planner has concluded\nthat _inner.id2 and middle.id1 are part of an equivalence class, so it\ncan form the top-level join by equating _outer.id3 to either of them.\nAFAIR that choice is made at random --- there's certainly not any logic\nthat thinks about \"well, the intermediate join output could be a bit\nnarrower if we choose this one instead of that one\".\n\nI think \"made at random\" actually boils down to \"take the first usable\nmember of the equivalence class\". If I switch around the wording of\nthe first equality condition:\n\n ... select 1 from _inner where middle.id1 = _inner.id2\n\nthen I get a plan where the top join uses middle.id1. However,\nit's still propagating both middle.id1 and _inner.id2 up through\nthe bottom join, so that isn't buying anything efficiency-wise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Feb 2021 11:25:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: join plan with unexpected var clauses" } ]
[ { "msg_contents": "I had a bit of trouble parsing the error message \"every hash partition \nmodulus must be a factor of the next larger modulus\", so I went into the \ncode, added some comments and added errdetail messages for each case. I \nthink it's a bit clearer now.", "msg_date": "Tue, 2 Feb 2021 11:35:49 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Improve new hash partition bound check error messages" }, { "msg_contents": "On Tue, Feb 2, 2021 at 7:36 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> I had a bit of trouble parsing the error message \"every hash partition\n> modulus must be a factor of the next larger modulus\", so I went into the\n> code, added some comments and added errdetail messages for each case. I\n> think it's a bit clearer now.\n\nThat is definitely an improvement, thanks.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Feb 2021 21:16:32 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve new hash partition bound check error messages" }, { "msg_contents": "On 02/02/2021 12:35, Peter Eisentraut wrote:\n> I had a bit of trouble parsing the error message \"every hash partition\n> modulus must be a factor of the next larger modulus\", so I went into the\n> code, added some comments and added errdetail messages for each case. I\n> think it's a bit clearer now.\n\nYeah, that error message is hard to understand. 
This is an improvement, \nbut it still took me a while to understand it.\n\nLet's look at the example in the regression test:\n\n-- check partition bound syntax for the hash partition\nCREATE TABLE hash_parted (\n a int\n) PARTITION BY HASH (a);\nCREATE TABLE hpart_1 PARTITION OF hash_parted FOR VALUES WITH (MODULUS \n10, REMAINDER 0);\nCREATE TABLE hpart_2 PARTITION OF hash_parted FOR VALUES WITH (MODULUS \n50, REMAINDER 1);\nCREATE TABLE hpart_3 PARTITION OF hash_parted FOR VALUES WITH (MODULUS \n200, REMAINDER 2);\n\nWith this patch, you get this:\n\nCREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES WITH (MODULUS \n25, REMAINDER 3);\nERROR: every hash partition modulus must be a factor of the next larger \nmodulus\nDETAIL: The existing modulus 10 is not a factor of the new modulus 25.\n\nCREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES WITH (MODULUS \n150, REMAINDER 3);\nERROR: every hash partition modulus must be a factor of the next larger \nmodulus\nDETAIL: The new modulus 150 is not factor of the existing modulus 200.\n\n\nHow about this?\n\nCREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES WITH (MODULUS \n25, REMAINDER 3);\nERROR: every hash partition modulus must be a factor of the next larger \nmodulus\nDETAIL: 25 is not divisible by 10, the modulus of existing partition \n\"hpart_1\"\n\nCREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES WITH (MODULUS \n150, REMAINDER 3);\nERROR: every hash partition modulus must be a factor of the next larger \nmodulus\nDETAIL: 150 is not a factor of 200, the modulus of existing partition \n\"hpart_3\"\n\nCalling the existing partition by name seems good. 
And this phrasing \nputs the focus on the new modulus in both variants; presumably the \nexisting partition is OK and the problem is in the new definition.\n\n- Heikki\n\n\n", "msg_date": "Tue, 2 Feb 2021 14:26:33 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Improve new hash partition bound check error messages" }, { "msg_contents": "On 2021-02-02 13:26, Heikki Linnakangas wrote:\n> How about this?\n> \n> CREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES WITH (MODULUS\n> 25, REMAINDER 3);\n> ERROR: every hash partition modulus must be a factor of the next larger\n> modulus\n> DETAIL: 25 is not divisible by 10, the modulus of existing partition\n> \"hpart_1\"\n\nI don't know if we can easily get the name of the existing partition. \nI'll have to check that.\n\nI'm worried that this phrasing requires the user to understand that \n\"divisible by\" is related to \"factor of\", which is of course correct but \nintroduces yet more complexity into this.\n\nI'll play around with this a bit more.\n\n\n\n", "msg_date": "Wed, 3 Feb 2021 15:52:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Improve new hash partition bound check error messages" }, { "msg_contents": "On 2021-02-03 15:52, Peter Eisentraut wrote:\n> On 2021-02-02 13:26, Heikki Linnakangas wrote:\n>> How about this?\n>>\n>> CREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES WITH (MODULUS\n>> 25, REMAINDER 3);\n>> ERROR: every hash partition modulus must be a factor of the next larger\n>> modulus\n>> DETAIL: 25 is not divisible by 10, the modulus of existing partition\n>> \"hpart_1\"\n> \n> I don't know if we can easily get the name of the existing partition.\n> I'll have to check that.\n> \n> I'm worried that this phrasing requires the user to understand that\n> \"divisible by\" is related to \"factor of\", which is of course correct but\n> introduces yet more complexity 
into this.\n> \n> I'll play around with this a bit more.\n\nHere is a new patch that implements the suggestions.", "msg_date": "Mon, 15 Feb 2021 17:45:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Improve new hash partition bound check error messages" }, { "msg_contents": "On 15.02.21 17:45, Peter Eisentraut wrote:\n> On 2021-02-03 15:52, Peter Eisentraut wrote:\n>> On 2021-02-02 13:26, Heikki Linnakangas wrote:\n>>> How about this?\n>>>\n>>> CREATE TABLE fail_part PARTITION OF hash_parted FOR VALUES WITH (MODULUS\n>>> 25, REMAINDER 3);\n>>> ERROR:  every hash partition modulus must be a factor of the next larger\n>>> modulus\n>>> DETAIL:  25 is not divisible by 10, the modulus of existing partition\n>>> \"hpart_1\"\n>>\n>> I don't know if we can easily get the name of the existing partition.\n>> I'll have to check that.\n>>\n>> I'm worried that this phrasing requires the user to understand that\n>> \"divisible by\" is related to \"factor of\", which is of course correct but\n>> introduces yet more complexity into this.\n>>\n>> I'll play around with this a bit more.\n> \n> Here is a new patch that implements the suggestions.\n\ncommitted\n\n\n\n", "msg_date": "Mon, 22 Feb 2021 08:09:46 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Improve new hash partition bound check error messages" } ]
[ { "msg_contents": "A race with KeepFileRestoredFromArchive() can cause a restartpoint to fail, as\nseen once on the buildfarm[1]. The attached patch adds a test case; it\napplies atop the \"stop events\" patch[2]. We have two systems for adding\nlong-term pg_wal directory entries. KeepFileRestoredFromArchive() adds them\nduring archive recovery, while InstallXLogFileSegment() does so at all times.\nUnfortunately, InstallXLogFileSegment() happens throughout archive recovery,\nvia the checkpointer recycling segments and calling PreallocXlogFiles().\nMultiple processes can run InstallXLogFileSegment(), which uses\nControlFileLock to represent the authority to modify the directory listing of\npg_wal. KeepFileRestoredFromArchive() just assumes it controls pg_wal.\n\nRecycling and preallocation are wasteful during archive recovery, because\nKeepFileRestoredFromArchive() unlinks every entry in its path. I propose to\nfix the race by adding an XLogCtl flag indicating which regime currently owns\nthe right to add long-term pg_wal directory entries. In the archive recovery\nregime, the checkpointer will not preallocate and will unlink old segments\ninstead of recycling them (like wal_recycle=off). XLogFileInit() will fail.\n\nNotable alternatives:\n\n- Release ControlFileLock at the end of XLogFileInit(), not at the end of\n InstallXLogFileSegment(). Add ControlFileLock acquisition to\n KeepFileRestoredFromArchive(). This provides adequate mutual exclusion, but\n XLogFileInit() could return a file descriptor for an unlinked file. That's\n fine for PreallocXlogFiles(), but it feels wrong.\n\n- During restartpoints, never preallocate or recycle segments. (Just delete\n obsolete WAL.) By denying those benefits, this presumably makes streaming\n recovery less efficient.\n\n- Make KeepFileRestoredFromArchive() call XLogFileInit() to open a segment,\n then copy bytes. This is simple, but it multiplies I/O. That might be\n negligible on account of caching, or it might not be. 
A variant, incurring\n extra fsyncs, would be to use durable_rename() to replace the segment we get\n from XLogFileInit().\n\n- Make KeepFileRestoredFromArchive() rename without first unlinking. This\n avoids checkpoint failure, but a race could trigger noise from the LOG\n message in InstallXLogFileSegment -> durable_rename_excl.\n\nDoes anyone prefer some alternative? It's probably not worth back-patching\nanything for a restartpoint failure this rare, because most restartpoint\noutcomes are not user-visible.\n\nThanks,\nnm\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2020-10-05%2023%3A02%3A17\n[2] https://postgr.es/m/CAPpHfdtSEOHX8dSk9Qp%2BZ%2B%2Bi4BGQoffKip6JDWngEA%2Bg7Z-XmQ%40mail.gmail.com", "msg_date": "Tue, 2 Feb 2021 07:14:16 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "On Tue, Feb 02, 2021 at 07:14:16AM -0800, Noah Misch wrote:\n> Recycling and preallocation are wasteful during archive recovery, because\n> KeepFileRestoredFromArchive() unlinks every entry in its path. I propose to\n> fix the race by adding an XLogCtl flag indicating which regime currently owns\n> the right to add long-term pg_wal directory entries. In the archive recovery\n> regime, the checkpointer will not preallocate and will unlink old segments\n> instead of recycling them (like wal_recycle=off). XLogFileInit() will fail.\n\nHere's the implementation. Patches 1-4 suffice to stop the user-visible\nERROR. Patch 5 avoids a spurious LOG-level message and wasted filesystem\nwrites, and it provides some future-proofing.\n\nI was tempted to (but did not) just remove preallocation. Creating one file\nper checkpoint seems tiny relative to the max_wal_size=1GB default, so I\nexpect it's hard to isolate any benefit. Under the old checkpoint_segments=3\ndefault, a preallocated segment covered a respectable third of the next\ncheckpoint. 
Before commit 63653f7 (2002), preallocation created more files.", "msg_date": "Sat, 19 Jun 2021 13:39:18 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "Hi Noah,\n\nOn 6/19/21 16:39, Noah Misch wrote:\n> On Tue, Feb 02, 2021 at 07:14:16AM -0800, Noah Misch wrote:\n>> Recycling and preallocation are wasteful during archive recovery, because\n>> KeepFileRestoredFromArchive() unlinks every entry in its path. I propose to\n>> fix the race by adding an XLogCtl flag indicating which regime currently owns\n>> the right to add long-term pg_wal directory entries. In the archive recovery\n>> regime, the checkpointer will not preallocate and will unlink old segments\n>> instead of recycling them (like wal_recycle=off). XLogFileInit() will fail.\n> \n> Here's the implementation. Patches 1-4 suffice to stop the user-visible\n> ERROR. Patch 5 avoids a spurious LOG-level message and wasted filesystem\n> writes, and it provides some future-proofing.\n> \n> I was tempted to (but did not) just remove preallocation. Creating one file\n> per checkpoint seems tiny relative to the max_wal_size=1GB default, so I\n> expect it's hard to isolate any benefit. Under the old checkpoint_segments=3\n> default, a preallocated segment covered a respectable third of the next\n> checkpoint. 
Before commit 63653f7 (2002), preallocation created more files.\n\nThis also seems like it would fix the link issues we are seeing in [1].\n\nI wonder if that would make it worth a back patch?\n\n[1] \nhttps://www.postgresql.org/message-id/flat/CAKw-smBhLOGtRJTC5c%3DqKTPz8gz5%2BWPoVAXrHB6mY-1U4_N7-w%40mail.gmail.com\n\n\n", "msg_date": "Tue, 26 Jul 2022 07:21:29 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "On Tue, Jul 26, 2022 at 07:21:29AM -0400, David Steele wrote:\n> On 6/19/21 16:39, Noah Misch wrote:\n> >On Tue, Feb 02, 2021 at 07:14:16AM -0800, Noah Misch wrote:\n> >>Recycling and preallocation are wasteful during archive recovery, because\n> >>KeepFileRestoredFromArchive() unlinks every entry in its path. I propose to\n> >>fix the race by adding an XLogCtl flag indicating which regime currently owns\n> >>the right to add long-term pg_wal directory entries. In the archive recovery\n> >>regime, the checkpointer will not preallocate and will unlink old segments\n> >>instead of recycling them (like wal_recycle=off). XLogFileInit() will fail.\n> >\n> >Here's the implementation. Patches 1-4 suffice to stop the user-visible\n> >ERROR. Patch 5 avoids a spurious LOG-level message and wasted filesystem\n> >writes, and it provides some future-proofing.\n> >\n> >I was tempted to (but did not) just remove preallocation. Creating one file\n> >per checkpoint seems tiny relative to the max_wal_size=1GB default, so I\n> >expect it's hard to isolate any benefit. Under the old checkpoint_segments=3\n> >default, a preallocated segment covered a respectable third of the next\n> >checkpoint. Before commit 63653f7 (2002), preallocation created more files.\n> \n> This also seems like it would fix the link issues we are seeing in [1].\n> \n> I wonder if that would make it worth a back patch?\n\nPerhaps. 
It's sad to have multiple people deep-diving into something fixed on\nHEAD. On the other hand, I'm not eager to spend risk-of-backpatch points on\nthis. One alternative would be adding an errhint like \"This is known to\nhappen occasionally during archive recovery, where it is harmless.\" That has\nan unpolished look, but it's low-risk and may avoid deep-dive efforts.\n\n> [1] https://www.postgresql.org/message-id/flat/CAKw-smBhLOGtRJTC5c%3DqKTPz8gz5%2BWPoVAXrHB6mY-1U4_N7-w%40mail.gmail.com\n\n\n", "msg_date": "Sat, 30 Jul 2022 23:17:47 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "On 7/31/22 02:17, Noah Misch wrote:\n> On Tue, Jul 26, 2022 at 07:21:29AM -0400, David Steele wrote:\n>> On 6/19/21 16:39, Noah Misch wrote:\n>>> On Tue, Feb 02, 2021 at 07:14:16AM -0800, Noah Misch wrote:\n>>>> Recycling and preallocation are wasteful during archive recovery, because\n>>>> KeepFileRestoredFromArchive() unlinks every entry in its path. I propose to\n>>>> fix the race by adding an XLogCtl flag indicating which regime currently owns\n>>>> the right to add long-term pg_wal directory entries. In the archive recovery\n>>>> regime, the checkpointer will not preallocate and will unlink old segments\n>>>> instead of recycling them (like wal_recycle=off). XLogFileInit() will fail.\n>>>\n>>> Here's the implementation. Patches 1-4 suffice to stop the user-visible\n>>> ERROR. Patch 5 avoids a spurious LOG-level message and wasted filesystem\n>>> writes, and it provides some future-proofing.\n>>>\n>>> I was tempted to (but did not) just remove preallocation. Creating one file\n>>> per checkpoint seems tiny relative to the max_wal_size=1GB default, so I\n>>> expect it's hard to isolate any benefit. Under the old checkpoint_segments=3\n>>> default, a preallocated segment covered a respectable third of the next\n>>> checkpoint. 
Before commit 63653f7 (2002), preallocation created more files.\n>>\n>> This also seems like it would fix the link issues we are seeing in [1].\n>>\n>> I wonder if that would make it worth a back patch?\n> \n> Perhaps. It's sad to have multiple people deep-diving into something fixed on\n> HEAD. On the other hand, I'm not eager to spend risk-of-backpatch points on\n> this. One alternative would be adding an errhint like \"This is known to\n> happen occasionally during archive recovery, where it is harmless.\" That has\n> an unpolished look, but it's low-risk and may avoid deep-dive efforts.\n\nI think in this case a HINT might be sufficient to at least keep people \nfrom wasting time tracking down a problem that has already been fixed.\n\nHowever, there is another issue [1] that might argue for a back patch if \nthis patch (as I believe) would fix the issue.\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/CAHJZqBDxWfcd53jm0bFttuqpK3jV2YKWx%3D4W7KxNB4zzt%2B%2BqFg%40mail.gmail.com\n\n\n", "msg_date": "Tue, 2 Aug 2022 10:14:22 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "On Tue, Aug 02, 2022 at 10:14:22AM -0400, David Steele wrote:\n> On 7/31/22 02:17, Noah Misch wrote:\n> >On Tue, Jul 26, 2022 at 07:21:29AM -0400, David Steele wrote:\n> >>On 6/19/21 16:39, Noah Misch wrote:\n> >>>On Tue, Feb 02, 2021 at 07:14:16AM -0800, Noah Misch wrote:\n> >>>>Recycling and preallocation are wasteful during archive recovery, because\n> >>>>KeepFileRestoredFromArchive() unlinks every entry in its path. I propose to\n> >>>>fix the race by adding an XLogCtl flag indicating which regime currently owns\n> >>>>the right to add long-term pg_wal directory entries. In the archive recovery\n> >>>>regime, the checkpointer will not preallocate and will unlink old segments\n> >>>>instead of recycling them (like wal_recycle=off). 
XLogFileInit() will fail.\n> >>>\n> >>>Here's the implementation. Patches 1-4 suffice to stop the user-visible\n> >>>ERROR. Patch 5 avoids a spurious LOG-level message and wasted filesystem\n> >>>writes, and it provides some future-proofing.\n> >>>\n> >>>I was tempted to (but did not) just remove preallocation. Creating one file\n> >>>per checkpoint seems tiny relative to the max_wal_size=1GB default, so I\n> >>>expect it's hard to isolate any benefit. Under the old checkpoint_segments=3\n> >>>default, a preallocated segment covered a respectable third of the next\n> >>>checkpoint. Before commit 63653f7 (2002), preallocation created more files.\n> >>\n> >>This also seems like it would fix the link issues we are seeing in [1].\n> >>\n> >>I wonder if that would make it worth a back patch?\n> >\n> >Perhaps. It's sad to have multiple people deep-diving into something fixed on\n> >HEAD. On the other hand, I'm not eager to spend risk-of-backpatch points on\n> >this. One alternative would be adding an errhint like \"This is known to\n> >happen occasionally during archive recovery, where it is harmless.\" That has\n> >an unpolished look, but it's low-risk and may avoid deep-dive efforts.\n> \n> I think in this case a HINT might be sufficient to at least keep people from\n> wasting time tracking down a problem that has already been fixed.\n> \n> However, there is another issue [1] that might argue for a back patch if\n> this patch (as I believe) would fix the issue.\n\n> [1] https://www.postgresql.org/message-id/CAHJZqBDxWfcd53jm0bFttuqpK3jV2YKWx%3D4W7KxNB4zzt%2B%2BqFg%40mail.gmail.com\n\nThat makes sense. Each iteration of the restartpoint recycle loop has a 1/N\nchance of failing. Recovery adds >N files between restartpoints. Hence, the\nWAL directory grows without bound. 
Is that roughly the theory in mind?\n\n\n", "msg_date": "Tue, 2 Aug 2022 07:37:27 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "\n\nOn 8/2/22 10:37, Noah Misch wrote:\n> On Tue, Aug 02, 2022 at 10:14:22AM -0400, David Steele wrote:\n>> On 7/31/22 02:17, Noah Misch wrote:\n>>> On Tue, Jul 26, 2022 at 07:21:29AM -0400, David Steele wrote:\n>>>> On 6/19/21 16:39, Noah Misch wrote:\n>>>>> On Tue, Feb 02, 2021 at 07:14:16AM -0800, Noah Misch wrote:\n>>>>>> Recycling and preallocation are wasteful during archive recovery, because\n>>>>>> KeepFileRestoredFromArchive() unlinks every entry in its path. I propose to\n>>>>>> fix the race by adding an XLogCtl flag indicating which regime currently owns\n>>>>>> the right to add long-term pg_wal directory entries. In the archive recovery\n>>>>>> regime, the checkpointer will not preallocate and will unlink old segments\n>>>>>> instead of recycling them (like wal_recycle=off). XLogFileInit() will fail.\n>>>>>\n>>>>> Here's the implementation. Patches 1-4 suffice to stop the user-visible\n>>>>> ERROR. Patch 5 avoids a spurious LOG-level message and wasted filesystem\n>>>>> writes, and it provides some future-proofing.\n>>>>>\n>>>>> I was tempted to (but did not) just remove preallocation. Creating one file\n>>>>> per checkpoint seems tiny relative to the max_wal_size=1GB default, so I\n>>>>> expect it's hard to isolate any benefit. Under the old checkpoint_segments=3\n>>>>> default, a preallocated segment covered a respectable third of the next\n>>>>> checkpoint. Before commit 63653f7 (2002), preallocation created more files.\n>>>>\n>>>> This also seems like it would fix the link issues we are seeing in [1].\n>>>>\n>>>> I wonder if that would make it worth a back patch?\n>>>\n>>> Perhaps. It's sad to have multiple people deep-diving into something fixed on\n>>> HEAD. 
On the other hand, I'm not eager to spend risk-of-backpatch points on\n>>> this. One alternative would be adding an errhint like \"This is known to\n>>> happen occasionally during archive recovery, where it is harmless.\" That has\n>>> an unpolished look, but it's low-risk and may avoid deep-dive efforts.\n>>\n>> I think in this case a HINT might be sufficient to at least keep people from\n>> wasting time tracking down a problem that has already been fixed.\n>>\n>> However, there is another issue [1] that might argue for a back patch if\n>> this patch (as I believe) would fix the issue.\n> \n>> [1] https://www.postgresql.org/message-id/CAHJZqBDxWfcd53jm0bFttuqpK3jV2YKWx%3D4W7KxNB4zzt%2B%2BqFg%40mail.gmail.com\n> \n> That makes sense. Each iteration of the restartpoint recycle loop has a 1/N\n> chance of failing. Recovery adds >N files between restartpoints. Hence, the\n> WAL directory grows without bound. Is that roughly the theory in mind?\n\nYes, though you have formulated it better than I had in my mind.\n\nLet's see if Don can confirm that he is seeing the \"could not link file\" \nmessages.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 2 Aug 2022 11:01:19 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "On Tue, Aug 2, 2022 at 10:01 AM David Steele <david@pgmasters.net> wrote:\n\n>\n> > That makes sense. Each iteration of the restartpoint recycle loop has a\n> 1/N\n> > chance of failing. Recovery adds >N files between restartpoints.\n> Hence, the\n> > WAL directory grows without bound. 
Is that roughly the theory in mind?\n>\n> Yes, though you have formulated it better than I had in my mind.\n>\n> Let's see if Don can confirm that he is seeing the \"could not link file\"\n> messages.\n\n\nDuring my latest incident, there was only one occurrence:\n\ncould not link file “pg_wal/xlogtemp.18799\" to\n> “pg_wal/000000010000D45300000010”: File exists\n\n\nWAL restore/recovery seemed to continue on just fine then. And it would\ncontinue on until the pg_wal volume ran out of space unless I was manually\nrm'ing already-recovered WAL files from the side.\n\n-- \nDon Seiler\nwww.seiler.us\n", "msg_date": "Tue, 2 Aug 2022 16:03:42 -0500", "msg_from": "Don Seiler <don@seiler.us>", "msg_from_op": false, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "At Tue, 2 Aug 2022 16:03:42 -0500, Don Seiler <don@seiler.us> wrote in \n> On Tue, Aug 2, 2022 at 10:01 AM David Steele <david@pgmasters.net> wrote:\n> \n> >\n> > > That makes sense. Each iteration of the restartpoint recycle loop has a\n> > 1/N\n> > > chance of failing. 
Recovery adds >N files between restartpoints.\n> > Hence, the\n> > > WAL directory grows without bound. Is that roughly the theory in mind?\n> >\n> > Yes, though you have formulated it better than I had in my mind.\n\nI'm not sure I understand it correctly, but isn't the cause of the\nissue in the other thread due to skipping many checkpoint records\nwithin the checkpoint_timeout? I remember that I proposed a GUC\nvariable to disable that checkpoint skipping. As another measure for\nthat issue, we could force replaying checkpoint if max_wal_size is\nalready filled up or known to be filled in the next checkpoint cycle.\nIf this is correct, this patch is irrelevant to the issue.\n\n> > Let's see if Don can confirm that he is seeing the \"could not link file\"\n> > messages.\n> \n> \n> During my latest incident, there was only one occurrence:\n> \n> could not link file “pg_wal/xlogtemp.18799\" to\n> > “pg_wal/000000010000D45300000010”: File exists\n\n(I noticed that the patch in the other thread is broken:()\n\nHmm. It seems like a race condition betwen StartupXLOG() and\nRemoveXlogFIle(). We need wider extent of ContolFileLock. Concretely\ntaking ControlFileLock before deciding the target xlog file name in\nRemoveXlogFile() seems to prevent this happening. (If this is correct\nthis is a live issue on the master branch.)\n\n> WAL restore/recovery seemed to continue on just fine then. 
And it would\n> continue on until the pg_wal volume ran out of space unless I was manually\n> rm'ing already-recovered WAL files from the side.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 03 Aug 2022 11:24:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "On Wed, Aug 03, 2022 at 11:24:17AM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 2 Aug 2022 16:03:42 -0500, Don Seiler <don@seiler.us> wrote in \n> > could not link file “pg_wal/xlogtemp.18799\" to\n> > > “pg_wal/000000010000D45300000010”: File exists\n\n> Hmm. It seems like a race condition betwen StartupXLOG() and\n> RemoveXlogFIle(). We need wider extent of ContolFileLock. Concretely\n> taking ControlFileLock before deciding the target xlog file name in\n> RemoveXlogFile() seems to prevent this happening. (If this is correct\n> this is a live issue on the master branch.)\n\nRemoveXlogFile() calls InstallXLogFileSegment() with find_free=true. The\nintent of find_free=true is to make it okay to pass a target xlog file that\nceases to be a good target. (InstallXLogFileSegment() searches for a good\ntarget while holding ControlFileLock.) Can you say more about how that proved\nto be insufficient?\n\n\n", "msg_date": "Wed, 3 Aug 2022 00:28:47 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "At Wed, 3 Aug 2022 00:28:47 -0700, Noah Misch <noah@leadboat.com> wrote in \n> On Wed, Aug 03, 2022 at 11:24:17AM +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 2 Aug 2022 16:03:42 -0500, Don Seiler <don@seiler.us> wrote in \n> > > could not link file “pg_wal/xlogtemp.18799\" to\n> > > > “pg_wal/000000010000D45300000010”: File exists\n> \n> > Hmm. 
It seems like a race condition betwen StartupXLOG() and\n> > RemoveXlogFIle(). We need wider extent of ContolFileLock. Concretely\n> > taking ControlFileLock before deciding the target xlog file name in\n> > RemoveXlogFile() seems to prevent this happening. (If this is correct\n> > this is a live issue on the master branch.)\n> \n> RemoveXlogFile() calls InstallXLogFileSegment() with find_free=true. The\n> intent of find_free=true is to make it okay to pass a target xlog file that\n> ceases to be a good target. (InstallXLogFileSegment() searches for a good\n> target while holding ControlFileLock.) Can you say more about how that proved\n> to be insufficient?\n\nUg.. No. I can't. I was confused by something. Sorry.\n\nPreallocXlogFiles() and checkpointer are mutually excluded by the same\nlock, too.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 03 Aug 2022 17:15:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "On Tue, Aug 02, 2022 at 07:37:27AM -0700, Noah Misch wrote:\n> On Tue, Aug 02, 2022 at 10:14:22AM -0400, David Steele wrote:\n> > On 7/31/22 02:17, Noah Misch wrote:\n> > >On Tue, Jul 26, 2022 at 07:21:29AM -0400, David Steele wrote:\n> > >>On 6/19/21 16:39, Noah Misch wrote:\n> > >>>On Tue, Feb 02, 2021 at 07:14:16AM -0800, Noah Misch wrote:\n> > >>>>Recycling and preallocation are wasteful during archive recovery, because\n> > >>>>KeepFileRestoredFromArchive() unlinks every entry in its path. I propose to\n> > >>>>fix the race by adding an XLogCtl flag indicating which regime currently owns\n> > >>>>the right to add long-term pg_wal directory entries. In the archive recovery\n> > >>>>regime, the checkpointer will not preallocate and will unlink old segments\n> > >>>>instead of recycling them (like wal_recycle=off). 
XLogFileInit() will fail.\n> > >>>\n> > >>>Here's the implementation. Patches 1-4 suffice to stop the user-visible\n> > >>>ERROR. Patch 5 avoids a spurious LOG-level message and wasted filesystem\n> > >>>writes, and it provides some future-proofing.\n> > >>>\n> > >>>I was tempted to (but did not) just remove preallocation. Creating one file\n> > >>>per checkpoint seems tiny relative to the max_wal_size=1GB default, so I\n> > >>>expect it's hard to isolate any benefit. Under the old checkpoint_segments=3\n> > >>>default, a preallocated segment covered a respectable third of the next\n> > >>>checkpoint. Before commit 63653f7 (2002), preallocation created more files.\n> > >>\n> > >>This also seems like it would fix the link issues we are seeing in [1].\n> > >>\n> > >>I wonder if that would make it worth a back patch?\n> > >\n> > >Perhaps. It's sad to have multiple people deep-diving into something fixed on\n> > >HEAD. On the other hand, I'm not eager to spend risk-of-backpatch points on\n> > >this. One alternative would be adding an errhint like \"This is known to\n> > >happen occasionally during archive recovery, where it is harmless.\" That has\n> > >an unpolished look, but it's low-risk and may avoid deep-dive efforts.\n> > \n> > I think in this case a HINT might be sufficient to at least keep people from\n> > wasting time tracking down a problem that has already been fixed.\n\nHere's a patch to add that HINT. I figure it's better to do this before next\nweek's minor releases. In the absence of objections, I'll push this around\n2022-08-05 14:00 UTC.\n\n> > However, there is another issue [1] that might argue for a back patch if\n> > this patch (as I believe) would fix the issue.\n> \n> > [1] https://www.postgresql.org/message-id/CAHJZqBDxWfcd53jm0bFttuqpK3jV2YKWx%3D4W7KxNB4zzt%2B%2BqFg%40mail.gmail.com\n> \n> That makes sense. Each iteration of the restartpoint recycle loop has a 1/N\n> chance of failing. Recovery adds >N files between restartpoints. 
Hence, the\n> WAL directory grows without bound. Is that roughly the theory in mind?\n\nOn further reflection, I don't expect it to happen that way. Each failure\nmessage is LOG-level, so the remaining recycles still happen. (The race\ncondition can yield an ERROR under PreallocXlogFiles(), but recycling is\nalready done at that point.)", "msg_date": "Wed, 3 Aug 2022 23:24:56 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "At Wed, 3 Aug 2022 23:24:56 -0700, Noah Misch <noah@leadboat.com> wrote in \n> > > I think in this case a HINT might be sufficient to at least keep people from\n> > > wasting time tracking down a problem that has already been fixed.\n> \n> Here's a patch to add that HINT. I figure it's better to do this before next\n> week's minor releases. In the absence of objections, I'll push this around\n> 2022-08-05 14:00 UTC.\n\n+1\n\n> > > However, there is another issue [1] that might argue for a back patch if\n> > > this patch (as I believe) would fix the issue.\n> > \n> > > [1] https://www.postgresql.org/message-id/CAHJZqBDxWfcd53jm0bFttuqpK3jV2YKWx%3D4W7KxNB4zzt%2B%2BqFg%40mail.gmail.com\n> > \n> > That makes sense. Each iteration of the restartpoint recycle loop has a 1/N\n> > chance of failing. Recovery adds >N files between restartpoints. Hence, the\n> > WAL directory grows without bound. Is that roughly the theory in mind?\n> \n> On further reflection, I don't expect it to happen that way. Each failure\n> message is LOG-level, so the remaining recycles still happen. 
(The race\n> condition can yield an ERROR under PreallocXlogFiles(), but recycling is\n> already done at that point.)\n\nI agree to this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 04 Aug 2022 17:06:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "On 8/4/22 04:06, Kyotaro Horiguchi wrote:\n> At Wed, 3 Aug 2022 23:24:56 -0700, Noah Misch <noah@leadboat.com> wrote in\n>>>> I think in this case a HINT might be sufficient to at least keep people from\n>>>> wasting time tracking down a problem that has already been fixed.\n>>\n>> Here's a patch to add that HINT. I figure it's better to do this before next\n>> week's minor releases. In the absence of objections, I'll push this around\n>> 2022-08-05 14:00 UTC.\n> \n> +1\n\nLooks good to me as well.\n\n>>>> However, there is another issue [1] that might argue for a back patch if\n>>>> this patch (as I believe) would fix the issue.\n>>>\n>>>> [1] https://www.postgresql.org/message-id/CAHJZqBDxWfcd53jm0bFttuqpK3jV2YKWx%3D4W7KxNB4zzt%2B%2BqFg%40mail.gmail.com\n>>>\n>>> That makes sense. Each iteration of the restartpoint recycle loop has a 1/N\n>>> chance of failing. Recovery adds >N files between restartpoints. Hence, the\n>>> WAL directory grows without bound. Is that roughly the theory in mind?\n>>\n>> On further reflection, I don't expect it to happen that way. Each failure\n>> message is LOG-level, so the remaining recycles still happen. (The race\n>> condition can yield an ERROR under PreallocXlogFiles(), but recycling is\n>> already done at that point.)\n> \n> I agree to this.\n\nHmmm, OK. We certainly have a fairly serious issue here, i.e. pg_wal \ngrowing without bound. 
Even if we are not sure what is causing it, how \nconfident are we that the patches applied to v15 would fix it?\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 4 Aug 2022 06:32:38 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" }, { "msg_contents": "On Thu, Aug 04, 2022 at 06:32:38AM -0400, David Steele wrote:\n> On 8/4/22 04:06, Kyotaro Horiguchi wrote:\n> >At Wed, 3 Aug 2022 23:24:56 -0700, Noah Misch <noah@leadboat.com> wrote in\n> >>>>I think in this case a HINT might be sufficient to at least keep people from\n> >>>>wasting time tracking down a problem that has already been fixed.\n> >>\n> >>Here's a patch to add that HINT. I figure it's better to do this before next\n> >>week's minor releases. In the absence of objections, I'll push this around\n> >>2022-08-05 14:00 UTC.\n> >\n> >+1\n> \n> Looks good to me as well.\n\nThanks for reviewing.\n\n> >>>>However, there is another issue [1] that might argue for a back patch if\n> >>>>this patch (as I believe) would fix the issue.\n> >>>\n> >>>>[1] https://www.postgresql.org/message-id/CAHJZqBDxWfcd53jm0bFttuqpK3jV2YKWx%3D4W7KxNB4zzt%2B%2BqFg%40mail.gmail.com\n> >>>\n> >>>That makes sense. Each iteration of the restartpoint recycle loop has a 1/N\n> >>>chance of failing. Recovery adds >N files between restartpoints. Hence, the\n> >>>WAL directory grows without bound. Is that roughly the theory in mind?\n> >>\n> >>On further reflection, I don't expect it to happen that way. Each failure\n> >>message is LOG-level, so the remaining recycles still happen. (The race\n> >>condition can yield an ERROR under PreallocXlogFiles(), but recycling is\n> >>already done at that point.)\n> >\n> >I agree to this.\n> \n> Hmmm, OK. We certainly have a fairly serious issue here, i.e. pg_wal growing\n> without bound. 
Even if we are not sure what is causing it, how confident are\n> we that the patches applied to v15 would fix it?\n\nI'll say 7%; certainly too low to stop investigating. The next things I'd want\nto see are:\n\n- select name, setting, source from pg_settings where setting <> boot_val;\n- log_checkpoints log entries, and other log entries having the same PID\n\nIf the theory about checkpoint skipping holds, there should be a period where\nthe volume of WAL replayed greatly exceeds max_wal_size, yet 0-1 restartpoints\nbegin and 0 restartpoints complete.\n\n\n", "msg_date": "Thu, 4 Aug 2022 07:21:10 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: Race between KeepFileRestoredFromArchive() and restartpoint" } ]
[ { "msg_contents": "Hi\n\nWhen I fixed one plpgsql_check issue, I found another plpgsql issue. Now we\nhave field nstatements that hold a number of plpgsql statements in\nfunction. Unfortunately I made an error when I wrote this functionality and\nfor FOR statements, this counter is incremented 2x. Higher number than a\nreal number is better than a lesser number, but it can be a source of\nproblems too (inside plpgsql_check I iterate over 0 .. nstatements stmtid,\nand due this bug I had a problem with missing statements).\n\nAttached patch is pretty simple:\n\nRegards\n\nPavel", "msg_date": "Tue, 2 Feb 2021 18:20:46 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "bugfix - plpgsql - statement counter is incremented 2x for FOR stmt" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> When I fixed one plpgsql_check issue, I found another plpgsql issue. Now we\n> have field nstatements that hold a number of plpgsql statements in\n> function. Unfortunately I made an error when I wrote this functionality and\n> for FOR statements, this counter is incremented 2x. Higher number than a\n> real number is better than a lesser number, but it can be a source of\n> problems too (inside plpgsql_check I iterate over 0 .. nstatements stmtid,\n> and due this bug I had a problem with missing statements).\n\n> Attached patch is pretty simple:\n\nRight, done.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Feb 2021 14:36:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bugfix - plpgsql - statement counter is incremented 2x for FOR\n stmt" }, { "msg_contents": "út 2. 2. 2021 v 20:36 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > When I fixed one plpgsql_check issue, I found another plpgsql issue. Now\n> we\n> > have field nstatements that hold a number of plpgsql statements in\n> > function. 
Unfortunately I made an error when I wrote this functionality\n> and\n> > for FOR statements, this counter is incremented 2x. Higher number than a\n> > real number is better than a lesser number, but it can be a source of\n> > problems too (inside plpgsql_check I iterate over 0 .. nstatements\n> stmtid,\n> > and due this bug I had a problem with missing statements).\n>\n> > Attached patch is pretty simple:\n>\n> Right, done.\n>\n\nThank you for commit\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n", "msg_date": "Tue, 2 Feb 2021 20:54:47 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: bugfix - plpgsql - statement counter is incremented 2x for FOR\n stmt" } ]
[ { "msg_contents": "Hi,\n\nCan we have a new function, say pg_postgres_pid(), to return\npostmaster PID similar to pg_backend_pid()? At times, it will be\ndifficult to use OS level commands to get the postmaster pid of a\nbackend to which it is connected. It's even worse if we have multiple\npostgres server instances running on the same system. I'm not sure\nwhether it's safe to expose postmaster pid this way, but it will be\nuseful at least for debugging purposes on say Windows or other\nnon-Linux platforms where it's a bit difficult to get process id.\nUsers can also look at the postmaster.pid file to figure out what's\nthe current postmaster pid, if not using OS level commands, but having\na SQL callable function makes life easier.\n\nThe function can look like this:\nDatum\npg_postgres_pid(PG_FUNCTION_ARGS)\n{\n PG_RETURN_INT32(PostmasterPid);\n}\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Feb 2021 11:42:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Can we have a new SQL callable function to get Postmaster PID?" }, { "msg_contents": "On Wed, Feb 3, 2021, at 3:12 AM, Bharath Rupireddy wrote:\n> Can we have a new function, say pg_postgres_pid(), to return\n> postmaster PID similar to pg_backend_pid()?\nIt is not that difficult to read the postmaster PID using existing functions.\n\npostgres=# SELECT (regexp_match(pg_read_file('postmaster.pid'), '\\d+'))[1];\nregexp_match\n--------------\n13496\n(1 row)\n\nWhile investigating an issue, you are probably interested in a backend PID or\none of the auxiliary processes. 
In both cases, it is easier to obtain the PIDs.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n", "msg_date": "Wed, 03 Feb 2021 11:24:40 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Can we have a new SQL callable function to get Postmaster PID?" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Can we have a new function, say pg_postgres_pid(), to return\n> postmaster PID similar to pg_backend_pid()?\n\nI'm disinclined to think that this is a good idea from a security\nperspective. Maybe if it's superuser-only it'd be ok (since a\nsuperuser would have other routes to discovering the value anyway).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Feb 2021 10:08:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we have a new SQL callable function to get Postmaster PID?" }, { "msg_contents": "On 2/3/21 7:12 AM, Bharath Rupireddy wrote:\n> Hi,\n> \n> Can we have a new function, say pg_postgres_pid(), to return\n> postmaster PID similar to pg_backend_pid()? At times, it will be\n> difficult to use OS level commands to get the postmaster pid of a\n> backend to which it is connected. It's even worse if we have multiple\n> postgres server instances running on the same system. 
I'm not sure\n> whether it's safe to expose postmaster pid this way, but it will be\n> useful at least for debugging purposes on say Windows or other\n> non-Linux platforms where it's a bit difficult to get process id.\n> Users can also look at the postmaster.pid file to figure out what's\n> the current postmaster pid, if not using OS level commands, but having\n> a SQL callable function makes life easier.\n> \n> The function can look like this:\n> Datum\n> pg_postgres_pid(PG_FUNCTION_ARGS)\n> {\n> PG_RETURN_INT32(PostmasterPid);\n> }\n> \n> Thoughts?\n> \n\nCurious question - why do you actually need PID of the postmaster? For\ndebugging, I'd say it's not quite necessary - you can just attach a\ndebugger to the backend and print the PostmasterPid directly. Or am I\nmissing something?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Feb 2021 22:09:47 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Can we have a new SQL callable function to get Postmaster PID?" }, { "msg_contents": "On 2/3/21 4:08 PM, Tom Lane wrote:\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>> Can we have a new function, say pg_postgres_pid(), to return\n>> postmaster PID similar to pg_backend_pid()?\n> \n> I'm disinclined to think that this is a good idea from a security\n> perspective. Maybe if it's superuser-only it'd be ok (since a\n> superuser would have other routes to discovering the value anyway).\n> \n\nIs the postmaster PID really sensitive? 
Users with OS access can just\nlist the processes, and for users without OS access / privileges it's\nmostly useless, no?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Feb 2021 22:13:11 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Can we have a new SQL callable function to get Postmaster PID?" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 2/3/21 4:08 PM, Tom Lane wrote:\n>> I'm disinclined to think that this is a good idea from a security\n>> perspective. Maybe if it's superuser-only it'd be ok (since a\n>> superuser would have other routes to discovering the value anyway).\n\n> Is the postmaster PID really sensitive? Users with OS access can just\n> list the processes, and for users without OS access / privileges it's\n> mostly useless, no?\n\nWe disallow ordinary users from finding out the data directory location,\neven though that should be equally useless to unprivileged users. The\npostmaster PID seems like the same sort of information. It does not\nseem like a non-administrator could have any but nefarious use for that\nvalue. (Admittedly, this argument is somewhat weakened by exposing\nchild processes' PIDs ... but you can't take down the whole installation\nby zapping a child process.)\n\nI'm basically in the same place you are in your other response: the\nquestion to ask is not \"why not allow this?\", but \"why SHOULD we allow\nthis?\"\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Feb 2021 16:57:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we have a new SQL callable function to get Postmaster PID?" 
}, { "msg_contents": "On Thu, Feb 4, 2021 at 2:39 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 2/3/21 7:12 AM, Bharath Rupireddy wrote:\n> > Hi,\n> >\n> > Can we have a new function, say pg_postgres_pid(), to return\n> > postmaster PID similar to pg_backend_pid()? At times, it will be\n>\n> Curious question - why do you actually need PID of the postmaster? For\n> debugging, I'd say it's not quite necessary - you can just attach a\n> debugger to the backend and print the PostmasterPid directly. Or am I\n> missing something?\n\nBut sometimes we may also have to debug postmaster code, on different\nplatforms maybe. I don't know how the postmaster pid from the user\nperspective will be useful in customer environments and I can't think\nof other usages of the pg_postgres_pid().\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 11:30:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we have a new SQL callable function to get Postmaster PID?" }, { "msg_contents": "On Thu, Feb 4, 2021 at 3:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> > On 2/3/21 4:08 PM, Tom Lane wrote:\n> >> I'm disinclined to think that this is a good idea from a security\n> >> perspective. Maybe if it's superuser-only it'd be ok (since a\n> >> superuser would have other routes to discovering the value anyway).\n>\n> > Is the postmaster PID really sensitive? Users with OS access can just\n> > list the processes, and for users without OS access / privileges it's\n> > mostly useless, no?\n>\n> We disallow ordinary users from finding out the data directory location,\n> even though that should be equally useless to unprivileged users. The\n> postmaster PID seems like the same sort of information. 
It does not\n> seem like a non-administrator could have any but nefarious use for that\n> value. (Admittedly, this argument is somewhat weakened by exposing\n> child processes' PIDs ... but you can't take down the whole installation\n> by zapping a child process.)\n>\n> I'm basically in the same place you are in your other response: the\n> question to ask is not \"why not allow this?\", but \"why SHOULD we allow\n> this?\"\n\nIf we still think that the new function pg_postgres_pid() is useful in\nsome ways to the users or developers, then we can have it as a\nsuperuser only function and error out for non-super users.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 11:33:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we have a new SQL callable function to get Postmaster PID?" }, { "msg_contents": "On Thu, Feb 04, 2021 at 11:30:09AM +0530, Bharath Rupireddy wrote:\n> But sometimes we may also have to debug postmaster code, on different\n> platforms maybe. I don't know how the postmaster pid from the user\n> perspective will be useful in customer environments and I can't think\n> of other usages of the pg_postgres_pid().\n\nI had the same question as Tomas in mind when reading this thread, and\nthe use case you are mentioning sounds limited to me. Please note\nthat you can already get this information by using pg_read_file() on\npostmaster.pid so I see no need for an extra function.\n--\nMichael", "msg_date": "Thu, 4 Feb 2021 15:51:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Can we have a new SQL callable function to get Postmaster PID?" } ]