[ { "msg_contents": "Hi,\n\nApologies if this has already been discussed someplace, but I couldn't\nfind a previous discussion. It seems to me that base backups are\nbroken in the face of a concurrent truncation that reduces the number\nof segments in a relation.\n\nSuppose we have a relation that is 1.5GB in size, so that we have two\nfiles 23456, which is 1GB, and 23456.1, which is 0.5GB. We'll back\nthose files up in whichever order the directory scan finds them.\nSuppose we back up 23456.1 first. Then the relation is truncated to\n0.5GB, so 23456.1 is removed and 23456 gets a lot shorter. Next we\nback up the file 23456. Now our backup contains files 23456 and\n23456.1, each 0.5GB. But this breaks the invariant in md.c:\n\n * On disk, a relation must consist of consecutively numbered segment\n * files in the pattern\n * -- Zero or more full segments of exactly RELSEG_SIZE blocks each\n * -- Exactly one partial segment of size 0 <= size <\nRELSEG_SIZE blocks\n * -- Optionally, any number of inactive segments of size 0 blocks.\n\nbasebackup.c's theory about relation truncation is that it doesn't\nreally matter because WAL replay will fix things up. But in this case,\nI don't think it will, because WAL replay relies on the above\ninvariant holding. As mdnblocks says:\n\n /*\n * If segment is exactly RELSEG_SIZE, advance to next one.\n */\n segno++;\n\nSo I think what's going to happen is we're not going to notice 23456.1\nwhen we recover the backup. It will just sit there as an orphaned file\nforever, unless we extend 23456 back to a full 1GB, at which point we\nmight abruptly start considering that file part of the relation again.\n\nAssuming I'm not wrong about all of this, the question arises: whose\nfault is this, and what to do about it? 
It seems to me that it's a bit\nhard to blame basebackup.c, because if you used pg_backup_start() and\npg_backup_stop() and copied the directory yourself, you'd have exactly\nthe same situation, and while we could (and perhaps should) teach\nbasebackup.c to do something smarter, it doesn't seem realistic to\nimpose complex constraints on the user's choice of file copy tool.\nFurthermore, I think that the problem could arise without performing a\nbackup at all: say that the server crashes on the OS level in\nmid-truncation, and the truncation of segment 0 reaches disk but the\nremoval of segment 1 does not.\n\nSo I think the problem is with md.c assuming that its invariant must\nhold on a cluster that's not guaranteed to be in a consistent state.\nBut mdnblocks() clearly can't try to open every segment up to whatever\nthe maximum theoretical possible segment number is every time it's\ninvoked, because that would be wicked expensive. An idea that occurs\nto me is to remove all segment files following the first partial\nsegment during startup, before we begin WAL replay. If that state\noccurs at startup, then either we have a scenario involving\ntruncation, like those above, or a scenario involving relation\nextension, where we added a new segment and that made it to disk but\nthe prior extension of the previous last segment file to maximum\nlength did not. But in that case, WAL replay should, I think, fix\nthings up. However, I'm not completely sure that there isn't some hole\nin this theory, and this way forward also doesn't sound particularly\ncheap. Nonetheless I don't have another idea right now.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 Apr 2023 09:42:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "base backup vs. 
concurrent truncation" }, { "msg_contents": "Hi,\n\nJust a quick observation:\n\n> basebackup.c's theory about relation truncation is that it doesn't\n> really matter because WAL replay will fix things up. But in this case,\n> I don't think it will, because WAL replay relies on the above\n> invariant holding.\n\nAssuming this is the case perhaps we can reduce the scenario and\nconsider this simpler one:\n\n1. The table is truncated\n2. The DBMS is killed before making a checkpoint\n3. We are in recovery and presumably see a pair of 0.5 Gb segments\n\nOr can't we?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 21 Apr 2023 17:50:34 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "Hi again,\n\n> Assuming this is the case perhaps we can reduce the scenario and\n> consider this simpler one:\n>\n> 1. The table is truncated\n> 2. The DBMS is killed before making a checkpoint\n> 3. We are in recovery and presumably see a pair of 0.5 Gb segments\n>\n> Or can't we?\n\nOh, I see. If the process will be killed this perhaps is not going to\nhappen. Whether this can happen if we pull the plug from the machine\nis probably a design implementation of the particular filesystem and\nwhether it's journaled.\n\nHm...\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 21 Apr 2023 17:56:46 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "On Fri, Apr 21, 2023 at 10:56 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> > Assuming this is the case perhaps we can reduce the scenario and\n> > consider this simpler one:\n> >\n> > 1. The table is truncated\n> > 2. The DBMS is killed before making a checkpoint\n> > 3. 
We are in recovery and presumably see a pair of 0.5 Gb segments\n> >\n> > Or can't we?\n>\n> Oh, I see. If the process will be killed this perhaps is not going to\n> happen. Whether this can happen if we pull the plug from the machine\n> is probably a design implementation of the particular filesystem and\n> whether it's journaled.\n\nRight. I mentioned that scenario in the original email:\n\n\"Furthermore, I think that the problem could arise without performing a\nbackup at all: say that the server crashes on the OS level in\nmid-truncation, and the truncation of segment 0 reaches disk but the\nremoval of segment 1 does not.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 Apr 2023 11:03:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "Hi,\n\n> > Oh, I see. If the process will be killed this perhaps is not going to\n> > happen. Whether this can happen if we pull the plug from the machine\n> > is probably a design implementation of the particular filesystem and\n> > whether it's journaled.\n>\n> Right. I mentioned that scenario in the original email:\n>\n> \"Furthermore, I think that the problem could arise without performing a\n> backup at all: say that the server crashes on the OS level in\n> mid-truncation, and the truncation of segment 0 reaches disk but the\n> removal of segment 1 does not.\"\n\nRight. I didn't fully understand this scenario at first.\n\nI tried to reproduce it to see what actually happens. 
Here is what I did.\n\n```\ncreate table truncateme(id integer, val varchar(1024));\nalter table truncateme set (autovacuum_enabled = off);\n\n-- takes ~30 seconds\ninsert into truncateme\n select id,\n (\n select string_agg(chr((33+random()*(126-33)) :: integer), '')\n from generate_series(1,1000)\n )\n from generate_series(1,2*1024*1024) as id;\n\ncheckpoint;\n\nselect relfilenode from pg_class where relname = 'truncateme';\n relfilenode\n-------------\n 32778\n```\n\nWe got 3 segments (and no VM fork since I disabled VACUUM):\n\n```\n$ cd ~/pginstall/data-master/base/16384/\n$ ls -lah 32778*\n-rw------- 1 eax staff 1.0G Apr 21 23:47 32778\n-rw------- 1 eax staff 1.0G Apr 21 23:48 32778.1\n-rw------- 1 eax staff 293M Apr 21 23:48 32778.2\n-rw------- 1 eax staff 608K Apr 21 23:48 32778_fsm\n```\n\nLet's backup the last segment:\n\n```\n$ cp 32778.2 ~/temp/32778.2_backup\n```\n\nTruncate the table:\n\n```\ndelete from truncateme where id > 1024*1024;\nvacuum truncateme;\n```\n\nAnd kill Postgres:\n\n```\n$ pkill -9 postgres\n```\n\n... before it finishes another checkpoint:\n\n```\nLOG: checkpoints are occurring too frequently (4 seconds apart)\nHINT: Consider increasing the configuration parameter \"max_wal_size\".\nLOG: checkpoint starting: wal\n[and there was no \"checkpoint complete\"]\n```\n\nNext:\n\n```\n$ ls -lah 32778*\n-rw------- 1 eax staff 1.0G Apr 21 23:50 32778\n-rw------- 1 eax staff 146M Apr 21 23:50 32778.1\n-rw------- 1 eax staff 0B Apr 21 23:50 32778.2\n-rw------- 1 eax staff 312K Apr 21 23:50 32778_fsm\n-rw------- 1 eax staff 40K Apr 21 23:50 32778_vm\n\n$ cp ~/temp/32778.2_backup ./32778.2\n```\n\nI believe this simulates the case when pg_basebackup did a checkpoint,\ncopied 32778.2 before the rest of the segments, Postgres did a\ntruncate, and pg_basebackup received the rest of the data including\nWAL. The WAL contains the record about the truncation, see\nRelationTruncate(). 
Just as an observation: we keep zero sized\nsegments instead of deleting them.\n\nSo if I start Postgres now I expect it to return to a consistent\nstate, ideally the same state it had before the crash in terms of the\nsegments.\n\nWhat I actually get is:\n\n```\nLOG: database system was interrupted; last known up at 2023-04-21 23:50:08 MSK\nLOG: database system was not properly shut down; automatic recovery in progress\nLOG: redo starts at 9/C4035B18\nLOG: invalid record length at 9/E8FCBF10: expected at least 24, got 0\nLOG: redo done at 9/E8FCBEB0 system usage: CPU: user: 1.58 s, system:\n0.96 s, elapsed: 2.61 s\nLOG: checkpoint starting: end-of-recovery immediate wait\nLOG: checkpoint complete: wrote 10 buffers (0.0%); 0 WAL file(s)\nadded, 0 removed, 36 recycled; write=0.005 s, sync=0.003 s,\ntotal=0.016 s; sync files=9, longest=0.002 s, average=0.001 s;\ndistance=605784 kB, estimate=605784 kB; lsn=9/E8FCBF10, redo\nlsn=9/E8FCBF10\nLOG: database system is ready to accept connections\n```\n\n... and:\n\n```\n-rw------- 1 eax staff 1.0G Apr 21 23:50 32778\n-rw------- 1 eax staff 146M Apr 21 23:53 32778.1\n-rw------- 1 eax staff 0B Apr 21 23:53 32778.2\n```\n\nNaturally the database is consistent too. So it looks like the\nrecovery protocol worked as expected after all, at least in the\nupcoming PG16 and this specific scenario.\n\nDid I miss anything? If not, perhaps we should just update the comments a bit?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sat, 22 Apr 2023 00:19:36 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "On Fri, Apr 21, 2023 at 5:19 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> Naturally the database is consistent too. So it looks like the\n> recovery protocol worked as expected after all, at least in the\n> upcoming PG16 and this specific scenario.\n>\n> Did I miss anything? 
If not, perhaps we should just update the comments a bit?\n\nI was confused about what was happening here but after trying to\nreproduce I think I figured it out. In my test, I saw the .1 file grow\nto a full 1GB and then the truncate happened afterward.\nUnsurprisingly, that works. I believe that what's happening here is\nthat the DELETE statement triggers FPIs for all of the blocks it\ntouches. Therefore, when the DELETE records are replayed, those blocks\nget written back to the relation, extending it. That gets it up to the\nrequired 1GB size after which the .2 file is visible to the smgr\nagain, so the truncate works fine. I think that to reproduce the\nscenario, you want the truncate to happen in its own checkpoint cycle.\n\nI also found out from Andres that he's complained about pretty much\nthe same problem just a couple of months ago:\n\nhttps://www.postgresql.org/message-id/20230223010147.32oir7sb66slqnjk@awork3.anarazel.de\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Apr 2023 11:37:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "Hi,\n\nOn 2023-04-21 09:42:57 -0400, Robert Haas wrote:\n> Apologies if this has already been discussed someplace, but I couldn't\n> find a previous discussion. It seems to me that base backups are\n> broken in the face of a concurrent truncation that reduces the number\n> of segments in a relation.\n\nI think\nhttps://www.postgresql.org/message-id/20230223010147.32oir7sb66slqnjk@awork3.anarazel.de\nand the commits + discussions referenced from there is relevant for the topic.\n\n\n> Suppose we have a relation that is 1.5GB in size, so that we have two\n> files 23456, which is 1GB, and 23456.1, which is 0.5GB. We'll back\n> those files up in whichever order the directory scan finds them.\n> Suppose we back up 23456.1 first. 
Then the relation is truncated to\n> 0.5GB, so 23456.1 is removed and 23456 gets a lot shorter. Next we\n> back up the file 23456. Now our backup contains files 23456 and\n> 23456.1, each 0.5GB. But this breaks the invariant in md.c:\n\n\n> * On disk, a relation must consist of consecutively numbered segment\n> * files in the pattern\n> * -- Zero or more full segments of exactly RELSEG_SIZE blocks each\n> * -- Exactly one partial segment of size 0 <= size <\n> RELSEG_SIZE blocks\n> * -- Optionally, any number of inactive segments of size 0 blocks.\n> \n> basebackup.c's theory about relation truncation is that it doesn't\n> really matter because WAL replay will fix things up. But in this case,\n> I don't think it will, because WAL replay relies on the above\n> invariant holding. As mdnblocks says:\n> \n> /*\n> * If segment is exactly RELSEG_SIZE, advance to next one.\n> */\n> segno++;\n> \n> So I think what's going to happen is we're not going to notice 23456.1\n> when we recover the backup. It will just sit there as an orphaned file\n> forever, unless we extend 23456 back to a full 1GB, at which point we\n> might abruptly start considering that file part of the relation again.\n\nOne important point is that I think 23456.1 at that point can contain\ndata. Which can lead to deleted rows reappearing, vacuum failing due to rows\nfrom before relfrozenxid existing, etc.\n\n\n> Assuming I'm not wrong about all of this, the question arises: whose\n> fault is this, and what to do about it? 
It seems to me that it's a bit\n> hard to blame basebackup.c, because if you used pg_backup_start() and\n> pg_backup_stop() and copied the directory yourself, you'd have exactly\n> the same situation, and while we could (and perhaps should) teach\n> basebackup.c to do something smarter, it doesn't seem realistic to\n> impose complex constraints on the user's choice of file copy tool.\n\nAgreed.\n\n\n> So I think the problem is with md.c assuming that its invariant must\n> hold on a cluster that's not guaranteed to be in a consistent state.\n> But mdnblocks() clearly can't try to open every segment up to whatever\n> the maximum theoretical possible segment number is every time it's\n> invoked, because that would be wicked expensive. An idea that occurs\n> to me is to remove all segment files following the first partial\n> segment during startup, before we begin WAL replay. If that state\n> occurs at startup, then either we have a scenario involving\n> truncation, like those above, or a scenario involving relation\n> extension, where we added a new segment and that made it to disk but\n> the prior extension of the previous last segment file to maximum\n> length did not. But in that case, WAL replay should, I think, fix\n> things up. However, I'm not completely sure that there isn't some hole\n> in this theory, and this way forward also doesn't sound particularly\n> cheap. Nonetheless I don't have another idea right now.\n\nWhat we've discussed somewhere in the past is to always truncate N+1 when\ncreating the first page in N. I.e. if we extend into 23456.1, we truncate\n23456.2 to 0 blocks. As far as I can tell, that'd solve this issue?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 24 Apr 2023 17:03:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: base backup vs. 
concurrent truncation" }, { "msg_contents": "On Mon, Apr 24, 2023 at 8:03 PM Andres Freund <andres@anarazel.de> wrote:\n> What we've discussed somewhere in the past is to always truncate N+1 when\n> creating the first page in N. I.e. if we extend into 23456.1, we truncate\n> 23456.2 to 0 blocks. As far as I can tell, that'd solve this issue?\n\nYeah, although leaving 23456.2 forever unless and until that happens\ndoesn't sound amazing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Apr 2023 11:42:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "Hi,\n\nOn 2023-04-25 11:42:43 -0400, Robert Haas wrote:\n> On Mon, Apr 24, 2023 at 8:03 PM Andres Freund <andres@anarazel.de> wrote:\n> > What we've discussed somewhere in the past is to always truncate N+1 when\n> > creating the first page in N. I.e. if we extend into 23456.1, we truncate\n> > 23456.2 to 0 blocks. As far as I can tell, that'd solve this issue?\n> \n> Yeah, although leaving 23456.2 forever unless and until that happens\n> doesn't sound amazing.\n\nIt isn't - but the alternatives aren't great either. It's not that easy to hit\nthis scenario, so I think something along these lines is more palatable than\nadding a pass through the entire data directory.\n\nI think eventually we'll have to make the WAL logging bulletproof enough\nagainst this to avoid the risk of it. I think that is possible.\n\nI suspect we would need to prevent checkpoints from happening in the wrong\nmoment, if we were to go down that route.\n\nI guess that eventually we'll need to polish the infrastructure for\ndetermining restartpoints so that delayChkptFlags doesn't actually prevent\ncheckpoints, just moves the restart to a safe LSN. 
Although I guess that\ntruncations aren't frequent enough (compared to transaction commits), for that\nto be required \"now\".\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Apr 2023 10:28:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "Hi,\n\n> I think that to reproduce the scenario, you want the truncate to happen in\n> its own checkpoint cycle.\n\nOK, let's try this again.\n\nIn order to effectively disable the checkpointer I added the following lines\nto postgresql.conf:\n\n```\ncheckpoint_timeout = 3600\nmax_wal_size = 100G\n```\n\nI'm also keeping an eye on `logfile` in order to make sure the system doesn't\ndo anything unexpected.\n\nThen:\n\n```\n-- Just double-checking\nshow checkpoint_timeout;\n checkpoint_timeout\n--------------------\n 1h\n\nshow max_wal_size;\n max_wal_size\n--------------\n 100GB\n\ncreate table truncateme(id integer, val varchar(1024));\nalter table truncateme set (autovacuum_enabled = off);\nselect relfilenode from pg_class where relname = 'truncateme';\n\n relfilenode\n-------------\n 16385\n\n-- takes ~30 seconds\ninsert into truncateme\n select id,\n (\n select string_agg(chr((33+random()*(126-33)) :: integer), '')\n from generate_series(1,1000)\n )\n from generate_series(1,2*1024*1024) as id;\n\ndelete from truncateme where id > 1024*1024;\n\nselect count(*) from truncateme;\n count\n---------\n 1048576\n\n-- Making a checkpoint as pg_basebackup would do.\n-- Also, making sure truncate will happen in its own checkpoint cycle.\ncheckpoint;\n```\n\nAgain I see 3 segments:\n\n```\n$ ls -lah 16385*\n-rw------- 1 eax eax 1.0G May 1 19:24 16385\n-rw------- 1 eax eax 1.0G May 1 19:27 16385.1\n-rw------- 1 eax eax 293M May 1 19:27 16385.2\n-rw------- 1 eax eax 608K May 1 19:24 16385_fsm\n```\n\nMaking a backup of .2 as if I'm pg_basebackup:\n\n```\ncp 16385.2 ~/temp/16385.2\n```\n\nTruncating the 
table:\n\n```\nvacuum truncateme;\n```\n\n... and killing postgres:\n\n```\n$ pkill -9 postgres\n\n```\n\nNow I see:\n\n```\n$ ls -lah 16385*\n-rw------- 1 eax eax 1.0G May 1 19:30 16385\n-rw------- 1 eax eax 147M May 1 19:31 16385.1\n-rw------- 1 eax eax 0 May 1 19:31 16385.2\n-rw------- 1 eax eax 312K May 1 19:31 16385_fsm\n-rw------- 1 eax eax 40K May 1 19:31 16385_vm\n$ cp ~/temp/16385.2 ./16385.2\n```\n\nStarting postgres:\n\n```\nLOG: starting PostgreSQL 16devel on x86_64-linux, compiled by\ngcc-11.3.0, 64-bit\nLOG: listening on IPv4 address \"0.0.0.0\", port 5432\nLOG: listening on IPv6 address \"::\", port 5432\nLOG: listening on Unix socket \"/tmp/.s.PGSQL.5432\"\nLOG: database system was interrupted; last known up at 2023-05-01 19:27:22 MSK\nLOG: database system was not properly shut down; automatic recovery in progress\nLOG: redo starts at 0/8AAB36B0\nLOG: invalid record length at 0/CE9BDE60: expected at least 24, got 0\nLOG: redo done at 0/CE9BDE28 system usage: CPU: user: 6.51 s, system:\n2.45 s, elapsed: 8.97 s\nLOG: checkpoint starting: end-of-recovery immediate wait\nLOG: checkpoint complete: wrote 10 buffers (0.0%); 0 WAL file(s)\nadded, 0 removed, 68 recycled; write=0.026 s, sync=1.207 s,\ntotal=1.769 s; sync files=10, longest=1.188 s, average=0.121 s;\ndistance=1113129 kB, estimate=1113129 kB; lsn=0/CE9BDE60, redo\nlsn=0/CE9BDE60\nLOG: database system is ready to accept connections\n```\n\n```\n$ ls -lah 16385*\n-rw------- 1 eax eax 1.0G May 1 19:33 16385\n-rw------- 1 eax eax 147M May 1 19:33 16385.1\n-rw------- 1 eax eax 0 May 1 19:33 16385.2\n-rw------- 1 eax eax 312K May 1 19:33 16385_fsm\n-rw------- 1 eax eax 40K May 1 19:33 16385_vm\n```\n\n```\nselect count(*) from truncateme;\n count\n---------\n 1048576\n ```\n\nSo I'm still unable to reproduce the described scenario, at least on PG16.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 1 May 2023 19:54:27 +0300", "msg_from": "Aleksander Alekseev 
<aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "On Mon, May 1, 2023 at 12:54 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> So I'm still unable to reproduce the described scenario, at least on PG16.\n\nWell, that proves that either (1) the scenario that I described is\nimpossible for some unknown reason or (2) something is wrong with your\ntest scenario. I bet it's (2), but if it's (1), it would be nice to\nknow what the reason is. One can't feel good about code that appears\non the surface to be broken even if one knows that some unknown\nmagical thing is preventing disaster.\n\nI find it pretty hard to accept that there's no problem at all here,\nespecially in view of the fact that Andres independently posted about\nthe same issue on another thread. It's pretty clear from looking at\nthe code that mdnblocks() can't open any segments past the first one\nthat isn't of the maximum possible size. It's also fairly clear that a\ncrash or a base backup can create such situations. And finally it's\npretty clear that having an old partial segment be rediscovered due to\nthe relation be re-extended would be quite bad. So how can there not\nbe a problem? I admit I haven't done the legwork to nail down a test\ncase where everything comes together just right to show user-visible\nbreakage, but your success in finding one where it doesn't is no proof\nof anything.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 May 2023 08:57:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: base backup vs. 
concurrent truncation" }, { "msg_contents": "Hi,\n\nOn 2023-05-08 08:57:08 -0400, Robert Haas wrote:\n> On Mon, May 1, 2023 at 12:54 PM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> > So I'm still unable to reproduce the described scenario, at least on PG16.\n> \n> Well, that proves that either (1) the scenario that I described is\n> impossible for some unknown reason or (2) something is wrong with your\n> test scenario. I bet it's (2), but if it's (1), it would be nice to\n> know what the reason is. One can't feel good about code that appears\n> on the surface to be broken even if one knows that some unknown\n> magical thing is preventing disaster.\n\nIt seems pretty easy to create disconnected segments. You don't even need a\nbasebackup for it.\n\n\nTo make it easier, I rebuilt with segsize_blocks=16. This isn't required, it\njust makes it a lot cheaper to play around. To noones surprise: I'm not a\npatient person...\n\n\nStarted server with autovacuum=off.\n\nDROP TABLE IF EXISTS large;\nCREATE TABLE large AS SELECT generate_series(1, 100000);\nSELECT current_setting('data_directory') || '/' || pg_relation_filepath('large');\n\nls -l /srv/dev/pgdev-dev/base/5/24585*\nshows lots of segments.\n\nattach gdb, set breakpoint on truncate.\n\nDROP TABLE large;\n\nbreakpoint will fire. Continue once.\n\nIn concurrent session, trigger checkpoint. Due to the checkpoint we'll not\nreplay any WAL record. And the checkpoint will unlink the first segment.\n\nKill the server.\n\nAfter crash recovery, you end up with all but the first segment still\npresent. As the first segment doesn't exist anymore, nothing prevents that oid\nfrom being recycled in the future. Once it is recycled and the first segment\ngrows large enough, the later segments will suddenly re-appear.\n\n\nIt's not quite so trivial to reproduce issues with partial truncations /\nconcurrent base backups. The problem is that it's hard to guarantee the\niteration order of the base backup process. 
You'd just need to write a manual\nbase backup script though.\n\nConsider a script mimicking the filesystem returning directory entries in\n\"reverse name order\". The recipe includes two sessions. One (BB) doing a base\nbackup, the other (DT) running VACUUM making the table shorter.\n\nBB: Copy <relfilenode>.2\nBB: Copy <relfilenode>.1\nDT: Truncate relation to < SEGSIZE\nBB: Copy <relfilenode>\n\n\nThe replay of the smgrtruncate record will determine the relation size to\nfigure out what segments to remove. Because <relfilenode> is < SEGSIZE it'll\nonly truncate <relfilenode>, not <relfilenode>.N. And boom, a disconnected\nsegment.\n\n(I'll post a separate email about an evolved proposal about fixing this set of\nissues)\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 May 2023 13:28:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "Hi,\n\nOn 2023-04-25 10:28:58 -0700, Andres Freund wrote:\n> On 2023-04-25 11:42:43 -0400, Robert Haas wrote:\n> > On Mon, Apr 24, 2023 at 8:03 PM Andres Freund <andres@anarazel.de> wrote:\n> > > What we've discussed somewhere in the past is to always truncate N+1 when\n> > > creating the first page in N. I.e. if we extend into 23456.1, we truncate\n> > > 23456.2 to 0 blocks. As far as I can tell, that'd solve this issue?\n> > \n> > Yeah, although leaving 23456.2 forever unless and until that happens\n> > doesn't sound amazing.\n> \n> It isn't - but the alternatives aren't great either. It's not that easy to hit\n> this scenario, so I think something along these lines is more palatable than\n> adding a pass through the entire data directory.\n\n> I think eventually we'll have to make the WAL logging bulletproof enough\n> against this to avoid the risk of it. 
I think that is possible.\n\n\nI think we should extend my proposal above with improved WAL logging.\n\nRight now the replay of XLOG_SMGR_TRUNCATE records uses the normal\nsmgrtruncate() path - which essentially relies on smgrnblocks() to determine\nthe relation size, which in turn iterates over the segments until it finds one\n< SEGSIZE.\n\nThat's fine during normal running, where we are consistent. But it's bogus\nwhen we're not consistent - in case of a concurrent truncation, the base\nbackup might have missed intermediate segments, while copying later segments.\n\nWe should fix this by including not just the \"target\" length in the WAL\nrecord, but also the \"current\" length. Then during WAL replay of such records\nwe'd not look at the files currently present, we'd just stupidly truncate all\nthe segments mentioned in the range.\n\n\nI think we ought to do the same for mdunlinkfork().\n\n\n\n> I suspect we would need to prevent checkpoints from happening in the wrong\n> moment, if we were to go down that route.\n> \n> I guess that eventually we'll need to polish the infrastructure for\n> determining restartpoints so that delayChkptFlags doesn't actually prevent\n> checkpoints, just moves the restart to a safe LSN. Although I guess that\n> truncations aren't frequent enough (compared to transaction commits), for that\n> to be required \"now\".\n> \n\nUnfortunately the current approach of smgrtruncate records is quite racy with\ncheckpoints. You can end up with a sequence of something like\n\n1) SMGRTRUNCATE record\n2) CHECKPOINT;\n3) Truncating of segment files\n\nif you crash anywhere in 3), we don't replay the SMGRTRUNCATE record. It's not\na great solution, but I suspect we'll just have to set delayChkptFlags during\ntruncations to prevent this.\n\n\nI also think that we need to fix the order in which mdunlink() operates. It\nseems *quite* bogus that we unlink in \"forward\" segment order, rather than in\nreverse order (like mdtruncate()). 
If we crash after unlinking segment.N,\nwe'll not redo unlinking the later segments after a crash. Nor in WAL\nreplay. While the improved WAL logging would address this as well, it still\nseems pointlessly risky to iterate forward, rather than backward.\n\n\nEven with those changes, I think we might still need something like the\n\"always truncate the next segment\" bit I described in my prior email though.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 May 2023 13:41:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "Hi Robert,\n\n> I admit I haven't done the legwork to nail down a test\n> case where everything comes together just right to show user-visible\n> breakage, but your success in finding one where it doesn't is no proof\n> of anything.\n\nRespectfully, what made you think this was my intention?\n\nQuite the opposite, personally I am inclined to think that the problem\ndoes exist. In order to fix it however we need a test that reliably\nreproduces it first. Otherwise there is no way to figure out whether\nthe fix was correct or not.\n\nWhat the experiment showed is that the test scenario you initially\ndescribed is probably the wrong one for reasons yet to be understood\nand we need to come up with a better one.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 9 May 2023 12:14:28 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "Including the pre-truncation length in the WAL record is the obviously\nsolid approach and none of the below is a good substitution for it.\nBut....\n\nOn Tue, 25 Apr 2023 at 13:30, Andres Freund <andres@anarazel.de> wrote:\n>\n> It isn't - but the alternatives aren't great either. 
It's not that easy to hit\n> this scenario, so I think something along these lines is more palatable than\n> adding a pass through the entire data directory.\n\nDoing one pass through the entire data directory on startup before\ndeciding the directory is consistent doesn't sound like a crazy idea.\nIt's pretty easy to imagine bugs in backup software that leave out\nfiles in the middle of tables -- some of us don't even have to\nimagine...\n\nSimilarly checking for a stray next segment whenever extending a file\nto maximum segment size seems like a reasonable thing to check for\ntoo.\n\nThese kinds of checks are the kind of paranoia that catches filesystem\nbugs, backup software bugs, cron jobs, etc that we don't even know to\nwatch for.\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 10 May 2023 17:02:51 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "On Tue, May 9, 2023 at 5:14 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> > I admit I haven't done the legwork to nail down a test\n> > case where everything comes together just right to show user-visible\n> > breakage, but your success in finding one where it doesn't is no proof\n> > of anything.\n>\n> Respectfully, what made you think this was my intention?\n\nHonestly I have no idea what your intention was and didn't mean to\njudge it. However, I don't think that troubleshooting the test case\nyou put together is the thing that I want to spend time on right now,\nand I hope that it will still be possible to make some progress on the\nunderlying issue despite that.\n\n> Quite the opposite, personally I am inclined to think that the problem\n> does exist. In order to fix it however we need a test that reliably\n> reproduces it first. 
Otherwise there is no way to figure out whether\n> the fix was correct or not.\n>\n> What the experiment showed is that the test scenario you initially\n> described is probably the wrong one for reasons yet to be understood\n> and we need to come up with a better one.\n\nHopefully what Andres posted will help in this regard.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 May 2023 12:18:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: base backup vs. concurrent truncation" }, { "msg_contents": "Greetings,\n\n* Greg Stark (stark@mit.edu) wrote:\n> Including the pre-truncation length in the wal record is the obviously\n> solid approach and I none of the below is a good substitution for it.\n\nI tend to agree with the items mentioned in Andres's recent email on\nthis thread too in terms of improving the WAL logging around this.\n\n> On Tue, 25 Apr 2023 at 13:30, Andres Freund <andres@anarazel.de> wrote:\n> > It isn't - but the alternatives aren't great either. It's not that easy to hit\n> > this scenario, so I think something along these lines is more palatable than\n> > adding a pass through the entire data directory.\n> \n> Doing one pass through the entire data directory on startup before\n> deciding the directory is consistent doesn't sound like a crazy idea.\n\nWe're already making a pass through the entire data directory on\ncrash-restart (and fsync'ing everything too), which includes when\nrestoring from backup. 
See src/backend/access/transam/xlog.c:5155\nExtending that to check for oddities like segments following a not-1GB\nsegment certainly seems like a good idea to me.\n\n> It's pretty easy to imagine bugs in backup software that leave out\n> files in the middle of tables -- some of us don't even have to\n> imagine...\n\nYup.\n\n> Similarly checking for a stray next segment whenever extending a file\n> to maximum segment size seems like a reasonable thing to check for\n> too.\n\nYeah, that definitely seems like a good idea. Extending a relation is\nalready expensive and we've taken steps to deal with that and so\ndetecting that the file we were expecting to create is already there\ncertainly seems like a good idea and I wouldn't expect (?) to add a lot\nof extra time in the normal case.\n\n> These kinds of checks are the kind of paranoia that catches filesystem\n> bugs, backup software bugs, cron jobs, etc that we don't even know to\n> watch for.\n\nAgreed, and would also help in cases where such a situation already\nexists out there somewhere and which no amount of new WAL records would\nmake go away..\n\nThanks,\n\nStephen", "msg_date": "Thu, 11 May 2023 14:48:12 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: base backup vs. concurrent truncation" } ]
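The failure mode this thread keeps returning to — mdnblocks() advancing past full segments and stopping at the first partial one, so that a stale higher-numbered segment file becomes invisible to recovery — can be modeled in a few lines. The following is an illustrative Python sketch, not md.c's actual code: the dict-based segment layout and both helper names are invented for the example, and RELSEG_SIZE assumes the default 8 kB block size and 1 GB segments.

```python
RELSEG_SIZE = 131072  # blocks per full segment: 1 GB at the default 8 kB block size

def visible_blocks(seg_sizes):
    """Count blocks the way mdnblocks() does: walk consecutively numbered
    segments and stop at the first one shorter than RELSEG_SIZE blocks."""
    total, segno = 0, 0
    while segno in seg_sizes:
        total += seg_sizes[segno]
        if seg_sizes[segno] < RELSEG_SIZE:
            break  # partial segment ends the scan
        segno += 1
    return total

def orphaned_segments(seg_sizes):
    """Segments lying beyond the first partial segment: files the scan
    above (and hence recovery) would silently ignore.  Inactive
    zero-length segments are permitted by the md.c invariant, so only
    nonzero-size stragglers are flagged."""
    segno = 0
    while seg_sizes.get(segno) == RELSEG_SIZE:
        segno += 1
    return sorted(s for s, size in seg_sizes.items() if s > segno and size > 0)
```

For the scenario that opened the thread — segment 0 re-copied after the truncation and a stale 0.5 GB segment 1 left in the backup — `orphaned_segments({0: RELSEG_SIZE // 2, 1: RELSEG_SIZE // 2})` flags segment 1, the file that would sit unnoticed until segment 0 grows back to a full 1 GB.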
[ { "msg_contents": "Hi,\r\n\r\nI recently noticed the following in the work_mem [1] documentation:\r\n\r\n“Note that for a complex query, several sort or hash operations might be running in parallel;”\r\n\r\nThe use of “parallel” here is misleading as this has nothing to do with parallel query, but\r\nrather several operations in a plan running simultaneously.\r\n\r\nThe use of parallel in this doc predates parallel query support, which explains the usage.\r\n\r\nI suggest a small doc fix:\r\n\r\n“Note that for a complex query, several sort or hash operations might be running simultaneously;”\r\n\r\nThis should also be backpatched to all supported versions docs.\r\n\r\nThoughts?\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n1. https://www.postgresql.org/docs/current/runtime-config-resource.html\r\n", "msg_date": "Fri, 21 Apr 2023 14:28:24 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Correct the documentation for work_mem" },
{ "msg_contents": "On 21.04.23 16:28, Imseih (AWS), Sami wrote:\n> I recently noticed the following in the work_mem [1] documentation:\n> \n> “Note that for a complex query, several sort or hash operations might be \n> running in parallel;”\n> \n> The use of “parallel” here is misleading as this has nothing to do with \n> parallel query, but\n> \n> rather several operations in a plan running simultaneously.\n> \n> The use of parallel in this doc predates parallel query support, which \n> explains the usage.\n> \n> I suggest a small doc fix:\n> \n> “Note that for a complex query, several sort or hash operations might be \n> running simultaneously;”\n\nHere is a discussion of these terms: \nhttps://takuti.me/note/parallel-vs-concurrent/\n\nI think \"concurrently\" is the correct word here.\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 18:59:57 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 21.04.23 16:28, Imseih (AWS), Sami wrote:\n>> I suggest a small doc fix:\n>> “Note that for a complex query, several sort or hash operations might be \n>> running simultaneously;”\n\n> Here is a discussion of these terms: \n> https://takuti.me/note/parallel-vs-concurrent/\n\n> I think \"concurrently\" is the correct word here.\n\nProbably, but it'd do little to remove the confusion Sami is on about,\nespecially since the next sentence uses \"concurrently\" to describe the\nother case. 
I think we need a more thorough rewording, perhaps like\n\n- Note that for a complex query, several sort or hash operations might be\n- running in parallel; each operation will generally be allowed\n+ Note that a complex query may include several sort or hash\n+ operations; each such operation will generally be allowed\n to use as much memory as this value specifies before it starts\n to write data into temporary files. Also, several running\n sessions could be doing such operations concurrently.\n\nI also find this wording a bit further down to be poor:\n\n Hash-based operations are generally more sensitive to memory\n availability than equivalent sort-based operations. The\n memory available for hash tables is computed by multiplying\n <varname>work_mem</varname> by\n <varname>hash_mem_multiplier</varname>. This makes it\n\nI think \"available\" is not le mot juste, and it's also unclear from\nthis whether we're speaking of the per-hash-table limit or some\n(nonexistent) overall limit. 
How about\n\n- memory available for hash tables is computed by multiplying\n+ memory limit for a hash table is computed by multiplying\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Apr 2023 13:15:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "On Fri, Apr 21, 2023 at 10:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > On 21.04.23 16:28, Imseih (AWS), Sami wrote:\n> >> I suggest a small doc fix:\n> >> “Note that for a complex query, several sort or hash operations might be\n> >> running simultaneously;”\n>\n> > Here is a discussion of these terms:\n> > https://takuti.me/note/parallel-vs-concurrent/\n>\n> > I think \"concurrently\" is the correct word here.\n>\n> Probably, but it'd do little to remove the confusion Sami is on about,\n\n+1.\n\nWhen discussing this internally, Sami's proposal was in fact to use\nthe word 'concurrently'. But given that when it comes to computers and\nprogramming, it's common for someone to not understand the intricate\ndifference between the two terms, we thought it's best to not use any\nof those, and instead use a word not usually associated with\nprogramming and algorithms.\n\nAside: Another pair of words I see regularly used interchangeably,\nwhen in fact they mean different things: precise vs. accurate.\n\n> especially since the next sentence uses \"concurrently\" to describe the\n> other case. 
I think we need a more thorough rewording, perhaps like\n>\n> - Note that for a complex query, several sort or hash operations might be\n> - running in parallel; each operation will generally be allowed\n> + Note that a complex query may include several sort or hash\n> + operations; each such operation will generally be allowed\n\nThis wording doesn't seem to bring out the fact that there could be\nmore than one work_mem consumer running (in-progress) at the same\ntime. The reader to could mistake it to mean hashes and sorts in a\ncomplex query may happen one after the other.\n\n+ Note that a complex query may include several sort and hash operations, and\n+ more than one of these operations may be in progress simultaneously at any\n+ given time; each such operation will generally be allowed\n\nI believe the phrase \"several sort _and_ hash\" better describes the\npossible composition of a complex query, than does \"several sort _or_\nhash\".\n\n> I also find this wording a bit further down to be poor:\n>\n> Hash-based operations are generally more sensitive to memory\n> availability than equivalent sort-based operations. The\n> memory available for hash tables is computed by multiplying\n> <varname>work_mem</varname> by\n> <varname>hash_mem_multiplier</varname>. This makes it\n>\n> I think \"available\" is not le mot juste, and it's also unclear from\n> this whether we're speaking of the per-hash-table limit or some\n> (nonexistent) overall limit. 
How about\n>\n> - memory available for hash tables is computed by multiplying\n> + memory limit for a hash table is computed by multiplying\n\n+1\n\nBest regards,\nGurjeet https://Gurje.et\nPostgres Contributors Team, http://aws.amazon.com\n\n\n", "msg_date": "Fri, 21 Apr 2023 10:39:45 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "> > especially since the next sentence uses \"concurrently\" to describe the\r\n> > other case. I think we need a more thorough rewording, perhaps like\r\n> >\r\n> > - Note that for a complex query, several sort or hash operations might be\r\n> > - running in parallel; each operation will generally be allowed\r\n> > + Note that a complex query may include several sort or hash\r\n> > + operations; each such operation will generally be allowed\r\n\r\n> This wording doesn't seem to bring out the fact that there could be\r\n> more than one work_mem consumer running (in-progress) at the same\r\n> time. \r\n\r\nDo you mean, more than one work_mem consumer running at the same\r\ntime for a given query? If so, that is precisely the point we need to convey\r\nin the docs.\r\n\r\ni.e. 
if I have 2 sorts in a query that can use up to 4MB each, at some point\r\nin the query execution, I can have 8MB of memory allocated.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n", "msg_date": "Sat, 22 Apr 2023 03:36:15 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "Based on the feedback, here is a v1 of the suggested doc changes.\r\n\r\nI modified Gurjeets suggestion slightly to make it clear that a specific\r\nquery execution could have operations simultaneously using up to \r\nwork_mem.\r\n\r\nI also added the small hash table memory limit clarification.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Mon, 24 Apr 2023 16:20:08 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "On Tue, 25 Apr 2023 at 04:20, Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> Based on the feedback, here is a v1 of the suggested doc changes.\n>\n> I modified Gurjeets suggestion slightly to make it clear that a specific\n> query execution could have operations simultaneously using up to\n> work_mem.\n\n> - Note that for a complex query, several sort or hash operations might be\n> - running in parallel; each operation will generally be allowed\n> + Note that a complex query may include several sort and hash operations,\n> + and more than one of these operations may be in progress simultaneously\n> + for a given query execution; each such operation will generally be allowed\n> to use as much memory as this value specifies before it starts\n> to write data into temporary files. Also, several running\n> sessions could be doing such operations concurrently.\n\nI'm wondering about adding \"and more than one of these operations may\nbe in progress simultaneously\". 
Are you talking about concurrent\nsessions running other queries which are using work_mem too? If so,\nisn't that already covered by the final sentence in the quoted text\nabove? if not, what is running simultaneously?\n\nI think Tom's suggestion looks fine. I'd maybe change \"sort or hash\"\nto \"sort and hash\" per the suggestion from Gurjeet above.\n\nDavid\n\n\n", "msg_date": "Thu, 13 Jul 2023 11:11:49 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nHello,\r\n\r\nI've reviewed and built the documentation for the updated patch. As it stands right now I think the documentation for this section is quite clear.\r\n\r\n> I'm wondering about adding \"and more than one of these operations may\r\n> be in progress simultaneously\". Are you talking about concurrent\r\n> sessions running other queries which are using work_mem too?\r\n\r\nThis appears to be referring to the \"sort and hash\" operations mentioned prior.\r\n\r\n> If so,\r\n> isn't that already covered by the final sentence in the quoted text\r\n> above? if not, what is running simultaneously?\r\n\r\nI believe the last sentence is referring to another session that is running its own sort and hash operations. 
So the first section you mention is describing how sort and hash operations can be in execution at the same time for a query, while the second refers to how sessions may overlap in their execution of sort and hash operations if I am understanding this correctly.\r\n\r\nI also agree that changing \"sort or hash\" to \"sort and hash\" is a better description.\r\n\r\nTristen", "msg_date": "Mon, 31 Jul 2023 20:44:21 +0000", "msg_from": "Tristen Raab <tristen.raab@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "Hi,\r\n\r\nSorry for the delay in response and thanks for the feedback!\r\n\r\n> I've reviewed and built the documentation for the updated patch. As it stands right now I think the documentation for this section is quite clear.\r\n\r\nSorry, I am not understanding. What is clear? The current documentation -or- the proposed documentation in the patch?\r\n\r\n>> I'm wondering about adding \"and more than one of these operations may\r\n>> be in progress simultaneously\". 
Are you talking about concurrent\r\n>> sessions running other queries which are using work_mem too?\r\n\r\n> This appears to be referring to the \"sort and hash\" operations mentioned prior.\r\n\r\nCorrect, this is not referring to multiple sessions, but a given execution could \r\nhave multiple operations that are each using up to work_mem simultaneously.\r\n\r\n> I also agree that changing \"sort or hash\" to \"sort and hash\" is a better description.\r\n\r\nThat is addressed in the last revision of the patch.\r\n\r\n- Note that for a complex query, several sort or hash operations might be\r\n- running in parallel; each operation will generally be allowed\r\n+ Note that a complex query may include several sort and hash operations,\r\n\r\nRegards,\r\n\r\nSami \r\n\r\n\r\n", "msg_date": "Tue, 1 Aug 2023 23:59:12 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "On Fri, Apr 21, 2023 at 01:15:01PM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > On 21.04.23 16:28, Imseih (AWS), Sami wrote:\n> >> I suggest a small doc fix:\n> >> “Note that for a complex query, several sort or hash operations might be \n> >> running simultaneously;”\n> \n> > Here is a discussion of these terms: \n> > https://takuti.me/note/parallel-vs-concurrent/\n> \n> > I think \"concurrently\" is the correct word here.\n> \n> Probably, but it'd do little to remove the confusion Sami is on about,\n> especially since the next sentence uses \"concurrently\" to describe the\n> other case. 
I think we need a more thorough rewording, perhaps like\n> \n> - Note that for a complex query, several sort or hash operations might be\n> - running in parallel; each operation will generally be allowed\n> + Note that a complex query may include several sort or hash\n> + operations; each such operation will generally be allowed\n> to use as much memory as this value specifies before it starts\n> to write data into temporary files. Also, several running\n> sessions could be doing such operations concurrently.\n> \n> I also find this wording a bit further down to be poor:\n> \n> Hash-based operations are generally more sensitive to memory\n> availability than equivalent sort-based operations. The\n> memory available for hash tables is computed by multiplying\n> <varname>work_mem</varname> by\n> <varname>hash_mem_multiplier</varname>. This makes it\n> \n> I think \"available\" is not le mot juste, and it's also unclear from\n> this whether we're speaking of the per-hash-table limit or some\n> (nonexistent) overall limit. How about\n> \n> - memory available for hash tables is computed by multiplying\n> + memory limit for a hash table is computed by multiplying\n\nAdjusted patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Thu, 7 Sep 2023 20:16:21 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "On Fri, 8 Sept 2023 at 15:24, Bruce Momjian <bruce@momjian.us> wrote:\n> Adjusted patch attached.\n\nThis looks mostly fine to me modulo \"sort or hash\". I do see many\ninstances of \"and/or\" in the docs. 
Maybe that would work better.\n\nDavid\n\n\n", "msg_date": "Fri, 8 Sep 2023 17:23:28 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "> This looks mostly fine to me modulo \"sort or hash\". I do see many\r\n> instances of \"and/or\" in the docs. Maybe that would work better.\r\n\r\n\"sort or hash operations at the same time\" is clear explanation IMO.\r\n\r\nThis latest version of the patch looks good to me.\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Sat, 9 Sep 2023 02:25:30 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "On Sat, 9 Sept 2023 at 14:25, Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > This looks mostly fine to me modulo \"sort or hash\". I do see many\n> > instances of \"and/or\" in the docs. Maybe that would work better.\n>\n> \"sort or hash operations at the same time\" is clear explanation IMO.\n\nJust for anyone else following along that haven't seen the patch. The\nfull text in question is:\n\n+ Note that a complex query might perform several sort or hash\n+ operations at the same time, with each operation generally being\n\nIt's certainly not a show-stopper. I do believe the patch makes some\nimprovements. The reason I'd prefer to see either \"and\" or \"and/or\"\nin place of \"or\" is because the text is trying to imply that many of\nthese operations can run at the same time. I'm struggling to\nunderstand why, given that there could be many sorts and many hashes\ngoing on at once that we'd claim it could only be one *or* the other.\nIf we have 12 sorts and 4 hashes then that's not \"several sort or hash\noperations\", it's \"several sort and hash operations\". 
Of course, it\ncould just be sorts or just hashes, so \"and/or\" works fine for that.\n\nDavid\n\n\n", "msg_date": "Mon, 11 Sep 2023 22:02:55 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "On Mon, Sep 11, 2023 at 10:02:55PM +1200, David Rowley wrote:\n> On Sat, 9 Sept 2023 at 14:25, Imseih (AWS), Sami <simseih@amazon.com> wrote:\n> >\n> > > This looks mostly fine to me modulo \"sort or hash\". I do see many\n> > > instances of \"and/or\" in the docs. Maybe that would work better.\n> >\n> > \"sort or hash operations at the same time\" is clear explanation IMO.\n> \n> Just for anyone else following along that haven't seen the patch. The\n> full text in question is:\n> \n> + Note that a complex query might perform several sort or hash\n> + operations at the same time, with each operation generally being\n> \n> It's certainly not a show-stopper. I do believe the patch makes some\n> improvements. The reason I'd prefer to see either \"and\" or \"and/or\"\n> in place of \"or\" is because the text is trying to imply that many of\n> these operations can run at the same time. I'm struggling to\n> understand why, given that there could be many sorts and many hashes\n> going on at once that we'd claim it could only be one *or* the other.\n> If we have 12 sorts and 4 hashes then that's not \"several sort or hash\n> operations\", it's \"several sort and hash operations\". 
Of course, it\n> could just be sorts or just hashes, so \"and/or\" works fine for that.\n\nYes, I see your point and went with \"and\", updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Mon, 11 Sep 2023 11:03:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "On Tue, 12 Sept 2023 at 03:03, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Sep 11, 2023 at 10:02:55PM +1200, David Rowley wrote:\n> > It's certainly not a show-stopper. I do believe the patch makes some\n> > improvements. The reason I'd prefer to see either \"and\" or \"and/or\"\n> > in place of \"or\" is because the text is trying to imply that many of\n> > these operations can run at the same time. I'm struggling to\n> > understand why, given that there could be many sorts and many hashes\n> > going on at once that we'd claim it could only be one *or* the other.\n> > If we have 12 sorts and 4 hashes then that's not \"several sort or hash\n> > operations\", it's \"several sort and hash operations\". Of course, it\n> > could just be sorts or just hashes, so \"and/or\" works fine for that.\n>\n> Yes, I see your point and went with \"and\", updated patch attached.\n\nLooks good to me.\n\nDavid\n\n\n", "msg_date": "Wed, 27 Sep 2023 02:05:44 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct the documentation for work_mem" }, { "msg_contents": "On Wed, Sep 27, 2023 at 02:05:44AM +1300, David Rowley wrote:\n> On Tue, 12 Sept 2023 at 03:03, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Mon, Sep 11, 2023 at 10:02:55PM +1200, David Rowley wrote:\n> > > It's certainly not a show-stopper. I do believe the patch makes some\n> > > improvements. 
The reason I'd prefer to see either \"and\" or \"and/or\"\n> > > in place of \"or\" is because the text is trying to imply that many of\n> > > these operations can run at the same time. I'm struggling to\n> > > understand why, given that there could be many sorts and many hashes\n> > > going on at once that we'd claim it could only be one *or* the other.\n> > > If we have 12 sorts and 4 hashes then that's not \"several sort or hash\n> > > operations\", it's \"several sort and hash operations\". Of course, it\n> > > could just be sorts or just hashes, so \"and/or\" works fine for that.\n> >\n> > Yes, I see your point and went with \"and\", updated patch attached.\n> \n> Looks good to me.\n\nPatch applied back to Postgres 11.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 26 Sep 2023 19:44:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Correct the documentation for work_mem" } ]
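As a concrete companion to the arithmetic in this thread — Sami's two-sorts-at-4 MB example and the work_mem × hash_mem_multiplier rule from Tom's wording fix — here is a back-of-envelope sketch. It is only an upper-bound estimate, not how the executor actually accounts for memory: the function name is invented for the example, the worst case assumes every sort and hash operation is in progress at the same time, and 2.0 is hash_mem_multiplier's default in PostgreSQL 15 and later.

```python
def worst_case_query_memory_kb(n_sorts, n_hashes, work_mem_kb,
                               hash_mem_multiplier=2.0):
    """Upper bound (kB) on work_mem-governed memory for one query
    execution: each sort may use up to work_mem before spilling to
    temporary files, and each hash table up to
    work_mem * hash_mem_multiplier."""
    return n_sorts * work_mem_kb + n_hashes * work_mem_kb * hash_mem_multiplier
```

With work_mem = 4 MB, Sami's two concurrent sorts give `worst_case_query_memory_kb(2, 0, 4096)` = 8192 kB, and the hypothetical 12-sorts-and-4-hashes plan mentioned above could reach 81920 kB (80 MB) — the situation the final "several sort and hash operations at the same time" wording is warning about.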
[ { "msg_contents": "A couple of days ago, our PostGIS PG16 bots started failing with order\nchanges in text.\nWe have our tests set to locale=c\n\nIt seems since April 20th, our tests that rely on sorting characters\nchanged.\nAs noted in this ticket:\n\nhttps://trac.osgeo.org/postgis/ticket/5375\n\nI'm assuming it's result of icu change:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=fcb21b3ac\ndcb9a60313325618fd7080aa36f1626\n\nI suspect all our bots are compiling with icu enabled. But I haven't\nconfirmed.\n\nI'm assuming this is an expected change in behavior, but just want to\nconfirm.\n\nThanks,\nRegina\n\n\n\n\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 11:27:19 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "Order changes in PG16 since ICU introduction" }, { "msg_contents": "\"Regina Obe\" <lr@pcorp.us> writes:\n> A couple of days ago, our PostGIS PG16 bots started failing with order\n> changes in text.\n> We have our tests set to locale=c\n\n> It seems since April 20th, our tests that rely on sorting characters\n> changed.\n> As noted in this ticket:\n\n> https://trac.osgeo.org/postgis/ticket/5375\n\n> I'm assuming it's result of icu change:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=fcb21b3ac\n> dcb9a60313325618fd7080aa36f1626\n\n> I suspect all our bots are compiling with icu enabled. 
But I haven't\n> confirmed.\n\nIf they actually are using locale C, I would say this is a bug.\nThat should designate memcmp sorting and nothing else.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Apr 2023 11:48:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 11:27 -0400, Regina Obe wrote:\n> A couple of days ago, our PostGIS PG16 bots started failing with\n> order\n> changes in text.\n> We have our tests set to locale=c\n\nAre you sure it's still using the C locale? The results seem to be\nexplainable if the locale switched from \"C\" to \"en-US-x-icu\":\n\nThe results of the following are the same in v15 and v16:\n\nselect 'RM(25)/nodes|+|21|1' collate \"C\" < 'RM(25)/nodes|-|21|' collate\n\"C\"; -- true\n\nselect 'RM(25)/nodes|+|21|1' collate \"en-US-x-icu\" < 'RM(25)/nodes|-\n|21|' collate \"en-US-x-icu\"; -- false\n\nI suspect when the initdb and configure defaults changed from libc to\nICU, then your locale changed.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 09:15:39 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, Apr 21, 2023 at 11:48:51AM -0400, Tom Lane wrote:\n> \"Regina Obe\" <lr@pcorp.us> writes:\n> \n> > https://trac.osgeo.org/postgis/ticket/5375\n> \n> If they actually are using locale C, I would say this is a bug.\n> That should designate memcmp sorting and nothing else.\n\nSounds like a bug to me. 
This is happening with a PostgreSQL cluster\ncreated and served by a build of commit c04c6c5d6f :\n\n =# select version();\n PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0, 64-bit\n =# show lc_collate;\n C\n =# select '+' < '-';\n f\n =# select '+' < '-' collate \"C\";\n t\n\nI don't know if it should matter but also:\n\n =# show lc_messages;\n C\n\n--strk;\n\n\n", "msg_date": "Fri, 21 Apr 2023 19:09:14 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 21.04.23 19:09, Sandro Santilli wrote:\n> On Fri, Apr 21, 2023 at 11:48:51AM -0400, Tom Lane wrote:\n>> \"Regina Obe\" <lr@pcorp.us> writes:\n>>\n>>> https://trac.osgeo.org/postgis/ticket/5375\n>>\n>> If they actually are using locale C, I would say this is a bug.\n>> That should designate memcmp sorting and nothing else.\n> \n> Sounds like a bug to me. This is happening with a PostgreSQL cluster\n> created and served by a build of commit c04c6c5d6f :\n> \n> =# select version();\n> PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0, 64-bit\n> =# show lc_collate;\n> C\n> =# select '+' < '-';\n> f\n\nIf the database is created with locale provider ICU, then lc_collate \ndoes not apply here, so the result might be correct (depending on what \nlocale you have set).\n\n> =# select '+' < '-' collate \"C\";\n> t\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 19:14:13 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 19:09 +0200, Sandro Santilli wrote:\n>   =# select version();\n>   PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n> 11.3.0-1ubuntu1~22.04) 11.3.0, 64-bit\n>   =# show lc_collate;\n>   C\n>   =# select '+' < '-';\n>   f\n\nWhat is the 
result of:\n\n select datlocprovider, datcollate, daticulocale\n from pg_database where datname=current_database();\n\n>   =# select '+' < '-' collate \"C\";\n>   t\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 10:27:49 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> If the database is created with locale provider ICU, then lc_collate \n> does not apply here, so the result might be correct (depending on what \n> locale you have set).\n\nFWIW, an installation created under LANG=C defaults to ICU locale\nen-US-u-va-posix for me (see psql \\l), and that still sorts as\nexpected on my RHEL8 box. We've not seen buildfarm problems either.\n\nI am wondering however whether this doesn't mean that all our carefully\ncoded fast paths for C locale just went down the drain. Does the ICU\ncode have any of that? Has any performance testing been done to see\nwhat impact this change had on C-locale installations? 
(The current\ncode coverage report for pg_locale.c is not encouraging.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Apr 2023 13:28:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > If the database is created with locale provider ICU, then lc_collate\n> > does not apply here, so the result might be correct (depending on what\n> > locale you have set).\n> \n> FWIW, an installation created under LANG=C defaults to ICU locale en-US-u-\n> va-posix for me (see psql \\l), and that still sorts as expected on my\nRHEL8 box.\n> We've not seen buildfarm problems either.\n> \n> I am wondering however whether this doesn't mean that all our carefully\n> coded fast paths for C locale just went down the drain. Does the ICU code\n> have any of that? Has any performance testing been done to see what\nimpact\n> this change had on C-locale installations? 
(The current code coverage\nreport\n> for pg_locale.c is not encouraging.)\n> \n> \t\t\tregards, tom lane\n\nJust another metric.\n\nOn my mingw64 setup, I built a test database on PG16 (built with icu\nsupport) and PG15 (no icu support)\n\nCREATE DATABASE test TEMPLATE=template0 ENCODING = 'UTF8' LC_COLLATE = 'C'\nLC_CTYPE = 'C';\n\nI think the above is similar to the setup we have when testing.\n\nOn PG15 \n\nSELECT '+' < '-' ; returns true\n\nOn PG 16 returns false\n\nFor PG 16, to strk's point, to get a true you have to do:\nSELECT '+' COLLATE \"C\" < '-' COLLATE \"C\";\n\n\nI would expect since I'm initializing my db in collate C they would both\nbehave the same\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 13:37:25 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\"Regina Obe\" <lr@pcorp.us> writes:\n> On my mingw64 setup, I built a test database on PG16 (built with icu\n> support) and PG15 (no icu support)\n\n> CREATE DATABASE test TEMPLATE=template0 ENCODING = 'UTF8' LC_COLLATE = 'C'\n> LC_CTYPE = 'C';\n\nAs has been pointed out already, setting LC_COLLATE/LC_CTYPE is\nmeaningless when the locale provider is ICU. 
You need to look\nat what ICU locale is being chosen, or force it with LOCALE = 'C'.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Apr 2023 13:46:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "> > CREATE DATABASE test TEMPLATE=template0 ENCODING = 'UTF8'\n> LC_COLLATE = 'C'\n> > LC_CTYPE = 'C';\n> \n> As has been pointed out already, setting LC_COLLATE/LC_CTYPE is\n> meaningless when the locale provider is ICU. You need to look at what ICU\n> locale is being chosen, or force it with LOCALE = 'C'.\n> \n> \t\t\tregards, tom lane\n\nOkay, got it. I was on IRC with RhodiumToad and he suggested:\n\nCREATE DATABASE test2 TEMPLATE=template0 ENCODING = 'UTF8' LC_COLLATE = 'C'\nLC_CTYPE = 'C' ICU_LOCALE='C';\n\nWhich gives expected result:\nSELECT '+' < '-' ; -- true\n\nbut gives me a notice:\nNOTICE: using standard form \"en-US-u-va-posix\" for locale \"C\"\n\n\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 13:56:14 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\"Regina Obe\" <lr@pcorp.us> writes:\n> Okay, got it. I was on IRC with RhodiumToad and he suggested:\n> CREATE DATABASE test2 TEMPLATE=template0 ENCODING = 'UTF8' LC_COLLATE = 'C'\n> LC_CTYPE = 'C' ICU_LOCALE='C';\n\n> Which gives expected result:\n> SELECT '+' < '-' ; -- true\n\n> but gives me a notice:\n> NOTICE: using standard form \"en-US-u-va-posix\" for locale \"C\"\n\nYeah. 
My recommendation is just LOCALE:\n\nregression=# CREATE DATABASE test1 TEMPLATE=template0 ENCODING = 'UTF8' LOCALE = 'C';\nCREATE DATABASE\nregression=# CREATE DATABASE test2 TEMPLATE=template0 ENCODING = 'UTF8' ICU_LOCALE = 'C';\nNOTICE: using standard form \"en-US-u-va-posix\" for locale \"C\"\nCREATE DATABASE\n\nI think it's probably intentional that ICU_LOCALE is stricter\nabout being given a real ICU locale name, but I didn't write\nany of that code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Apr 2023 13:59:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": ">>>>> \"Peter\" == Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n Peter> If the database is created with locale provider ICU, then\n Peter> lc_collate does not apply here,\n\nHaving lc_collate return a value which is silently being ignored seems\nto me rather hugely confusing.\n\nAlso, somewhere along the line someone broke initdb --no-locale, which\nshould result in C locale being the default everywhere, but when I just\ntested it it picked 'en' for an ICU locale, which is not the right\nthing.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Fri, 21 Apr 2023 19:00:35 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Peter\" == Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Peter> If the database is created with locale provider ICU, then\n> Peter> lc_collate does not apply here,\n\n> Having lc_collate return a value which is silently being ignored seems\n> to me rather hugely confusing.\n\nIt's not *completely* ignored --- there are bits of code that are not\nyet ICU-ified and will still use the libc facilities. 
So we can't\nget rid of those options yet, even in an ICU-based database.\n\n> Also, somewhere along the line someone broke initdb --no-locale, which\n> should result in C locale being the default everywhere, but when I just\n> tested it it picked 'en' for an ICU locale, which is not the right\n> thing.\n\nConfirmed:\n\n$ LANG=en_US.utf8 initdb --no-locale\nThe files belonging to this database system will be owned by user \"postgres\".\nThis user must also own the server process.\n\nUsing default ICU locale \"en_US\".\nUsing language tag \"en-US\" for ICU locale \"en_US\".\nThe database cluster will be initialized with this locale configuration:\n provider: icu\n ICU locale: en-US\n LC_COLLATE: C\n LC_CTYPE: C\n ...\n\nThat needs to be fixed: --no-locale should prevent any consideration\nof initdb's LANG/LC_foo environment.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Apr 2023 14:06:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "> Yeah. 
My recommendation is just LOCALE:\n> \n> regression=# CREATE DATABASE test1 TEMPLATE=template0 ENCODING =\n> 'UTF8' LOCALE = 'C'; CREATE DATABASE regression=# CREATE DATABASE test2\n> TEMPLATE=template0 ENCODING = 'UTF8' ICU_LOCALE = 'C';\n> NOTICE: using standard form \"en-US-u-va-posix\" for locale \"C\"\n> CREATE DATABASE\n> \n> I think it's probably intentional that ICU_LOCALE is stricter about being\ngiven\n> a real ICU locale name, but I didn't write any of that code.\n> \n> \t\t\tregards, tom lane\n\nCREATE DATABASE test1 TEMPLATE=template0 ENCODING = 'UTF8' LOCALE = 'C';\n\nDoesn't seem to work at least not under mingw64 anyway.\n\nSELECT '+' < '-' ;\n\nReturns false\n\n\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 14:13:49 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\"Regina Obe\" <lr@pcorp.us> writes:\n> CREATE DATABASE test1 TEMPLATE=template0 ENCODING = 'UTF8' LOCALE = 'C';\n> Doesn't seem to work at least not under mingw64 anyway.\n\nHmm, doesn't work for me either:\n\n$ LANG=en_US.utf8 initdb\nThe files belonging to this database system will be owned by user \"postgres\".\nThis user must also own the server process.\n\nUsing default ICU locale \"en_US\".\nUsing language tag \"en-US\" for ICU locale \"en_US\".\nThe database cluster will be initialized with this locale configuration:\n provider: icu\n ICU locale: en-US\n LC_COLLATE: en_US.utf8\n LC_CTYPE: en_US.utf8\n LC_MESSAGES: en_US.utf8\n LC_MONETARY: en_US.utf8\n LC_NUMERIC: en_US.utf8\n LC_TIME: en_US.utf8\n ...\n$ psql postgres\npsql (16devel)\nType \"help\" for help.\n\npostgres=# SELECT '+' < '-' ;\n ?column? 
\n----------\n f\n(1 row)\n\n(as expected, so far)\n\npostgres=# CREATE DATABASE test1 TEMPLATE=template0 ENCODING = 'UTF8' LOCALE = 'C';\nCREATE DATABASE\npostgres=# \\c test1\nYou are now connected to database \"test1\" as user \"postgres\".\ntest1=# SELECT '+' < '-' ;\n ?column? \n----------\n f\n(1 row)\n\n(wrong!)\n\ntest1=# \\l\n List of databases\n Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges \n-----------+----------+----------+-----------------+------------+------------+------------+-----------+-----------------------\n postgres | postgres | UTF8 | icu | en_US.utf8 | en_US.utf8 | en-US | | \n template0 | postgres | UTF8 | icu | en_US.utf8 | en_US.utf8 | en-US | | =c/postgres +\n | | | | | | | | postgres=CTc/postgres\n template1 | postgres | UTF8 | icu | en_US.utf8 | en_US.utf8 | en-US | | =c/postgres +\n | | | | | | | | postgres=CTc/postgres\n test1 | postgres | UTF8 | icu | C | C | en-US | | \n(4 rows)\n\nLooks like the \"pick en-US even when told not to\" problem exists here too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Apr 2023 14:23:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, Apr 21, 2023 at 07:14:13PM +0200, Peter Eisentraut wrote:\n> On 21.04.23 19:09, Sandro Santilli wrote:\n> > On Fri, Apr 21, 2023 at 11:48:51AM -0400, Tom Lane wrote:\n> > > \"Regina Obe\" <lr@pcorp.us> writes:\n> > > \n> > > > https://trac.osgeo.org/postgis/ticket/5375\n> > > \n> > > If they actually are using locale C, I would say this is a bug.\n> > > That should designate memcmp sorting and nothing else.\n> > \n> > Sounds like a bug to me. 
This is happening with a PostgreSQL cluster\n> > created and served by a build of commit c04c6c5d6f :\n> > \n> > =# select version();\n> > PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0, 64-bit\n> > =# show lc_collate;\n> > C\n> > =# select '+' < '-';\n> > f\n> \n> If the database is created with locale provider ICU, then lc_collate does\n> not apply here, so the result might be correct (depending on what locale you\n> have set).\n\nThe database is created by a perl script which starts like this:\n\n $ENV{\"LC_ALL\"} = \"C\";\n $ENV{\"LANG\"} = \"C\";\n\nAnd then runs:\n\n createdb --encoding=UTF-8 --template=template0 --lc-collate=C \n\nShould we tweak anything else to make the results predictable ?\n\n--strk;\n\n\n", "msg_date": "Fri, 21 Apr 2023 21:14:13 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> Also, somewhere along the line someone broke initdb --no-locale,\n >> which should result in C locale being the default everywhere, but\n >> when I just tested it it picked 'en' for an ICU locale, which is not\n >> the right thing.\n\n Tom> Confirmed:\n\n Tom> $ LANG=en_US.utf8 initdb --no-locale\n Tom> The files belonging to this database system will be owned by user \"postgres\".\n Tom> This user must also own the server process.\n\n Tom> Using default ICU locale \"en_US\".\n Tom> Using language tag \"en-US\" for ICU locale \"en_US\".\n Tom> The database cluster will be initialized with this locale configuration:\n Tom> provider: icu\n Tom> ICU locale: en-US\n Tom> LC_COLLATE: C\n Tom> LC_CTYPE: C\n Tom> ...\n\n Tom> That needs to be fixed: --no-locale should prevent any\n Tom> consideration of initdb's LANG/LC_foo environment.\n\nWould it also not make sense to also take into account any --locale and\n--lc-* options before choosing an ICU 
default locale? Right now if you\ndo, say, initdb --locale=fr_FR you get an ICU locale based on the\nenvironment but lc_* settings based on the option, which seems maximally\nconfusing.\n\nAlso, what happens now to lc_collate_is_c() when the provider is ICU? Am\nI missing something, or is it never true now, even if you specified C /\nPOSIX / en-US-u-va-posix as the ICU locale? This seems like it could be\nan important pessimization.\n\nAlso also, we now have the problem that it is much harder to create a\n'C' collation database within an existing cluster (e.g. for testing)\nwithout knowing whether the default provider is ICU. In the past one\nwould have done:\n\nCREATE DATABASE test TEMPLATE=template0 ENCODING = 'UTF8' LOCALE = 'C';\n\nbut now that creates a database that uses the same ICU locale as\ntemplate0 by default. If instead one tries:\n\nCREATE DATABASE test TEMPLATE=template0 ENCODING = 'UTF8' LOCALE = 'C' ICU_LOCALE='C';\n\nthen one gets an error if the default locale provider is _not_ ICU. 
The\nonly option now seems to be:\n\nCREATE DATABASE test TEMPLATE=template0 ENCODING = 'UTF8' LOCALE = 'C' LOCALE_PROVIDER = 'libc';\n\nwhich of course doesn't work in older pg versions.\n\n-- \nAndrew.\n\n\n", "msg_date": "Fri, 21 Apr 2023 20:14:20 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, Apr 21, 2023 at 10:27:49AM -0700, Jeff Davis wrote:\n> On Fri, 2023-04-21 at 19:09 +0200, Sandro Santilli wrote:\n> > =# select version();\n> > PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n> > 11.3.0-1ubuntu1~22.04) 11.3.0, 64-bit\n> > =# show lc_collate;\n> > C\n> > =# select '+' < '-';\n> > f\n> \n> What is the result of:\n> \n> select datlocprovider, datcollate, daticulocale\n> from pg_database where datname=current_database();\n\ndatlocprovider | i\ndatcollate | C\ndaticulocale | en-US\n\n--strk;\n\n\n", "msg_date": "Fri, 21 Apr 2023 21:17:25 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 14:23 -0400, Tom Lane wrote:\n> postgres=# CREATE DATABASE test1 TEMPLATE=template0 ENCODING = 'UTF8'\n> LOCALE = 'C';\n\n...\n\n>  test1     | postgres | UTF8     | icu             | C          |\n> C          | en-US      |           | \n> (4 rows)\n> \n> Looks like the \"pick en-US even when told not to\" problem exists here\n> too.\n\nBoth provider (ICU) and the icu locale (en-US) are inherited from\ntemplate0. The LOCALE parameter to CREATE DATABASE doesn't affect\neither of those things, because there's a separate parameter\nICU_LOCALE.\n\nThis happens the same way in v15, and although it matches the\ndocumentation technically, it is not a great user experience.\n\nI have a couple ideas:\n\n1. 
Introduce a \"none\" provider to separate the concept of C/POSIX\nlocales from the libc provider. It's not really using a provider\nanyway, it's just using memcmp(), and I think it causes confusion to\ncombine them. Saying \"LOCALE_PROVIDER=none\" is less error-prone than\n\"LOCALE_PROVIDER=libc LOCALE='C'\".\n\n2. Change the CREATE DATABASE syntax to catch these errors better at\nthe possible expense of backwards compatibility.\n\nI am also having second thoughts about accepting \"C\" or \"POSIX\" as an\nICU locale and transforming it to \"en-US-u-va-posix\" in v16. It's not\nterribly useful (why not just use memcmp()?), it's not fast in my\nmeasurements (en-US is faster), so maybe it's better to just throw an\nerror and tell the user to use C (or provider=none as I suggest\nabove)? \n\nObviously the user could manually type \"en-US-u-va-posix\" if that's the\nlocale they want. Throwing an error would be a backwards-compatibility\nissue, but in v15 an ICU locale of \"C\" just gives the root locale\nanyway, which is probably not what they want.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 12:25:00 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> I have a couple ideas:\n\n> 1. Introduce a \"none\" provider to separate the concept of C/POSIX\n> locales from the libc provider. It's not really using a provider\n> anyway, it's just using memcmp(), and I think it causes confusion to\n> combine them. 
Saying \"LOCALE_PROVIDER=none\" is less error-prone than\n> \"LOCALE_PROVIDER=libc LOCALE='C'\".\n\nI think I might like this idea, except for one thing: you're imagining\nthat the locale doesn't control anything except string comparisons.\nWhat about to_upper/to_lower, character classifications in regexes, etc?\n(I'm not sure whether those operations can get redirected to ICU today\nor whether they still always go to libc, but we'll surely want to fix\nit eventually if the latter is still true.)\n\nIn any case, that seems somewhat orthogonal to what we're on about here,\nwhich is making the behavior of CREATE DATABASE less surprising and more\nbackwards-compatible. I'm not sure that provider=none can help with that.\nAside from the user-surprise issues discussed up to now, pg_dump scripts\nemitted by pre-v15 pg_dump are not going to contain LOCALE_PROVIDER\nclauses in CREATE DATABASE, and people are going to be very unhappy\nif that means they suddenly get totally different locale semantics\nafter restoring into a new DB. I think we need some plan for mapping\nlibc-style locale specs into ICU locales so that we can make that\nmore nearly transparent.\n\n> 2. Change the CREATE DATABASE syntax to catch these errors better at\n> the possible expense of backwards compatibility.\n\nThat is the exact opposite of what I think we need. 
Backwards\ncompatibility isn't optional.\n\nMaybe this means we are not ready to do ICU-by-default in v16.\nIt certainly feels like there might be more here than we want to\nstart designing post-feature-freeze.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Apr 2023 16:00:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 21:14 +0200, Sandro Santilli wrote:\n> And then runs:\n> \n>   createdb --encoding=UTF-8 --template=template0 --lc-collate=C \n> \n> Should we tweak anything else to make the results predictable ?\n\nYou can specify --locale-provider=libc\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 13:03:19 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 13:28 -0400, Tom Lane wrote:\n> I am wondering however whether this doesn't mean that all our\n> carefully\n> coded fast paths for C locale just went down the drain.\n\nThe code still exists. 
You can test it by using the built-in collation\n\"C\" which is correctly specified with collprovider=libc and\ncollcollate=C.\n\nFor my test dataset, ICU 72, glibc 2.35:\n\n -- ~07s\n explain analyze select t from a order by t collate \"C\";\n\n -- ~15s\n explain analyze select t from a order by t collate \"en-US-x-icu\";\n\n -- ~21s\n explain analyze select t from a order by t collate \"en-US-u-va-posix-x-icu\";\n\n -- ~34s\n explain analyze select t from a order by t collate \"en_US\";\n\nI believe the confusion in this thread comes from:\n\n* The syntax of CREATE DATABASE (the same as v15 but still confusing)\n* The fact that you need provider=libc to get memcmp() behavior (same\nas v15 but still confusing)\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 13:13:27 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, Apr 21, 2023 at 3:25 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I am also having second thoughts about accepting \"C\" or \"POSIX\" as an\n> ICU locale and transforming it to \"en-US-u-va-posix\" in v16. It's not\n> terribly useful (why not just use memcmp()?), it's not fast in my\n> measurements (en-US is faster), so maybe it's better to just throw an\n> error and tell the user to use C (or provider=none as I suggest\n> above)?\n\nI mean, to renew a complaint I've made previously, how the heck is\nanyone supposed to understand what's going on here?\n\nWe have no meaningful documentation of how to select an ICU locale\nthat works for you. We have a couple of examples and a suggestion that\nyou should use BCP 47. But when I asked before for documentation\nreferences, the ones you provided were not clear, basically\nincomprehensible. 
In follow-up discussion, you admitted you'd had to\nconsult the source code to figure certain things out.\n\nAnd the fact that \"C\" or \"POSIX\" gets transformed into\n\"en-US-u-va-posix\" is also completely undocumented. That string appears\ntwice in the code, but zero times in the documentation. There's code\nto do it, but users shouldn't have to read code, and it wouldn't help\nmuch if they did, because the code comments don't really explain the\nrationale behind this choice either.\n\nI find the fact that people are having trouble here completely\npredictable. Of course if people ask for \"C\" and the system tells them\nthat it's using \"en-US-u-va-posix\" instead they're going to be\nconfused and ask questions, exactly as is happening here. glibc\ncollations aren't particularly well-documented either, but people have\nsome experience with them, and they can get a list of values that have a\nchance of working from /usr/share/locale, and they know what \"C\"\nmeans. Nobody knows what \"en-US-u-va-posix\" is. It's not even\nGoogleable, really, whereas \"C locale\" is.\n\nMy opinion is that the switch to using ICU by default is ill-advised\nand should be reverted. 
The compatibility break isn't worth whatever\nadvantages ICU may have, the documentation to allow people to\ntransition to ICU with reasonable effort doesn't exist, and the fact\nthat within weeks of feature freeze people who know a lot about\nPostgreSQL are struggling to get the behavior they want is a really\nbad sign.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 Apr 2023 16:33:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 19:00 +0100, Andrew Gierth wrote:\n> > > > > \n> Also, somewhere along the line someone broke initdb --no-locale,\n> which\n> should result in C locale being the default everywhere, but when I\n> just\n> tested it it picked 'en' for an ICU locale, which is not the right\n> thing.\n\nFixed, thank you.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 13:45:50 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 16:00 -0400, Tom Lane wrote:\n> Maybe this means we are not ready to do ICU-by-default in v16.\n> It certainly feels like there might be more here than we want to\n> start designing post-feature-freeze.\n\nI don't see how punting to the next release helps. 
If the CREATE\nDATABASE syntax (and similar issues for createdb and initdb) in v15 is\njust too confusing, and we can't find a remedy for v16, then we\nprobably won't find a remedy for v17 either.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 13:50:41 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": ">>>>> \"Jeff\" == Jeff Davis <pgsql@j-davis.com> writes:\n\n >> Also, somewhere along the line someone broke initdb --no-locale,\n >> which should result in C locale being the default everywhere, but\n >> when I just tested it it picked 'en' for an ICU locale, which is not\n >> the right thing.\n\n Jeff> Fixed, thank you.\n\nIs that the right fix, though? (It forces --locale-provider=libc for the\ncluster default, which might not be desirable?)\n\n-- \nAndrew.\n\n\n", "msg_date": "Fri, 21 Apr 2023 22:08:59 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 22:08 +0100, Andrew Gierth wrote:\n> > > > > \n> Is that the right fix, though? (It forces --locale-provider=libc for\n> the\n> cluster default, which might not be desirable?)\n\nFor the \"no locale\" behavior (memcmp()-based) the provider needs to be\nlibc. Do you see an alternative?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 14:25:26 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": ">>>>> \"Jeff\" == Jeff Davis <pgsql@j-davis.com> writes:\n\n >> Is that the right fix, though? (It forces --locale-provider=libc for\n >> the cluster default, which might not be desirable?)\n\n Jeff> For the \"no locale\" behavior (memcmp()-based) the provider needs\n Jeff> to be libc. 
Do you see an alternative?\n\nCan lc_collate_is_c() be taught to check whether an ICU locale is using\nPOSIX collation?\n\nThere's now another bug in that --no-locale no longer does the same\nthing as --locale=C (which is its long-established documented behavior).\nHow should these various options interact? This all seems not well\nthought out from a usability perspective, and I think a proper fix\nshould involve a bit more serious consideration.\n\n-- \nAndrew.\n\n\n", "msg_date": "Fri, 21 Apr 2023 22:35:09 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 16:33 -0400, Robert Haas wrote:\n\n> My opinion is that the switch to using ICU by default is ill-advised\n> and should be reverted.\n\nMost of the complaints seem to be complaints about v15 as well, and\nwhile those complaints may be a reason to not make ICU the default,\nthey are also an argument that we should continue to learn and try to\nfix those issues because they exist in an already-released version.\nLeaving it the default for now will help us fix those issues rather\nthan hide them.\n\nIt's still early, so we have plenty of time to revert the initdb\ndefault if we need to.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 14:56:19 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "> > My opinion is that the switch to using ICU by default is ill-advised\n> > and should be reverted.\n> \n> Most of the complaints seem to be complaints about v15 as well, and while\n> those complaints may be a reason to not make ICU the default, they are also\n> an argument that we should continue to learn and try to fix those issues\n> because they exist in an already-released version.\n> Leaving it the default for now will help us fix those issues 
rather than hide\n> them.\n> \n> It's still early, so we have plenty of time to revert the initdb default if we need\n> to.\n> \n> Regards,\n> \tJeff Davis\n\nI'm fine with that. Sounds like it wouldn't be too hard to just pull it out at the end.\n\nBefore this, I didn't even know ICU existed in PG15. My first realization that ICU was even a thing was when my PG16 refused to compile without adding my ICU path to my pkg-config or putting in --without-icu.\n\nSo yah I suspect leaving it in a little bit longer will uncover some more issues and won't harm too much.\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 18:39:26 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 16:33 -0400, Robert Haas wrote:\n> And the fact that \"C\" or \"POSIX\" gets transformed into\n> \"en-US-u-va-posix\"\n\nI already expressed, on reflection, that we should probably just not do\nthat. So I think we're in agreement on this point; patch attached.\n\nRegards,\n\tJeff Davis", "msg_date": "Fri, 21 Apr 2023 16:00:30 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, Apr 21, 2023 at 5:56 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Most of the complaints seem to be complaints about v15 as well, and\n> while those complaints may be a reason to not make ICU the default,\n> they are also an argument that we should continue to learn and try to\n> fix those issues because they exist in an already-released version.\n> Leaving it the default for now will help us fix those issues rather\n> than hide them.\n>\n> It's still early, so we have plenty of time to revert the initdb\n> default if we need to.\n\nThat's fair enough, but I really think it's important that some energy\nget invested in providing adequate documentation for this stuff. 
Just\npatching the code is not enough.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 Apr 2023 20:12:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 21.04.23 19:14, Peter Eisentraut wrote:\n> On 21.04.23 19:09, Sandro Santilli wrote:\n>> On Fri, Apr 21, 2023 at 11:48:51AM -0400, Tom Lane wrote:\n>>> \"Regina Obe\" <lr@pcorp.us> writes:\n>>>\n>>>> https://trac.osgeo.org/postgis/ticket/5375\n>>>\n>>> If they actually are using locale C, I would say this is a bug.\n>>> That should designate memcmp sorting and nothing else.\n>>\n>> Sounds like a bug to me. This is happening with a PostgreSQL cluster\n>> created and served by a build of commit c04c6c5d6f :\n>>\n>>    =# select version();\n>>    PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu \n>> 11.3.0-1ubuntu1~22.04) 11.3.0, 64-bit\n>>    =# show lc_collate;\n>>    C\n>>    =# select '+' < '-';\n>>    f\n> \n> If the database is created with locale provider ICU, then lc_collate \n> does not apply here, so the result might be correct (depending on what \n> locale you have set).\n\nThe GUC settings lc_collate and lc_ctype are from a time when those \nlocale settings were cluster-global. When we made those locale settings \nper-database (PG 8.4), we kept them as read-only. 
As of PG 15, you can \nuse ICU as the per-database locale provider, so what is being attempted \nin the above example is already meaningless before PG 16, since you need \nto look into pg_database to find out what is really happening.\n\nI think we should just remove the GUC parameters lc_collate and lc_ctype.\n\n\n\n", "msg_date": "Mon, 24 Apr 2023 17:10:04 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 22.04.23 01:00, Jeff Davis wrote:\n> On Fri, 2023-04-21 at 16:33 -0400, Robert Haas wrote:\n>> And the fact that \"C\" or \"POSIX\" gets transformed into\n>> \"en-US-u-va-posix\"\n> \n> I already expressed, on reflection, that we should probably just not do\n> that. So I think we're in agreement on this point; patch attached.\n\nThis makes sense to me. This way, if someone specifies 'C' locale \ntogether with ICU provider they get an error. They can then choose to \nuse the libc provider, to get the performance path, or stick with ICU by \nusing the native spelling of the locale.\n\n\n\n", "msg_date": "Mon, 24 Apr 2023 17:26:42 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 16:00 -0400, Tom Lane wrote:\n> I think I might like this idea, except for one thing: you're\n> imagining\n> that the locale doesn't control anything except string comparisons.\n> What about to_upper/to_lower, character classifications in regexes,\n> etc?\n\nIf provider='libc' and LC_CTYPE='C', str_toupper/str_tolower are\nhandled with asc_tolower/asc_toupper. The regex character\nclassification is done with pg_char_properties. 
In these cases neither\nICU nor libc is used; it's just code in postgres.\n\nlibc is special in that you can set LC_COLLATE and LC_CTYPE separately,\nso that different locales are used for sorting and character\nclassification. That's potentially useful to set LC_COLLATE to C for\nperformance reasons, while setting LC_CTYPE to a useful locale. We\ndon't allow ICU to set collation and ctype separately (it would be\npossible to allow it, but I don't think there's a huge demand and it's\narguably inconsistent to set them differently).\n\n> (I'm not sure whether those operations can get redirected to ICU\n> today\n> or whether they still always go to libc, but we'll surely want to fix\n> it eventually if the latter is still true.)\n\nThose operations do get redirected to ICU today. There are extensions\nthat call locale-sensitive libc functions directly, and obviously those\nwon't use ICU.\n\n\n> Aside from the user-surprise issues discussed up to now, pg_dump\n> scripts\n> emitted by pre-v15 pg_dump are not going to contain LOCALE_PROVIDER\n> clauses in CREATE DATABASE, and people are going to be very unhappy\n> if that means they suddenly get totally different locale semantics\n> after restoring into a new DB.\n\nAgreed.\n\n>   I think we need some plan for mapping\n> libc-style locale specs into ICU locales so that we can make that\n> more nearly transparent.\n\nICU does a reasonable job mapping libc-like locale names to ICU\nlocales, e.g. en_US to en-US, etc. The ordering semantics aren't\nguaranteed to be the same, of course (because the libc-locales are\nplatform-dependent), but it's at least conceptually the same locale.\n\n> \n> Maybe this means we are not ready to do ICU-by-default in v16.\n> It certainly feels like there might be more here than we want to\n> start designing post-feature-freeze.\n\nThis thread is already on the Open Items list. As long as it's not too\ndisruptive to others I'll leave it as-is for now to see how this sorts\nout. 
Right now it's not clear to me how much of this is a v15 issue vs\na v16 issue.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 24 Apr 2023 21:31:40 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\tJeff Davis wrote:\n\n> > (I'm not sure whether those operations can get redirected to ICU\n> > today\n> > or whether they still always go to libc, but we'll surely want to fix\n> > it eventually if the latter is still true.)\n> \n> Those operations do get redirected to ICU today. \n\nFTR the full text search parser still uses the libc functions\nis[w]space/alpha/digit... that depend on lc_ctype, whether the db\ncollation provider is ICU or not.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Tue, 25 Apr 2023 17:00:19 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> FTR the full text search parser still uses the libc functions\n> is[w]space/alpha/digit... that depend on lc_ctype, whether the db\n> collation provider is ICU or not.\n\nYeah, those aren't even connected up to the collation-selection\nmechanisms; lots of work to do there. 
I wonder if they could be\nmade to use regc_pg_locale.c instead of duplicating logic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Apr 2023 11:20:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 22:35 +0100, Andrew Gierth wrote:\n> > > > > \n> Can lc_collate_is_c() be taught to check whether an ICU locale is\n> using\n> POSIX collation?\n\nAttached are a few small patches:\n\n 0001: don't convert C to en-US-u-va-posix\n 0002: handle locale C the same regardless of the provider, as you\nsuggest above\n 0003: make LOCALE (or --locale) apply to everything including ICU\n\nAs far as I can tell, any libc locale has a reasonable match in ICU, so\nsetting LOCALE to either C or a libc locale name should be fine. Some\nlocales are only valid in ICU, e.g. '@colStrength=primary', or a\nlanguage tag representation, so if you do something like:\n\n create database foo locale 'en_US@colStrenghth=primary'\n template template0;\n\nYou'll get a decent error like:\n\n ERROR: invalid LC_COLLATE locale name: \"en_US@colStrenghth=primary\"\n HINT: If the locale name is specific to ICU, use ICU_LOCALE.\n\nOverall, I think it works out nicely. Let me know if there are still\nsome confusing cases. 
I tried a few variations and this one seemed the\nbest, but I may have missed something.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 25 Apr 2023 18:15:21 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\tJeff Davis wrote:\n\n> Attached are a few small patches:\n> \n> 0001: don't convert C to en-US-u-va-posix\n> 0002: handle locale C the same regardless of the provider, as you\n> suggest above\n> 0003: make LOCALE (or --locale) apply to everything including ICU\n\nTesting this briefly I noticed two regressions\n\n1) all pg_collation.collversion are empty due to a trivial bug in 0002:\n\n@ -1650,6 +1686,10 @@ get_collation_actual_version(char collprovider, const\nchar *collcollate)\n {\n\tchar\t *collversion = NULL;\n\n+\tif (pg_strcasecmp(\"C\", collcollate) ||\n+\t\tpg_strcasecmp(\"POSIX\", collcollate))\n+\t\treturn NULL;\n+\n\nThis should be pg_strcasecmp(...) == 0\n\n2) The following works with HEAD (default provider=icu) but errors out with\nthe patches:\n\npostgres=# create database lat9 locale 'fr_FR@euro' encoding LATIN9 template\n'template0';\nERROR:\tcould not convert locale name \"fr_FR@euro\" to language tag:\nU_ILLEGAL_ARGUMENT_ERROR\n\nfr_FR@euro is a libc locale name \n\n$ locale -a|grep fr_FR\nfr_FR\nfr_FR@euro\nfr_FR.iso88591\nfr_FR.iso885915@euro\nfr_FR.utf8\n\nI understand that fr_FR@euro is taken as an ICU locale name, with the idea\nthat the locale\nsyntax being more or less compatible between both providers, this should work\nsmoothly. 
0003 seems to go further in the interpretation and fail on it.\nTBH the assumption that it's OK to feed libc locale names to ICU feels quite\nuncomfortable.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Thu, 27 Apr 2023 14:23:24 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Thu, 2023-04-27 at 14:23 +0200, Daniel Verite wrote:\n> This should be pg_strcasecmp(...) == 0\n\nGood catch, thank you! Fixed in updated patches.\n\n> postgres=# create database lat9 locale 'fr_FR@euro' encoding LATIN9\n> template\n> 'template0';\n> ERROR:  could not convert locale name \"fr_FR@euro\" to language tag:\n> U_ILLEGAL_ARGUMENT_ERROR\n\nICU 63 and earlier convert it without error to the language tag 'fr-FR-\nu-cu-eur', which is correct. ICU 64 removed support for transforming\nsome locale variants, because apparently they think those variants are\nobsolete:\n\nhttps://unicode-org.atlassian.net/browse/ICU-22268\nhttps://unicode-org.atlassian.net/browse/ICU-20187\n\n(Aside: how obsolete are those variants?)\n\nIt's frustrating that they'd remove such transformations from the\ncanonicalization process.\n\nFortunately, it looks like it's easy enough to do the transformation\nourselves. The only problematic format is '...@VARIANT'. The other\nformat 'fr_FR_EURO' doesn't seem to be a valid glibc locale name[1] and\nwindows seems to use BCP 47[2].\n\nAnd there don't seem to be a lot of variants to handle. ICU 63 only\nhandles 3 variants, so that's what my patch does. Any unknown variant\nbetween 5 and 8 characters won't throw an error. 
There could be more\nproblem cases, but I'm not sure how much of a practical problem they\nare.\n\nIf we try to keep the meaning of LOCALE to only LC_COLLATE and\nLC_CTYPE, that will continue to be confusing for anyone that uses\nprovider=icu.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.gnu.org/software/libc/manual/html_node/Locale-Names.html\n[2]\nhttps://learn.microsoft.com/en-us/windows/win32/intl/locale-names", "msg_date": "Fri, 28 Apr 2023 14:35:25 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-28 at 14:35 -0700, Jeff Davis wrote:\n> On Thu, 2023-04-27 at 14:23 +0200, Daniel Verite wrote:\n> > This should be pg_strcasecmp(...) == 0\n> \n> Good catch, thank you! Fixed in updated patches.\n\nRebased patches.\n\n=== 0001: do not convert C to en-US-u-va-posix\n\nI plan to commit this soon. If someone specifies \"C\", they are probably\nexpecting memcmp()-like behavior, or some kind of error/warning that it\ncan't be provided.\n\nRemoving this transformation means that if you specify iculocale=C,\nyou'll get an error or warning (depending on icu_validation_level),\nbecause C is not a recognized icu locale. Depending on how some of the\nother issues in this thread are sorted out, we may want to relax the\nvalidation.\n\n=== 0002: fix @euro, etc. in ICU >= 64\n\nI'd like to commit this soon too, but I'll wait for someone to take a\nlook. It makes it more reliable to map libc names to icu locale names\nregardless of the ICU version.\n\nIt doesn't solve the problem for locales like \"de__PHONEBOOK\", but\nthose don't seem to be a libc format (I think just an old ICU format),\nso I don't see a big reason to carry it forward. 
It might be another\nreason to turn down the validation level to WARNING, though.\n\n=== 0003: support C memcmp() behavior with ICU provider\n\nThe current patch 0003 has a problem, because in previous postgres\nversions (going all the way back), we allowed \"C\" as a valid ICU\nlocale, that would actually be passed to ICU as a locale name. But ICU\ndidn't recognize it, and it would end up opening the root locale. So we\ncan't simply redefine \"C\" to mean \"memcmp\", because that would\npotentially break indexes.\n\nI see the following potential solutions:\n\n 1. Represent the memcmp behavior with iculocale=NULL, or some other\ncatalog hack, so that we can distinguish between a locale \"C\" upgraded\nfrom a previous version (which should pass \"C\" to ICU and get the root\nlocale), and a new collation defined with locale \"C\" (which should have\nmemcmp behavior). The catalog representation for locale information is\nalready complex, so I'm not excited about this option, but it will\nwork.\n\n 2. When provider=icu and locale=C, magically transform that into\nprovider=libc to get memcmp-like behavior for new collations but\npreserve the existing behavior for upgraded collations. Not especially\nclean, but if we issue a NOTICE perhaps that would avoid confusion.\n\n 3. Like #2, except create a new provider type \"none\" which may be\nslightly less confusing.\n\n=== 0004: make LOCALE apply to ICU for CREATE DATABASE\n\nTo understand this patch it helps to understand the confusing situation\nwith CREATE DATABASE in version 15:\n\nThe keywords LC_CTYPE and LC_COLLATE set the server environment\nLC_CTYPE/LC_COLLATE for that database and can be specified regardless\nof the provider. LOCALE can be specified along with (or instead of)\nLC_CTYPE and LC_COLLATE, in which case whichever of LC_CTYPE or\nLC_COLLATE is unspecified defaults to the setting of LOCALE. Iff the\nprovider is libc, LC_CTYPE and LC_COLLATE also act as the database\ndefault collation's locale. 
If the provider is icu, then none of\nLOCALE, LC_CTYPE, or LC_COLLATE affect the database default collation's\nlocale at all; that's controlled by ICU_LOCALE (which may be omitted if\nthe template's daticulocale is non-NULL).\n\nThe idea of patch 0004 is to address the last part, which is probably\nthe most confusing aspect. But for that to work smoothly, we need\nsomething like 0003 so that LOCALE=C gives the same semantics\nregardless of the provider.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 03 May 2023 22:35:41 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-04-21 at 20:12 -0400, Robert Haas wrote:\n> On Fri, Apr 21, 2023 at 5:56 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Most of the complaints seem to be complaints about v15 as well, and\n> > while those complaints may be a reason to not make ICU the default,\n> > they are also an argument that we should continue to learn and try\n> > to\n> > fix those issues because they exist in an already-released version.\n> > Leaving it the default for now will help us fix those issues rather\n> > than hide them.\n> > \n> > It's still early, so we have plenty of time to revert the initdb\n> > default if we need to.\n> \n> That's fair enough, but I really think it's important that some\n> energy\n> get invested in providing adequate documentation for this stuff. Just\n> patching the code is not enough.\n\nAttached a significant documentation patch.\n\nI tried to make it comprehensive without trying to be exhaustive, and I\nseparated the explanation of language tags from what collation settings\nyou can include in a language tag, so hopefully that's more clear.\n\nI added quite a few examples spread throughout the various sections,\nand I preserved the existing examples at the end. 
I also left all of\nthe external links at the bottom for those interested enough to go\nbeyond what's there.\n\nI didn't add additional documentation for ICU rules. There are so many\noptions for collations that it's hard for me to think of realistic\nexamples to specify the rules directly, unless someone wants to invent\na new language. Perhaps useful if working with an interesting text file\nformat with special treatment for delimiters?\n\nI asked the question about rules here:\n\nhttps://www.postgresql.org/message-id/e861ac4fdae9f9f5ce2a938a37bcb5e083f0f489.camel%40cybertec.at\n\nand got some limited response about addressing sort complaints. That\nsounds reasonable, but a lot of that can also be handled just by\nspecifying the right collation settings. Someone who understands the\nuse case better could add some more documentation.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Fri, 05 May 2023 17:25:18 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> === 0001: do not convert C to en-US-u-va-posix\n\n> I plan to commit this soon.\n\nSeveral buildfarm animals have failed since this went in. 
The\nonly one showing enough info to diagnose is siskin [1]:\n\n@@ -1043,16 +1043,15 @@\n ERROR: ICU locale \"nonsense-nowhere\" has unknown language \"nonsense\"\n HINT: To disable ICU locale validation, set parameter icu_validation_level to DISABLED.\n CREATE COLLATION testx (provider = icu, locale = 'C'); -- fails\n-ERROR: could not convert locale name \"C\" to language tag: U_ILLEGAL_ARGUMENT_ERROR\n+NOTICE: using standard form \"en-US-u-va-posix\" for locale \"C\"\n CREATE COLLATION testx (provider = icu, locale = '@colStrength=primary;nonsense=yes'); -- fails\n ERROR: could not convert locale name \"@colStrength=primary;nonsense=yes\" to language tag: U_ILLEGAL_ARGUMENT_ERROR\n SET icu_validation_level = WARNING;\n CREATE COLLATION testx (provider = icu, locale = '@colStrength=primary;nonsense=yes'); DROP COLLATION testx;\n WARNING: could not convert locale name \"@colStrength=primary;nonsense=yes\" to language tag: U_ILLEGAL_ARGUMENT_ERROR\n+ERROR: collation \"testx\" already exists\n CREATE COLLATION testx (provider = icu, locale = 'C'); DROP COLLATION testx;\n-WARNING: could not convert locale name \"C\" to language tag: U_ILLEGAL_ARGUMENT_ERROR\n-WARNING: ICU locale \"C\" has unknown language \"c\"\n-HINT: To disable ICU locale validation, set parameter icu_validation_level to DISABLED.\n+NOTICE: using standard form \"en-US-u-va-posix\" for locale \"C\"\n CREATE COLLATION testx (provider = icu, locale = 'nonsense-nowhere'); DROP COLLATION testx;\n WARNING: ICU locale \"nonsense-nowhere\" has unknown language \"nonsense\"\n HINT: To disable ICU locale validation, set parameter icu_validation_level to DISABLED.\n\nI suppose this is environment-dependent. 
Sadly, the buildfarm\nclient does not show the prevailing LANG or LC_XXX settings.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=siskin&dt=2023-05-08%2020%3A09%3A26\n\n\n", "msg_date": "Mon, 08 May 2023 17:47:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Mon, 2023-05-08 at 17:47 -0400, Tom Lane wrote:\n> -ERROR:  could not convert locale name \"C\" to language tag:\n> U_ILLEGAL_ARGUMENT_ERROR\n> +NOTICE:  using standard form \"en-US-u-va-posix\" for locale \"C\"\n\n...\n\n> I suppose this is environment-dependent.  Sadly, the buildfarm\n> client does not show the prevailing LANG or LC_XXX settings.\n\nLooks like it's failing-to-fail on some versions of ICU which\nautomatically perform that conversion.\n\nThe easiest thing to do is revert it for now, and after we sort out the\nmemcmp() path for the ICU provider, then I can commit it again (after\nthat point it would just be code cleanup and should have no functional\nimpact).\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 08 May 2023 14:59:41 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 2023-Apr-24, Peter Eisentraut wrote:\n\n> The GUC settings lc_collate and lc_ctype are from a time when those locale\n> settings were cluster-global. When we made those locale settings\n> per-database (PG 8.4), we kept them as read-only. 
As of PG 15, you can use\n> ICU as the per-database locale provider, so what is being attempted in the\n> above example is already meaningless before PG 16, since you need to look\n> into pg_database to find out what is really happening.\n> \n> I think we should just remove the GUC parameters lc_collate and lc_ctype.\n\nI agree with removing these in v16, since they are going to become more\nmeaningless and confusing.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 9 May 2023 10:25:53 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Tue, 2023-05-09 at 10:25 +0200, Alvaro Herrera wrote:\n> I agree with removing these in v16, since they are going to become\n> more\n> meaningless and confusing.\n\nAgreed, but it would be nice to have an alternative that does the right\nthing.\n\nIt's awkward for a user to read pg_database.datlocprovider, then\ndepending on that, either look in datcollate or daticulocale. (It's\nawkward in the code, too.)\n\nMaybe some built-in function that returns a tuple of the default\nprovider, the locale, and the version? Or should we also output the\nctype somehow (which affects the results of upper()/lower())?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 09 May 2023 08:09:29 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 09.05.23 10:25, Alvaro Herrera wrote:\n> On 2023-Apr-24, Peter Eisentraut wrote:\n> \n>> The GUC settings lc_collate and lc_ctype are from a time when those locale\n>> settings were cluster-global. When we made those locale settings\n>> per-database (PG 8.4), we kept them as read-only. 
As of PG 15, you can use\n>> ICU as the per-database locale provider, so what is being attempted in the\n>> above example is already meaningless before PG 16, since you need to look\n>> into pg_database to find out what is really happening.\n>>\n>> I think we should just remove the GUC parameters lc_collate and lc_ctype.\n> \n> I agree with removing these in v16, since they are going to become more\n> meaningless and confusing.\n\nHere is my proposed patch for this.", "msg_date": "Thu, 11 May 2023 13:07:53 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 09.05.23 17:09, Jeff Davis wrote:\n> It's awkward for a user to read pg_database.datlocprovider, then\n> depending on that, either look in datcollate or daticulocale. (It's\n> awkward in the code, too.)\n> \n> Maybe some built-in function that returns a tuple of the default\n> provider, the locale, and the version? Or should we also output the\n> ctype somehow (which affects the results of upper()/lower())?\n\nThere is also the deterministic flag and the icurules setting. \nDepending on what level of detail you imagine the user needs, you really \ndo need to look at the whole picture, not some subset of it.\n\n\n\n", "msg_date": "Thu, 11 May 2023 13:09:26 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "New patch series attached.\n\n=== 0001: fix bug that allows creating hidden collations\n\nBug:\nhttps://www.postgresql.org/message-id/051c9395cf880307865ee8b17acdbf7f838c1e39.camel@j-davis.com\n\n=== 0002: handle some kinds of libc-style locale strings\n\nICU used to handle libc locale strings like 'fr_FR@euro', but doesn't\nin later versions. 
Handle them in postgres for consistency.\n\n=== 0003: reduce icu_validation_level to WARNING\n\nGiven that we've seen some inconsistency in which locale names are\naccepted in different ICU versions, it seems best not to be too strict.\nPeter Eisentraut suggested that it be set to ERROR originally, but a\nWARNING should be sufficient to see problems without introducing risks\nmigrating to version 16.\n\nI don't expect objections to 0003, so I may commit this soon, but I'll\ngive it a little time in case someone has an opinion.\n\n=== 0004-0006: \n\nTo solve the issues that have come up in this thread, we need CREATE\nDATABASE (and createdb and initdb) to use LOCALE to mean the collation\nlocale regardless of which provider is in use (which is what 0006\ndoes).\n\n0006 depends on ICU handling libc locale names. It already does a good\njob for most libc locale names (though patch 0002 fixes a few cases\nwhere it doesn't). There may be more cases, but for the most part libc\nnames are interpreted in a reasonable way. But one important case is\nmissing: ICU does not handle the \"C\" locale as we expect (that is,\nusing memcmp()).\n\nWe've already allowed users to create ICU collations with the C locale\nin the past, which uses the root collation (not memcmp()), and we need\nto keep supporting that for upgraded clusters. So that leaves us with a\ncatalog representation problem. I mentioned upthread that we can solve\nthat by:\n\n 1. Using iculocale=NULL to mean \"C-as-in-memcmp\", or having some\nother catalog hack (like another field). That's not desirable because\nthe catalog representation is already complex and it may be hard for\nusers to tell what's happening.\n\n 2. When provider=icu and locale=C, switch to provider=libc locale=C.\nThis is very messy, because currently the syntax allows specifying a\ndatabase with LOCALE_PROVIDER='icu' ICU_LOCALE='C' LC_COLLATE='en_US' -\n- if the provider gets changed to libc, what would we set datcollate\nto? 
I don't think this is workable without some breakage. We can't\nsimply override datcollate to be C in that case, because there are some\nthings other than the default collation that might need it set to en_US\nas the user specified.\n\n 3. Introduce collation provider \"none\", which is always memcmp-based\n(patch 0004). It's equivalent to the libc locale=C, but it allows\nspecifying the LC_COLLATE and LC_CTYPE independently. A command like\nCREATE DATABASE ... LOCALE_PROVIDER='icu' ICU_LOCALE='C'\nLC_COLLATE='en_US' would get changed (with a NOTICE) to provider \"none\"\n(patch 0005), so you'd have datlocprovider=none, datcollate=en_US. For\nthe database default collation, that would always use memcmp(), but the\nserver environment LC_COLLATE would be set to en_US as the user\nspecified.\n\nFor this patch series, I chose approach #3. I think it works out nicely\n-- it provides a better place to document the \"no locale\" behavior\n(including a warning that it depends on the database encoding), and I\nthink it's more clear to the user that locale=C is not actually using a\nprovider at all. 
It's more invasive, but feels like a better solution.\nIf others don't like it I can implement approach #1 instead.\n\n=== 0007: Add a GUC to control the default collation provider\n\nHaving a GUC would make it easier to migrate to ICU without surprises.\nThis only affects the default for CREATE COLLATION, not CREATE DATABASE\n(and obviously not initdb).\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Thu, 11 May 2023 14:29:18 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "Hello Jeff,\n\n09.05.2023 00:59, Jeff Davis wrote:\n> The easiest thing to do is revert it for now, and after we sort out the\n> memcmp() path for the ICU provider, then I can commit it again (after\n> that point it would just be code cleanup and should have no functional\n> impact).\n\nOn the current master (after 455f948b0, and before f7faa9976, of course)\nI get an ASAN-detected failure with the following query:\nCREATE COLLATION col (provider = icu, locale = '123456789012');\n\n==2929883==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffc491be09c at pc 0x556e8571a260 bp 0x7\nffc491be020 sp 0x7ffc491bd7c8\nREAD of size 15 at 0x7ffc491be09c thread T0\n     #0 0x556e8571a25f in __interceptor_strcmp.part.0 (.../usr/local/pgsql/bin/postgres+0x2aa025f)\n     #1 0x556e86d77ee6 in icu_language_tag .../src/backend/utils/adt/pg_locale.c:2802\n...\nAddress 0x7ffc491be09c is located in stack of thread T0 at offset 76 in frame\n     #0 0x556e86d77cfe in icu_language_tag .../src/backend/utils/adt/pg_locale.c:2782\n\n   This frame has 2 object(s):\n     [48, 52) 'status' (line 2784)\n     [64, 76) 'lang' (line 2785) <== Memory access at offset 76 overflows this variable\n...\n\nHere, uloc_getLanguage(loc_str, lang, ULOC_LANG_CAPACITY, &status) returns\nstatus = -124, i.e.,\n     U_STRING_NOT_TERMINATED_WARNING = -124,/**< An output string could not be 
NUL-terminated because output \nlength==destCapacity. */\n(ULOC_LANG_CAPACITY = 12)\nthis value is not covered by U_FAILURE(status), and strcmp(), that follows,\ngoes out of the lang variable bounds.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 13 May 2023 13:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 11.05.23 23:29, Jeff Davis wrote:\n> New patch series attached.\n> \n> === 0001: fix bug that allows creating hidden collations\n> \n> Bug:\n> https://www.postgresql.org/message-id/051c9395cf880307865ee8b17acdbf7f838c1e39.camel@j-davis.com\n\nThis is still being debated in the other thread. Not really related to \nthis thread, so I suggest dropping it from this patch series.\n\n\n> === 0002: handle some kinds of libc-style locale strings\n> \n> ICU used to handle libc locale strings like 'fr_FR@euro', but doesn't\n> in later versions. Handle them in postgres for consistency.\n\nI tend to agree with ICU that these variants are obsolete, and we don't \nneed to support them anymore. If this were a tiny patch, then maybe ok, \nbut the way it's presented here the whole code is duplicated between \npg_locale.c and initdb.c, which is not great.\n\n\n> === 0003: reduce icu_validation_level to WARNING\n> \n> Given that we've seen some inconsistency in which locale names are\n> accepted in different ICU versions, it seems best not to be too strict.\n> Peter Eisentraut suggested that it be set to ERROR originally, but a\n> WARNING should be sufficient to see problems without introducing risks\n> migrating to version 16.\n\nI'm not sure why this is the conclusion. Presumably, the detection \ncapabilities of ICU improve over time, so we want to take advantage of \nthat? 
What are some example scenarios where this change would help?\n\n\n> === 0004-0006:\n> \n> To solve the issues that have come up in this thread, we need CREATE\n> DATABASE (and createdb and initdb) to use LOCALE to mean the collation\n> locale regardless of which provider is in use (which is what 0006\n> does).\n> \n> 0006 depends on ICU handling libc locale names. It already does a good\n> job for most libc locale names (though patch 0002 fixes a few cases\n> where it doesn't). There may be more cases, but for the most part libc\n> names are interpreted in a reasonable way. But one important case is\n> missing: ICU does not handle the \"C\" locale as we expect (that is,\n> using memcmp()).\n> \n> We've already allowed users to create ICU collations with the C locale\n> in the past, which uses the root collation (not memcmp()), and we need\n> to keep supporting that for upgraded clusters.\n\nI'm not sure I agree that we need to keep supporting that. The only way \nyou could get that in past releases is if you specify explicitly, \"give \nme provider ICU and locale C\", and then it wouldn't actually even work \ncorrectly. So nobody should be using that in practice, and nobody \nshould have stumbled into that combination of settings by accident.\n\n\n> 3. Introduce collation provider \"none\", which is always memcmp-based\n> (patch 0004). It's equivalent to the libc locale=C, but it allows\n> specifying the LC_COLLATE and LC_CTYPE independently. A command like\n> CREATE DATABASE ... LOCALE_PROVIDER='icu' ICU_LOCALE='C'\n> LC_COLLATE='en_US' would get changed (with a NOTICE) to provider \"none\"\n> (patch 0005), so you'd have datlocprovider=none, datcollate=en_US. 
For\n> the database default collation, that would always use memcmp(), but the\n> server environment LC_COLLATE would be set to en_US as the user\n> specified.\n\nThis seems most attractive, but I think it's quite invasive at this \npoint, especially given the dubious premise (see above).\n\n\n> === 0007: Add a GUC to control the default collation provider\n> \n> Having a GUC would make it easier to migrate to ICU without surprises.\n> This only affects the default for CREATE COLLATION, not CREATE DATABASE\n> (and obviously not initdb).\n\nIt's not clear to me why we would want that. Also not clear why it \nshould only affect CREATE COLLATION.\n\n\n\n", "msg_date": "Mon, 15 May 2023 12:06:36 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Sat, 2023-05-13 at 13:00 +0300, Alexander Lakhin wrote:\n> On the current master (after 455f948b0, and before f7faa9976, of\n> course)\n> I get an ASAN-detected failure with the following query:\n> CREATE COLLATION col (provider = icu, locale = '123456789012');\n> \n\nThank you for the report!\n\nICU source specifically says:\n\n /** \n * Useful constant for the maximum size of the language\n part of a locale ID. \n * (including the terminating NULL). \n * @stable ICU 2.0 \n */\n #define ULOC_LANG_CAPACITY 12\n\nSo the fact that it returning success without nul-terminating the\nresult is an ICU bug in my opinion, and I reported it here:\n\nhttps://unicode-org.atlassian.net/browse/ICU-22394\n\nUnfortunately that means we need to be a bit more paranoid and always\ncheck for that warning, even if we have a preallocated buffer of the\n\"correct\" size. 
It also means that both U_STRING_NOT_TERMINATED_WARNING\nand U_BUFFER_OVERFLOW_ERROR will be user-facing errors (potentially\nscary), unless we check for those errors each time and report specific\nerrors for them.\n\nAnother approach is to always check the length and loop using dynamic\nallocation so that we never overflow (and forget about any static\nbuffers). That seems like overkill given that the problem case is\nobviously invalid input; I think as long as we catch it and throw an\nERROR it's fine. But I can do this if others think it's worthwhile.\n\nPatch attached. It just checks for the U_STRING_NOT_TERMINATED_WARNING\nin a few places and turns it into an ERROR. It also cleans up the loop\nfor uloc_getLanguageTag() to check for U_STRING_NOT_TERMINATED_WARNING\nrather than (U_SUCCESS(status) && len >= buflen).\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Mon, 15 May 2023 14:03:23 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Mon, 2023-05-08 at 14:59 -0700, Jeff Davis wrote:\n> The easiest thing to do is revert it for now, and after we sort out\n> the\n> memcmp() path for the ICU provider, then I can commit it again (after\n> that point it would just be code cleanup and should have no\n> functional\n> impact).\n\nThe conversion won't be entirely dead code even after we handle the \"C\"\nlocale with memcmp(): for a locale like \"C.UTF-8\", it will still be\npassed to the collation provider (same as with libc), and in that case,\nwe should still convert that to a language tag consistently across ICU\nversions.\n\nFor it to be entirely dead code, we would need to convert any locale\nwith language \"C\" (e.g. \"C.UTF-8\") to use the memcmp() path. I'm fine\nwith that, but that's not what the libc provider does today, and\nperhaps we should be consistent between the two. 
If we do leave the\ncode in place, we can document that specific \"en-US-u-va-posix\" locale\nso that it's not too surprising for users.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 15 May 2023 14:16:31 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "Hi Jeff,\n\n16.05.2023 00:03, Jeff Davis wrote:\n> On Sat, 2023-05-13 at 13:00 +0300, Alexander Lakhin wrote:\n>> On the current master (after 455f948b0, and before f7faa9976, of\n>> course)\n>> I get an ASAN-detected failure with the following query:\n>> CREATE COLLATION col (provider = icu, locale = '123456789012');\n>>\n> Thank you for the report!\n>\n> ICU source specifically says:\n>\n> /**\n> * Useful constant for the maximum size of the language\n> part of a locale ID.\n> * (including the terminating NULL).\n> * @stable ICU 2.0\n> */\n> #define ULOC_LANG_CAPACITY 12\n>\n> So the fact that it returning success without nul-terminating the\n> result is an ICU bug in my opinion, and I reported it here:\n>\n> https://unicode-org.atlassian.net/browse/ICU-22394\n>\n> Unfortunately that means we need to be a bit more paranoid and always\n> check for that warning, even if we have a preallocated buffer of the\n> \"correct\" size. It also means that both U_STRING_NOT_TERMINATED_WARNING\n> and U_BUFFER_OVERFLOW_ERROR will be user-facing errors (potentially\n> scary), unless we check for those errors each time and report specific\n> errors for them.\n>\n> Another approach is to always check the length and loop using dynamic\n> allocation so that we never overflow (and forget about any static\n> buffers). That seems like overkill given that the problem case is\n> obviously invalid input; I think as long as we catch it and throw an\n> ERROR it's fine. But I can do this if others think it's worthwhile.\n>\n> Patch attached. 
It just checks for the U_STRING_NOT_TERMINATED_WARNING\n> in a few places and turns it into an ERROR. It also cleans up the loop\n> for uloc_getLanguageTag() to check for U_STRING_NOT_TERMINATED_WARNING\n> rather than (U_SUCCESS(status) && len >= buflen).\n\nI'm not sure about the proposed change in icu_from_uchar(). It seems that\nlen_result + 1 bytes should always be enough for the result string terminated\nwith NUL. If that's not true (we want to protect from some ICU bug here),\nthen the change should be backpatched?\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 16 May 2023 19:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Tue, 2023-05-16 at 19:00 +0300, Alexander Lakhin wrote:\n> I'm not sure about the proposed change in icu_from_uchar(). It seems\n> that\n> len_result + 1 bytes should always be enough for the result string\n> terminated\n> with NUL. If that's not true (we want to protect from some ICU bug\n> here),\n> then the change should be backpatched?\n\nI believe it's enough and I'm not aware of any bug there so no backport\nis required.\n\nI added checks in places that were (a) checking for U_FAILURE; and (b)\nexpecting the result to be NUL-terminated. That's mostly callers of\nuloc_getLanguage(), where I was not quite paranoid enough.\n\nThere were a couple other places though, and I went ahead and added\nchecks there out of paranoia, too. 
One was ucnv_fromUChars(), and the\nother was uloc_canonicalize().\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 16 May 2023 09:47:51 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 5/5/23 8:25 PM, Jeff Davis wrote:\r\n> On Fri, 2023-04-21 at 20:12 -0400, Robert Haas wrote:\r\n>> On Fri, Apr 21, 2023 at 5:56 PM Jeff Davis <pgsql@j-davis.com> wrote:\r\n>>> Most of the complaints seem to be complaints about v15 as well, and\r\n>>> while those complaints may be a reason to not make ICU the default,\r\n>>> they are also an argument that we should continue to learn and try\r\n>>> to\r\n>>> fix those issues because they exist in an already-released version.\r\n>>> Leaving it the default for now will help us fix those issues rather\r\n>>> than hide them.\r\n>>>\r\n>>> It's still early, so we have plenty of time to revert the initdb\r\n>>> default if we need to.\r\n>>\r\n>> That's fair enough, but I really think it's important that some\r\n>> energy\r\n>> get invested in providing adequate documentation for this stuff. Just\r\n>> patching the code is not enough.\r\n> \r\n> Attached a significant documentation patch.\r\n\r\n\r\n> I tried to make it comprehensive without trying to be exhaustive, and I\r\n> separated the explanation of language tags from what collation settings\r\n> you can include in a language tag, so hopefully that's more clear.\r\n> \r\n> I added quite a few examples spread throughout the various sections,\r\n> and I preserved the existing examples at the end. I also left all of\r\n> the external links at the bottom for those interested enough to go\r\n> beyond what's there.\r\n\r\n[Personal hat, not RMT]\r\n\r\nThanks -- this is super helpful. 
A bunch of these examples I had \r\npreviously had to figure out by randomly searching blog posts / \r\ntrial-and-error, so I think this will help developers get started more \r\nquickly.\r\n\r\nComments (and a lot are just little nits to tighten the language)\r\n\r\nCommit message -- typo: \"documentaiton\"\r\n\r\n\r\n+ If you see such a message, ensure that the \r\n<symbol>PROVIDER</symbol> and\r\n+ <symbol>LOCALE</symbol> are as you expect, and consider specifying\r\n+ directly as the canonical language tag instead of relying on the\r\n+ transformation.\r\n+ </para>\r\n\r\nI'd recommend making this more prescriptive:\r\n\r\n\"If you see this notice, ensure that the <symbol>PROVIDER</symbol> and \r\n<symbol>LOCALE</symbol> are the expected result. For consistent results \r\nwhen using the ICU provider, specify the canonical <link \r\nlinkend=\"icu-language-tag\">language tag</link> instead of relying on the \r\ntransformation.\"\r\n\r\n+ If there is some problem interpreting the locale name, or if it \r\nrepresents\r\n+ a language or region that ICU does not recognize, a message will \r\nbe reported:\r\n\r\nThis is passive voice, consider:\r\n\r\n\"If there is a problem interpreting the locale name, or if the locale \r\nname represents a language or region that ICU does not recognize, you'll \r\nsee the following error:\"\r\n\r\n\r\n+ <sect3 id=\"icu-language-tag\">\r\n+ <title>Language Tag</title>\r\n+ <para>\r\n\r\nBefore jumping in, I'd recommend a quick definition of what a language \r\ntag is, e.g.:\r\n\r\n\"A language tag, defined in BCP 47, is a standardized identifier used to \r\nidentify languages in computer systems\" or something similar.\r\n\r\n(I did find a database that made it simpler to search for these, which \r\nis an issue I've previously had, but I don't think we'd want to link to it)\r\n\r\n+ To include this additional collation information in a language tag,\r\n+ append <literal>-u</literal>, followed by one or more\r\n\r\nMy first question
was \"what's special about '-u'\", so maybe we say:\r\n\r\n\"To include this additional collation information in a language tag, \r\nappend <literal>-u</literal>, which indicates there are additional \r\ncollation settings, followed by one or more...\"\r\n\r\n+ ICU locales are specified as a <link \r\nlinkend=\"icu-language-tag\">Language\r\n+ Tag</link>, but can also accept most libc-style locale names \r\n(which will\r\n+ be transformed into language tags if possible).\r\n+ </para>\r\n\r\nI'd recommend removing the parantheticals:\r\n\r\nICU locales are specified as a BCP 47 <link \r\nlinkend=\"icu-language-tag\">Language\r\n Tag</link>, but can also accept most libc-style locale names. If \r\npossible, libc-style locale names are transformed into language tags.\r\n\r\n+ <title>ICU Collation Levels</title>\r\n\r\nNothing to add here other than to say I'm extremely appreciative of this \r\nsection. Once upon a time I sunk a lot of time trying to figure out how \r\nall of these levels worked.\r\n\r\n+ Sensitivity when determining equality, with\r\n+ <literal>level1</literal> the least sensitive and\r\n+ <literal>identic</literal> the most sensitive. See <xref\r\n+ linkend=\"icu-collation-levels\"/> for details.\r\n\r\nThis discusses equality sensitivity, but I'm not sure if I understand \r\nthat term here. The ICU docs seem to call these \"strengths\"[1], maybe we \r\nuse that term to be consistent with upstream?\r\n\r\n+ If set to <literal>upper</literal>, upper case sorts before lower\r\n+ case. If set to <literal>lower</literal>, lower case sorts before\r\n+ upper case. If set to <literal>false</literal>, it depends on the\r\n+ locale.\r\n\r\nSuggestion to tighten this up:\r\n\r\n\"If set to <literal>false</literal>, the sort depends on the rules of \r\nthe locale.\"\r\n\r\n+ Defaults may depend on locale. The above table is not meant to be\r\n+ complete. 
See <xref linkend=\"icu-external-references\"/> for additinal\r\n+ options and details.\r\n\r\nTypo: additinal => \"additional\"\r\n\r\n> I didn't add additional documentation for ICU rules. There are so many\r\n> options for collations that it's hard for me to think of realistic\r\n> examples to specify the rules directly, unless someone wants to invent\r\n> a new language. Perhaps useful if working with an interesting text file\r\n> format with special treatment for delimiters?\r\n> \r\n> I asked the question about rules here:\r\n> \r\n> https://www.postgresql.org/message-id/e861ac4fdae9f9f5ce2a938a37bcb5e083f0f489.camel%40cybertec.at\r\n> \r\n> and got some limited response about addressing sort complaints. That\r\n> sounds reasonable, but a lot of that can also be handled just by\r\n> specifying the right collation settings. Someone who understands the\r\n> use case better could add some more documentation.\r\n\r\nI'm not too sure about this one -- from my experience, users want \r\npredictability in sorts, but there are a variety of ways to get that \r\nexperience.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://unicode-org.github.io/icu/userguide/collation/concepts.html", "msg_date": "Tue, 16 May 2023 15:35:28 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Tue, 2023-05-16 at 15:35 -0400, Jonathan S. Katz wrote:\n> +          Sensitivity when determining equality, with\n> +          <literal>level1</literal> the least sensitive and\n> +          <literal>identic</literal> the most sensitive. See <xref\n> +          linkend=\"icu-collation-levels\"/> for details.\n> \n> This discusses equality sensitivity, but I'm not sure if I understand\n> that term here. 
The ICU docs seem to call these \"strengths\"[1], maybe\n> we \n> use that term to be consistent with upstream?\n\n\"Sensitivity\" comes from \"case sensitivity\" which is more clear to me\nthan \"strength\". I added the term \"strength\" to correspond to the\nUnicode terminology, but I kept sensitivity and I tried to make it\nslightly more clear.\n\nOther than that, I took your suggestions almost verbatim. Patch\nattached. Thank you!\n\nI also made a few other changes:\n\n * added paragraph about transformation of '' or 'root' to the 'und'\nlanguage (root collation)\n * added paragraph that the \"identic\" level still performs some basic\nnormalization\n * added example for when full normalization matters\n\nI should also say that I don't really understand the case when \"kc\" is\nset to true and \"ks\" is level 2 or higher. If someone has an example of\nwhere that matters, let me know.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 16 May 2023 20:23:16 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Tue, 2023-05-16 at 20:23 -0700, Jeff Davis wrote:\n> Other than that, and I took your suggestions almost verbatim. Patch\n> attached.\n\nAttached new patch with a typo fix and a few other edits. I plan to\ncommit soon.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 17 May 2023 15:59:06 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 5/17/23 6:59 PM, Jeff Davis wrote:\r\n> On Tue, 2023-05-16 at 20:23 -0700, Jeff Davis wrote:\r\n>> Other than that, and I took your suggestions almost verbatim. Patch\r\n>> attached. Thank you!\r\n>\r\n> Attached new patch with a typo fix and a few other edits. I plan to\r\n> commit soon.\r\n\r\nI did a quicker read through this time. LGTM overall.
I like what you \r\ndid with the explanations around sensitivity (now it makes sense).\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 17 May 2023 19:59:48 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Wed, 2023-05-17 at 19:59 -0400, Jonathan S. Katz wrote:\n> I did a quicker read through this time. LGTM overall. I like what you\n> did with the explanations around sensitivity (now it makes sense).\n\nCommitted, thank you.\n\nThere are a few things I don't understand that would be good to\ndocument better:\n\n* Rules. I still don't quite understand the use case: are these for\npeople inventing new languages? What is a plausible use case that isn't\ncovered by the existing locales and collation settings? Do rules make\nsense for a database default collation? Are they for language experts\nonly or might an ordinary developer benefit from using them?\n\n* The collation types \"phonebk\", \"emoji\", etc.: are these variants of\nparticular locales, or do they make sense in multiple locales? I don't\nknow where they fit in or how to document them.\n\n* I don't understand what \"kc\" means if \"ks\" is not set to \"level1\".\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 18 May 2023 10:55:49 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 5/18/23 1:55 PM, Jeff Davis wrote:\r\n> On Wed, 2023-05-17 at 19:59 -0400, Jonathan S. Katz wrote:\r\n>> I did a quicker read through this time. LGTM overall. I like what you\r\n>> did with the explanations around sensitivity (now it makes sense).\r\n> \r\n> Committed, thank you.\r\n\r\n\\o/\r\n\r\n> There are a few things I don't understand that would be good to\r\n> document better:\r\n> \r\n> * Rules. 
I still don't quite understand the use case: are these for\r\n> people inventing new languages? What is a plausible use case that isn't\r\n> covered by the existing locales and collation settings? Do rules make\r\n> sense for a database default collation? Are they for language experts\r\n> only or might an ordinary developer benefit from using them?\r\n\r\n From my read of them, as an app developer I'd be very unlikely to use \r\nthis. Maybe there is something with building out some collation rules \r\nvis-a-vis an extension, but I have trouble imagining the use-case. I may \r\nalso not be the target audience for this feature.\r\n\r\n> * The collation types \"phonebk\", \"emoji\", etc.: are these variants of\r\n> particular locales, or do they make sense in multiple locales? I don't\r\n> know where they fit in or how to document them.\r\n\r\nI remember I had an exploratory use case for \"phonebk\" but I couldn't \r\nfigure out how to get it to work. AIUI from random searching, the idea \r\nis that it provides the \"phonebook\" rules for ordering \"names\" in a \r\nparticular locale, but I couldn't get it to work.\r\n\r\n> * I don't understand what \"kc\" means if \"ks\" is not set to \"level1\".\r\n\r\nMe neither, but I haven't stared at this as hard as others.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Thu, 18 May 2023 13:58:48 -0400", "msg_from": "Jonathan S.
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 21 Apr 2023 at 22:46, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Fri, 2023-04-21 at 19:00 +0100, Andrew Gierth wrote:\n> > > > > >\n> > Also, somewhere along the line someone broke initdb --no-locale,\n> > which\n> > should result in C locale being the default everywhere, but when I\n> > just\n> > tested it it picked 'en' for an ICU locale, which is not the right\n> > thing.\n>\n> Fixed, thank you.\n\nAs I complain about in [0], since 5cd1a5af --no-locale has been broken\n/ bahiving outside it's description: Instead of being equivalent to\n`--locale=C` it now also overrides `--locale-provider=libc`, resulting\nin undocumented behaviour.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n[0] https://www.postgresql.org/message-id/CAEze2WiZFQyyb-DcKwayUmE4rY42Bo6kuK9nBjvqRHYxUYJ-DA%40mail.gmail.com\n\n\n", "msg_date": "Thu, 18 May 2023 20:11:36 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Thu, 2023-05-18 at 13:58 -0400, Jonathan S. Katz wrote:\n>  From my read of them, as an app developer I'd be very unlikely to\n> use \n> this. Maybe there is something with building out some collation rules\n> vis-a-vis an extension, but I have trouble imagining the use-case. I\n> may \n> also not be the target audience for this feature.\n\nThat's a problem for the ICU rules feature. I understand some features\nmay be for domain experts only, but we at least need to call that out\nso that ordinary developers don't get confused. 
And we should hear from\nsome of those domain experts that they actually want it and it solves a\nreal problem.\n\nFor the features that can be described with collation\nsettings/attributes right in the locale name, the use cases are more\nplausible and we've supported them since v10, so it's good to document\nthem as best we can. It's hard to expose only the particular ICU\ncollation settings we understand best (e.g. the \"ks\" setting that\nallows case insensitive collation), so it's inevitable that there will\nbe some settings that are more obscure and harder to document.\n\nBut in the case of ICU rules, they are newly-supported in 16, so there\nshould be a clear reason we're adding them. Otherwise we're just\nsetting up users for confusion or problems, and creating backwards-\ncompatibility headaches for ourselves (and the last thing we want is to\nfret over backwards compatibility for a feature with no users).\n\nBeyond that, there seems to be some danger: if the syntax for rules is\nnot perfectly compatible between ICU versions, the user might run into\nbig problems.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 18 May 2023 11:26:14 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Thu, 2023-05-18 at 20:11 +0200, Matthias van de Meent wrote:\n> As I complain about in [0], since 5cd1a5af --no-locale has been\n> broken\n> / bahiving outside it's description: Instead of being equivalent to\n> `--locale=C` it now also overrides `--locale-provider=libc`,\n> resulting\n> in undocumented behaviour.\n\nI agree that 5cd1a5af is incomplete.\n\nPosting updated patches. 
Feedback on the approaches below would be\nappreciated.\n\nFor context, in version 15:\n\n $ initdb -D data --locale-provider=icu --icu-locale=en\n => create database clocale template template0 locale='C';\n => select datname, datlocprovider, daticulocale\n from pg_database where datname='clocale';\n datname | datlocprovider | daticulocale \n ---------+----------------+--------------\n clocale | i | en\n (1 row)\n\nThat behavior is confusing, and when I made ICU the default provider in\nv16, the confusion was extended into more cases.\n\nIf we leave the CREATE DATABASE (and createdb and initdb) syntax in\nplace, such that LOCALE (and --locale) do not apply to ICU at all, then\nI don't see a path to a good ICU user experience.\n\nTherefore I conclude that we need LOCALE (and --locale) to apply to ICU\nsomehow. (The LOCALE option already applies to ICU during CREATE\nCOLLATION, just not CREATE DATABASE or initdb.)\n\nPatch 0003 does this. It's fairly straightforward and I believe we need\nthis patch.\n\nBut to actually fix your complaint we also need --no-locale to be\nequivalent to --locale=C and for those options to both use memcmp()\nsemantics. There are several approaches to accomplish this, and I think\nthis is the part where I most need some feedback. There are only so\nmany approaches, and each one has some potential downsides, but I\nbelieve we need to select one:\n\n\n(1) Give up and leave the existing CREATE DATABASE (and createdb, and\ninitdb) semantics in place, along with the confusing behavior in v15.\n\nThis is a last resort, in my opinion. It gives us no path toward a good\nuser experience with ICU, and leaves us with all of the problems of the\nOS as a collation provider.\n\n(2) Automatically change the provider to libc when locale=C.\n\nAlmost works, but it's not clear how we handle the case \"provider=icu\nlc_collate='fr_FR.utf8' locale=C\".\n\nIf we change it to \"provider=libc lc_collate=C\", we've overridden the\nspecified lc_collate. 
If we ignore the locale=C, that would be\nsurprising to users. If we throw an error, that would be a backwards\ncompatibility issue.\n\nOne possible solution would be to change the catalog representation to\nallow setting the default collation locale separately from datcollate\neven for the libc provider. For instance, rename daticulocale to\ndatdeflocale, and store the default collation locale there for both\nlibc and ICU. Then, \"provider=icu lc_collate='fr_FR.utf8' locale=C\"\ncould be changed into \"provider=libc lc_collate='fr_FR.utf8'\ndeflocale=C\". It may be confusing that datcollate is a different\nconcept from datdeflocale; but then again they are different concepts\nand it's confusing that they are currently combined into one.\n\n(3) Support iculocale=C in the ICU provider using the memcmp() path.\n\nIn other words, if provider=icu and iculocale=C, lc_collate_is_c() and\nlc_ctype_is_c() would both return true.\n\nThere's a potential problem for users who've misused ICU in the past\n(15 or earlier) by using provider=icu and iculocale=C. ICU would accept\nsuch a locale name, but not recognize it and fall back to the root\nlocale, so it never worked as the user intended it. But if we redefine\nC to be memcmp(), then such users will have broken indexes if they\nupgrade.\n\nWe could add a check at pg_upgrade time for iculocale=C in versions 15\nand earlier, and cause the check (and therefore the upgrade) to fail.\nThat may be reasonable considering that it never really worked in the\npast, and perhaps very few users actually ever created such a\ncollation.
But if some user runs into that problem, we'd have to resort\nto a hack like telling them to \"update pg_collation set iculocale='und'\nwhere iculocale='C'\" and then try the upgrade again, which is not a\ngreat answer (as far as I can tell it would be a correct answer and\nshould not break their indexes, but it feels pretty dangerous).\n\nThere may be some other resolutions to this problem, such as catalog\nhacks that allow for different representations of iculocale=C pre-16\nand post-16. That doesn't sound great though, and we'd have to figure\nout what to do with pg_dump.\n\n(4) Create a new \"none\" provider (which has no locale and always memcmp\nsemantics), and automatically change the provider to \"none\" if\nprovider=icu and iculocale=C.\n\nThis solves the problem case in #2 and the potential upgrade problem in\n#3. It also makes the documentation a bit more natural, in my opinion,\neven if we retain the special case for provider=libc collate=C.\n\n\n#4 is the approach I chose (patches 0001 and 0002), but I'd like to\nhear what others think.\n\n\nFor historical reasons, users may assume that LC_COLLATE controls the\ndefault collation order because that's true in libc. And if their\nprovider is ICU, they may be surprised that it doesn't. 
I believe we\ncould extend each of the above approaches to use LC_COLLATE as the\ndefault for ICU_LOCALE if the former is specified and the latter is\nnot, and that may make things smoother.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Thu, 18 May 2023 15:46:52 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\tJeff Davis wrote:\n\n> 2) Automatically change the provider to libc when locale=C.\n> \n> Almost works, but it's not clear how we handle the case \"provider=icu\n> lc_collate='fr_FR.utf8' locale=C\".\n> \n> If we change it to \"provider=libc lc_collate=C\", we've overridden the\n> specified lc_collate. If we ignore the locale=C, that would be\n> surprising to users. If we throw an error, that would be a backwards\n> compatibility issue.\n\nThis thread started with a report illustrating that when users mention\nthe locale \"C\", they implicitly mean \"C\" from the libc provider, as\nwhen libc was the default. 
The problem is that as soon as ICU is the\ndefault, any reference to a libc collation should mention explicitly\nthat the provider is libc.\n\nIt seems we're settled on the idea of creating an exception for \"C\"\n(and I assume also \"POSIX\") to avoid too much confusion, and because\n\"C\" is quite special anyway, and has no equivalent in ICU (the switch\nin v16 to ICU as the default provider is based on the premise that the\nlocales with the same name will behave pretty much the same with ICU\nas they did with libc, but it's absolutely not the case with \"C\").\n\nISTM that if we want to go that route, we need to make the minimum\nchanges at the user interface level and not any deeper, so that when\n(locale=\"C\" OR locale=\"POSIX\") AND the provider has not been specified,\nthen the commands (initdb and create database) act as if the user had\nspecified provider=libc.\n\n\n> (3) Support iculocale=C in the ICU provider using the memcmp() path.\n\n> In other words, if provider=icu and iculocale=C, lc_collate_is_c() and\n> lc_ctpye_is_c() would both return true.\n\nICU does not provide a locale that behaves like that, and it doesn't\nfeel right to pretend it does. It feels like attacking the problem\nat the wrong level.\n\n> (4) Create a new \"none\" provider (which has no locale and always memcmp\n> semantics), and automatically change the provider to \"none\" if\n> provider=icu and iculocale=C.\n\nIt still uses libc/C for character classification and case changing,\nso \"no locale\" is technically not true. Personally I don't see\nthe benefit of adding a \"none\" provider. C is a libc locale\nand libc is not disappearing.
I also think that when users explicitly\nindicate provider=icu, they should get icu.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 19 May 2023 21:13:47 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> Committed, thank you.\n\nThis commit has given the PDF docs build some indigestion:\n\nMaking portrait pages on A4 paper (210mmx297mm)\n/home/postgres/bin/fop -fo postgres-A4.fo -pdf postgres-A4.pdf\n[WARN] FOUserAgent - Font \"Symbol,normal,700\" not found. Substituting with \"Symbol,normal,400\".\n[WARN] FOUserAgent - Font \"ZapfDingbats,normal,700\" not found. Substituting with \"ZapfDingbats,normal,400\".\n[WARN] FOUserAgent - Hyphenation pattern not found. URI: en.\n[WARN] FOUserAgent - The contents of fo:block line 1 exceed the available area in the inline-progression direction by 3531 millipoints. (See position 55117:2388)\n[WARN] FOUserAgent - The contents of fo:block line 1 exceed the available area in the inline-progression direction by 1871 millipoints. (See position 55117:12998)\n[WARN] FOUserAgent - Glyph \"?\" (0x323, dotbelowcmb) not available in font \"Courier\".\n[WARN] FOUserAgent - Glyph \"?\" (0x302, circumflexcmb) not available in font \"Courier\".\n[WARN] FOUserAgent - The contents of fo:block line 12 exceed the available area in the inline-progression direction by 20182 millipoints. (See position 55172:188)\n[WARN] FOUserAgent - The contents of fo:block line 10 exceed the available area in the inline-progression direction by 17682 millipoints. 
(See position 55172:188)\n[WARN] FOUserAgent - Glyph \"?\" (0x142, lslash) not available in font \"Times-Roman\".\n[WARN] PropertyMaker - span=\"inherit\" on fo:block, but no explicit value found on the parent FO.\n\n(The first three and last one warnings are things we've been living\nwith, but the ones between are new.)\n\nThe first two \"exceed the available area\" complaints are in the \"ICU\nCollation Levels\" table. We can silence them by providing some column\nwidth hints to make the \"Description\" column a tad wider than the rest,\nas in the proposed patch attached. The other two, as well as the first\ntwo glyph-not-available complaints, are caused by this bit:\n\n Full normalization is important in some cases, such as when\n multiple accents are applied to a single character. For instance,\n <literal>'ệ'</literal> can be composed of code points\n <literal>U&amp;'\\0065\\0323\\0302'</literal> or\n <literal>U&amp;'\\0065\\0302\\0323'</literal>. With full normalization\n on, these code point sequences are treated as equal; otherwise they\n are unequal.\n\nwhich renders just abysmally (see attached screen shot), and even in HTML\nwhere it's rendering about as intended, it really is an unintelligible\nexample. Few people are going to understand that the circumflex and the\ndot-below are separately applied accents. 
How about we drop the explicit\nexample and write something like\n\n   Full normalization allows code point sequences such as\n   characters with multiple accent marks applied in different\n   orders to be seen as equal.\n\n?\n\n(The last missing-glyph complaint is evidently from the release notes;\nI'll bug Bruce about that separately.)\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 19 May 2023 18:57:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-05-19 at 21:13 +0200, Daniel Verite wrote:\n> ISTM that if we want to go that route, we need to make the minimum\n> changes at the user interface level and not any deeper, so that when\n> (locale=\"C\" OR locale=\"POSIX\") AND the provider has not been\n> specified,\n> then the command (initdb and create database) act as if the user had\n> specified provider=libc.\n\nIf we special case locale=C, but do nothing for locale=fr_FR, then I'm\nnot sure we've solved the problem. Andrew Gierth raised the issue here,\nwhich he called \"maximally confusing\":\n\nhttps://postgr.es/m/874jp9f5jo.fsf@news-spur.riddles.org.uk\n\nThat's why I feel that we need to make locale apply to whatever the\nprovider is, not just when it happens to be C.\n\n> > (3) Support iculocale=C in the ICU provider using the memcmp()\n> > path.\n\n> > In other words, if provider=icu and iculocale=C, lc_collate_is_c()\n> > and\n> > lc_ctype_is_c() would both return true.\n> \n> ICU does not provide a locale that behaves like that, and it doesn't\n> feel right to pretend it does. 

It feels like attacking the problem\n> at the wrong level.\n\nI agree that #3 feels slightly wrong, but I think it's still a viable\noption until we have consensus on something better.\n\n> > (4) Create a new \"none\" provider (which has no locale and always\n> > memcmp\n> > semantics), and automatically change the provider to \"none\" if\n> > provider=icu and iculocale=C.\n> \n> It still uses libc/C for character classification and case changing,\n> so \"no locale\" is technically not true.\n\nThe provider affects callers that have a pg_locale_t, such as the SQL-\ncallable lower() function. For those callers, the \"none\" provider uses\npg_ascii_tolower(), etc., not libc. That's why I called it \"none\" --\nit's using simple internal postgres implementations instead of a\nprovider.\n\nFor callers that don't have a pg_locale_t, they may call libc functions\ndirectly and rely on the server environment. But in those cases,\nthere's no way to set a provider at all, it's just relying on the\nserver environment. There aren't many of these cases, and hopefully we\ncan eliminate the reliance on the server environment over time.\n\nIf I'm missing something, let me know what cases you have in mind.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Sat, 20 May 2023 18:42:55 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 18.05.23 19:55, Jeff Davis wrote:\n> On Wed, 2023-05-17 at 19:59 -0400, Jonathan S. Katz wrote:\n>> I did a quicker read through this time. LGTM overall. I like what you\n>> did with the explanations around sensitivity (now it makes sense).\n> \n> Committed, thank you.\n> \n> There are a few things I don't understand that would be good to\n> document better:\n> \n> * Rules. I still don't quite understand the use case: are these for\n> people inventing new languages? 
What is a plausible use case that isn't\n> covered by the existing locales and collation settings? Do rules make\n> sense for a database default collation? Are they for language experts\n> only or might an ordinary developer benefit from using them?\n\nThe rules are for setting whatever sort order you like. Maybe you want\nto sort + before - or whatever. It's like, if you don't like it, build\nyour own.\n\n> * The collation types \"phonebk\", \"emoji\", etc.: are these variants of\n> particular locales, or do they make sense in multiple locales? I don't\n> know where they fit in or how to document them.\n\nThe k* settings are parametric settings, in that they transform the sort\nkey in some algorithmic way. The co settings are just everything else.\nThey are not parametric, they are just some other sort order that\nsomeone spelled out explicitly.\n\n> * I don't understand what \"kc\" means if \"ks\" is not set to \"level1\".\n\nThere is an example here:\nhttps://peter.eisentraut.org/blog/2023/05/16/overview-of-icu-collation-settings#colcaselevel\n\n\n\n", "msg_date": "Mon, 22 May 2023 14:27:10 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 18.05.23 00:59, Jeff Davis wrote:\n> On Tue, 2023-05-16 at 20:23 -0700, Jeff Davis wrote:\n>> Other than that, and I took your suggestions almost verbatim. Patch\n>> attached. Thank you!\n> \n> Attached new patch with a typo fix and a few other edits. I plan to\n> commit soon.\n\nSome small follow-up on this patch:\n\nPlease put blank lines between\n\n</sect3>\n<sect3 ...>\n\netc., matching existing style.\n\nWe usually don't capitalize the collation parameters like\n\nCREATE COLLATION mycollation1 (PROVIDER = icu, LOCALE = 'ja-JP');\n\nelsewhere in the documentation.\n\nTable 24.2. 

ICU Collation Settings should probably be sorted by key, or\nat least by something.\n\nAll tables should be referenced in the text, like \"Table x.y shows this and\nthat.\" (Note that a table could float to a different page in some\noutput formats, so just putting it into a section without some\nintroductory text isn't sound.)\n\nTable 24.1. ICU Collation Levels shows punctuation as level 4, which is\nonly true in shifted mode, which isn't the default. The whole business\nof treating variable collation elements is getting a bit lost in this\ndescription. The kv option is described as \"Classes of characters\nignored during comparison at level 3.\", which is effectively true but\nnot the whole picture.\n\n\n\n", "msg_date": "Mon, 22 May 2023 14:34:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Thu, 2023-05-11 at 13:09 +0200, Peter Eisentraut wrote:\n> There is also the deterministic flag and the icurules setting.\n> Depending on what level of detail you imagine the user needs, you\n> really\n> do need to look at the whole picture, not some subset of it.\n\n(Nit: all database default collations are deterministic.)\n\nI agree, but I think there should be a way to see the whole picture in\none command. 

If nothing else, for repro cases sent to the list, it\nwould be nice to have a single line like:\n\n  SELECT show_default_collation_whole_picture();\n\nRight now it involves some back and forth, like checking\ndatlocprovider, then looking in the right fields and ignoring the wrong\nones.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 22 May 2023 10:08:17 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Thu, 2023-05-11 at 13:07 +0200, Peter Eisentraut wrote:\n> Here is my proposed patch for this.\n\nThe commit message makes it sound like lc_collate/ctype are completely\nobsolete, and I don't think that's quite right: they still represent\nthe server environment, which does still matter in some cases.\n\nI'd just say that they are too confusing (likely to be misused), and\nbecoming obsolete (or less relevant), or something along those lines.\n\nOtherwise, this is fine with me. I didn't do a detailed review because\nit's just mechanical.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 22 May 2023 10:35:25 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Mon, 2023-05-22 at 14:27 +0200, Peter Eisentraut wrote:\n> The rules are for setting whatever sort order you like. Maybe you\n> want\n> to sort + before - or whatever. It's like, if you don't like it,\n> build\n> your own.\n\nA build-your-own feature is fine, but it's not completely zero cost.\n\nThere is some risk that rules specified for ICU version X fail to load for\nICU version Y. If that happens to your database default collation, you\nare in big trouble. The risk of failing to load a language tag in a\nlater version, especially one returned by uloc_toLanguageTag() in\nstrict mode, is much lower. 

We can reduce the risk by allowing rules\nonly for CREATE COLLATION (not CREATE DATABASE), and see what users do\nwith it first, and consider adding it to CREATE DATABASE later.\n\nWe can also try to explain in the docs that it's a build-it-yourself\nkind of feature (use it if you see a purpose, otherwise ignore it),\nthough I'm not sure quite how we should word it.\n\nAnd I'm skeptical that we don't have a single plausible end-to-end user\nstory. I just can't think of any reason someone would need something\nlike this, given how flexible the collation settings in the language\ntags are. The best case I can think of is if someone is trying to make\nan ICU collation that matches some non-ICU collation in another system,\nwhich sounds hard; but perhaps it's reasonable to do in cases where it\njust needs to work well-enough in some limited case.\n\nAlso, do we have an answer as to why specifying the rules as '' is not\nthe same as not specifying any rules[1]?\n\n[1]\nhttps://www.postgresql.org/message-id/36a6e89689716c2ca1fae8adc8e84601a041121c.camel@j-davis.com\n\n> The co settings are just everything else. \n> They are not parametric, they are just some other sort order that \n> someone spelled out explicitly.\n\nThis sounds like another case where we can't really tell the user why\nthey would want to use a specific \"co\" setting; they should only use it\nif they already know they want it. Is there some way we can word that\nin the documentation so that people don't misuse them?\n\nFor instance, one of them is called \"emoji\". I'm sure a lot of\napplications use emoji (or at least might encounter them), should they\nalways use co-emoji, or would some people who are using emoji not want\nit? Can it be combined with \"ks\" or other \"k*\" settings?\n\nWhat I'm trying to avoid is users seeing something in the documentation\nand using it without it really being a good fit for their problem. 
Then\nthey see something unexpected, and need to rebuild all of their indexes\nor something.\n\n> > * I don't understand what \"kc\" means if \"ks\" is not set to\n> > \"level1\".\n> \n> There is an example here:\n> https://peter.eisentraut.org/blog/2023/05/16/overview-of-icu-collation-settings#colcaselevel\n\nInteresting, thank you.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 22 May 2023 12:03:36 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\tJeff Davis wrote:\n\n> If we special case locale=C, but do nothing for locale=fr_FR, then I'm\n> not sure we've solved the problem. Andrew Gierth raised the issue here,\n> which he called \"maximally confusing\":\n> \n> https://postgr.es/m/874jp9f5jo.fsf@news-spur.riddles.org.uk\n> \n> That's why I feel that we need to make locale apply to whatever the\n> provider is, not just when it happens to be C.\n\nWhile I agree that the LOCALE option in CREATE DATABASE is\ncounter-intuitive, I find it questionable that blending ICU\nand libc locales into it helps that much with the user experience.\n\nTrying the latest v6-* patches applied on top of 722541ead1\n(before the pgindent run), here are a few examples where I\ndon't think it goes well.\n\nThe OS is Ubuntu 22.04 (glibc 2.35, ICU 70.1)\n\ninitdb:\n\n  Using default ICU locale \"fr\".\n  Using language tag \"fr\" for ICU locale \"fr\".\n  The database cluster will be initialized with this locale configuration:\n  provider:\t icu\n  ICU locale: fr\n  LC_COLLATE: fr_FR.UTF-8\n  LC_CTYPE:\t fr_FR.UTF-8\n  LC_MESSAGES: fr_FR.UTF-8\n  LC_MONETARY: fr_FR.UTF-8\n  LC_NUMERIC: fr_FR.UTF-8\n  LC_TIME:\t fr_FR.UTF-8\n  The default database encoding has accordingly been set to \"UTF8\".\n\n\n#1\n\npostgres=# create database test1 locale='fr_FR.UTF-8';\nNOTICE: using standard form \"fr-FR\" for ICU locale \"fr_FR.UTF-8\"\nERROR:\tnew ICU locale (fr-FR) is incompatible with 

the ICU locale of the\ntemplate database (fr)\nHINT: Use the same ICU locale as in the template database, or use template0\nas template.\n\n\nThat looks like a fairly generic case that doesn't work seamlessly.\n\n\n#2\n\npostgres=# create database test2 locale='C.UTF-8' template='template0';\nNOTICE: using standard form \"en-US-u-va-posix\" for ICU locale \"C.UTF-8\"\nCREATE DATABASE\n\n\nen-US-u-va-posix does not sort like C.UTF-8 in glibc 2.35, so\nthis interpretation is arguably not what a user would expect.\n\nI would expect the ICU warning or error (icu_validation_level) to kick\nin instead of that transliteration.\n\n\n#3\n\n$ grep french /etc/locale.alias\nfrench\t\tfr_FR.ISO-8859-1\n\npostgres=# create database test3 locale='french' template='template0'\nencoding='LATIN1';\nWARNING: ICU locale \"french\" has unknown language \"french\"\nHINT: To disable ICU locale validation, set parameter icu_validation_level\nto DISABLED.\nCREATE DATABASE\n\n\nIn practice we're probably getting the \"und\" ICU locale whereas \"fr\" would\nbe appropriate.\n\n\nI assume that we would find more cases like that if testing on many\noperating systems.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 22 May 2023 22:09:00 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Mon, 2023-05-22 at 22:09 +0200, Daniel Verite wrote:\n> While I agree that the LOCALE option in CREATE DATABASE is\n> counter-intuitive,\n\nI think it's more than that. As Andrew Gierth pointed out:\n\n  $ initdb --locale=fr_FR\n  ...\n  ICU locale: en-US\n  ...\n\nIs more than just counter-intuitive. I don't think we can ship 16 that\nway.\n\n> I find it questionable that blending ICU\n> and libc locales into it helps that much with the user experience.\n\nThank you for going through some examples here. 

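As an aside, the rewriting in example #1 (fr_FR.UTF-8 becoming fr-FR) is at heart a mapping from a libc-style locale name to a BCP 47 language tag. A toy sketch of such a mapping (illustrative only; the real conversion is ICU's uloc_toLanguageTag(), which also knows about variants and aliases that this toy version ignores — which is exactly why an alias like 'french' in example #3 goes wrong):

```python
def libc_to_language_tag(name: str) -> str:
    """Toy libc-locale-name -> BCP 47 tag conversion (illustrative only)."""
    base = name.split(".", 1)[0]   # drop the codeset: fr_FR.UTF-8 -> fr_FR
    base = base.split("@", 1)[0]   # drop any modifier: fr_FR@euro -> fr_FR
    return base.replace("_", "-")  # language_REGION -> language-REGION

print(libc_to_language_tag("fr_FR.UTF-8"))  # fr-FR
print(libc_to_language_tag("de_DE@euro"))   # de-DE
# An alias like "french" has no structure to rewrite; without a lookup
# table (or ICU's own alias data) it just passes through unchanged.
print(libc_to_language_tag("french"))       # french
```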
I agree that it's not\nperfect, but we need some path to a reasonable ICU user experience, and\nI think we'll have to accept some rough edges to avoid the worst cases,\nlike above.\n\n> initdb:\n> \n>   Using default ICU locale \"fr\".\n>   Using language tag \"fr\" for ICU locale \"fr\".\n> \n\n...\n\n> #1\n> \n> postgres=# create database test1 locale='fr_FR.UTF-8';\n> NOTICE:  using standard form \"fr-FR\" for ICU locale \"fr_FR.UTF-8\"\n> ERROR:  new ICU locale (fr-FR) is incompatible with the ICU locale of\n\nI don't see a problem here. If you specify LOCALE to CREATE DATABASE,\nyou should either be using \"TEMPLATE template0\", or you should be\nexpecting an error if the LOCALE doesn't match exactly.\n\nWhat would you like to see happen here?\n\n> #2\n> \n> postgres=# create database test2 locale='C.UTF-8'\n> template='template0';\n> NOTICE:  using standard form \"en-US-u-va-posix\" for ICU locale\n> \"C.UTF-8\"\n> CREATE DATABASE\n> \n> \n> en-US-u-va-posix does not sort like C.UTF-8 in glibc 2.35, so\n> this interpretation is arguably not what a user would expect.\n\nAs you pointed out, this is not settled in libc either:\n\nhttps://www.postgresql.org/message-id/8a3dc06f-9b9d-4ed7-9a12-2070d8b0165f%40manitou-mail.org\n\nWe really can't expect a particular order for a particular locale name,\nunless we handle it specially like \"C\" or \"POSIX\". 
If we pass it to the\nprovider, we have to trust the provider to match our conceptual\nexpectations for that locale (and ideally version it properly).\n\n> I would expect the ICU warning or error (icu_validation_level) to\n> kick\n> in instead of that transliteration.\n\nEarlier versions of ICU (<= 63) do this transformation automatically,\nand I don't see a reason to throw an error if ICU considers it valid.\nThe language tag en-US-u-va-posix will be stored in the catalog, and\nthat will be considered valid in later versions of ICU.\n\nLater versions of ICU (>= 64) consider locales with a language name of\n\"C\" to be obsolete and no longer valid. I added code to do the\ntransformation without error in these later versions, but I think we\nhave agreement to remove it.\n\nIf a user specifies the locale as \"C.UTF-8\", we can either pass it to\nICU and see whether that version accepts it or not (and if not, throw a\nwarning/error); or if we decide that \"C.UTF-8\" really means \"C\", we can\nhandle it in the memcmp() path like C and never send it to ICU.\n\n> #3\n> \n> $ grep french /etc/locale.alias\n> french          fr_FR.ISO-8859-1\n> \n> postgres=# create database test3 locale='french' template='template0'\n> encoding='LATIN1';\n> WARNING:  ICU locale \"french\" has unknown language \"french\"\n> HINT:  To disable ICU locale validation, set parameter\n> icu_validation_level\n> to DISABLED.\n> CREATE DATABASE\n> \n> \n> In practice we're probably getting the \"und\" ICU locale whereas \"fr\"\n> would\n> be appropriate.\n\nThis is a good point and illustrates that ICU is not a drop-in\nreplacement for libc in all cases.\n\nI don't see a solution here that doesn't involve some rough edges,\nthough. 
\"Locale\" is a generic term, and if we continue to insist that\nit really means a libc locale, then ICU will never be on an equal\nfooting with libc, let alone the preferred provider.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 24 May 2023 08:39:07 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 5/24/23 11:39, Jeff Davis wrote:\n> On Mon, 2023-05-22 at 22:09 +0200, Daniel Verite wrote:\n>> In practice we're probably getting the \"und\" ICU locale whereas \"fr\"\n>> would be appropriate.\n> \n> This is a good point and illustrates that ICU is not a drop-in\n> replacement for libc in all cases.\n> \n> I don't see a solution here that doesn't involve some rough edges,\n> though. \"Locale\" is a generic term, and if we continue to insist that\n> it really means a libc locale, then ICU will never be on an equal\n> footing with libc, let alone the preferred provider.\n\nHuge +1\n\nIMHO the experience should be unified to the degree possible.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 24 May 2023 12:31:42 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Mon, 2023-05-15 at 12:06 +0200, Peter Eisentraut wrote:\n> > === 0002: handle some kinds of libc-style locale strings\n> > \n> > ICU used to handle libc locale strings like 'fr_FR@euro', but\n> > doesn't\n> > in later versions. Handle them in postgres for consistency.\n> \n> I tend to agree with ICU that these variants are obsolete, and we\n> don't\n> need to support them anymore. 

If this were a tiny patch, then maybe\n> ok,\n> but the way it's presented here the whole code is duplicated between\n> pg_locale.c and initdb.c, which is not great.\n\nI dropped this patch from the series.\n\n> > === 0003: reduce icu_validation_level to WARNING\n> > \n> > Given that we've seen some inconsistency in which locale names are\n> > accepted in different ICU versions, it seems best not to be too\n> > strict.\n> > Peter Eisentraut suggested that it be set to ERROR originally, but\n> > a\n> > WARNING should be sufficient to see problems without introducing\n> > risks\n> > migrating to version 16.\n> \n> I'm not sure why this is the conclusion. Presumably, the detection\n> capabilities of ICU improve over time, so we want to take advantage\n> of\n> that? What are some example scenarios where this change would help?\n\nFirst of all, I missed this message earlier and I apologize for\nproceeding with a commit that contradicted you -- that was not\nintentional. The change is small and we can go back if needed.\n\nTo restate my reasoning: if we error by default, then changes in ICU\nversions can result in errors, which seems too strong to me. I was\nhoping to escalate the default for this setting to be \"error\" down the\nroad, but it feels like a risk to do so immediately.\n\nAnother thing to consider is that initdb also does validation, and\nthat's not affected by this GUC. Right now, initdb errors if validation\nfails.\n\n> > We've already allowed users to create ICU collations with the C\n> > locale\n> > in the past, which uses the root collation (not memcmp()), and we\n> > need\n> > to keep supporting that for upgraded clusters.\n> \n> I'm not sure I agree that we need to keep supporting that. The only\n> way\n> you could get that in past releases is if you specify explicitly,\n> \"give\n> me provider ICU and locale C\", and then it wouldn't actually even\n> work\n> correctly. 

So nobody should be using that in practice, and nobody \n> should have stumbled into that combination of settings by accident.\n\nOK, then I'm inclined toward the approach to treat iculocale=C with the\nmemcmp() semantics. Patch added.\n\nI also added a patch with a pg_upgrade check for previous versions with\niculocale=C, to make sure we don't corrupt indexes in case some user\ndid make that mistake.\n\n> >    3. Introduce collation provider \"none\", which is always \n> \n> This seems most attractive, but I think it's quite invasive at this \n> point, especially given the dubious premise (see above).\n\nI removed this from the current patch series, and perhaps we should\nreconsider it in v17.\n\n> > === 0007: Add a GUC to control the default collation provider\n> > \n> > Having a GUC would make it easier to migrate to ICU without\n> > surprises.\n> > This only affects the default for CREATE COLLATION, not CREATE\n> > DATABASE\n> > (and obviously not initdb).\n> \n> It's not clear to me why we would want that.  Also not clear why it \n> should only affect CREATE COLLATION.\n\nRight now the default for CREATE COLLATION is always libc. For CREATE\nDATABASE, it defaults to the template.\n\nI included a patch with a different approach that uses the database\ndefault collation's provider as the default for CREATE COLLATION,\nunless LC_COLLATE or LC_CTYPE is specified.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 25 May 2023 09:24:27 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Mon, 2023-05-22 at 14:34 +0200, Peter Eisentraut wrote:\n> Please put blank lines between\n> \n> </sect3>\n> <sect3 ...>\n> \n> etc., matching existing style.\n> \n> We usually don't capitalize the collation parameters like\n> \n> CREATE COLLATION mycollation1 (PROVIDER = icu, LOCALE = 'ja-JP);\n> \n> elsewhere in the documentation.\n> \n> Table 24.2. 
ICU Collation Settings should probably be sorted by key,\n> or \n> at least by something.\n> \n> All tables should referenced in the text, like \"Table x.y shows this\n> and \n> that.\"  (Note that a table could float to a different page in some \n> output formats, so just putting it into a section without some \n> introductory text isn't sound.)\n\nThank you, done.\n\n> Table 24.1. ICU Collation Levels shows punctuation as level 4, which\n> is \n> only true in shifted mode, which isn't the default.  The whole\n> business \n> of treating variable collation elements is getting a bit lost in this\n> description.  The kv option is described as \"Classes of characters \n> ignored during comparison at level 3.\", which is effectively true but\n> not the whole picture.\n\nI organized the documentation around practical examples and available\noptions, and less around the conceptual model. I think that's a good\nstart, but you're right that it over-simplifies in a few areas.\n\nDiscussing the model would work better along with an explanation of ICU\nrules, where you can make better use of those concepts. I feel like\nthere are some interesting things that can be done with rules, but I\nhaven't had a chance to really dig in yet.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 25 May 2023 17:23:53 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "Jeff Davis wrote:\n\n> > #1\n> > \n> > postgres=# create database test1 locale='fr_FR.UTF-8';\n> > NOTICE: using standard form \"fr-FR\" for ICU locale \"fr_FR.UTF-8\"\n> > ERROR: new ICU locale (fr-FR) is incompatible with the ICU locale of\n> \n> I don't see a problem here. 
If you specify LOCALE to CREATE DATABASE,\n> you should either be using \"TEMPLATE template0\", or you should be\n> expecting an error if the LOCALE doesn't match exactly.\n> \n> What would you like to see happen here?\n\nWhat's odd is that initdb starting in an fr_FR.UTF-8 environment\nfound that \"fr\" was the default ICU locale to use, whereas\n\"create database\" reports that \"fr\" and \"fr_FR.UTF-8\" refer to\nincompatible locales.\n\nTo me initdb is wrong when coming up with the less precise \"fr\"\ninstead of \"fr-FR\".\n\nI suggest the attached patch to call uloc_getDefault() instead of the\ncurrent code that somehow leaves out the country/region component.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite", "msg_date": "Fri, 26 May 2023 18:24:35 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-05-26 at 18:24 +0200, Daniel Verite wrote:\n> To me initdb is wrong when coming up with the less precise \"fr\"\n> instead of \"fr-FR\".\n> \n> I suggest the attached patch to call uloc_getDefault() instead of the\n> current code that somehow leaves out the country/region component.\n\nThank you. I experimented with several ICU versions and different\nenvironmental settings, and it does seem better at preserving the name\nthat comes from the environment.\n\nThere is a warning in the docs: \"Do not use unless you know what you\nare doing.\"[1] I don't see a reason for the warning or any major risk\nfrom us using it. Perhaps it's because the result is affected by either\nthe environment or the last uloc_setDefault() call. We don't use\nuloc_setDefault(), and we only call uloc_getDefault() once, so I don't\nsee a risk here.\n\nThe fix seems simple enough. 
Committed.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/uloc_8h.html#a4efa16db7351e62293f8ef0c37aac8d2\n\n\n", "msg_date": "Fri, 26 May 2023 11:47:02 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "New patch series attached. I plan to commit 0001 and 0002 soon, unless\nthere are objections.\n\n0001 causes the \"C\" and \"POSIX\" locales to be treated with\nmemcmp/pg_ascii semantics in ICU, just like in libc. We also considered\na new \"none\" provider, but it's more invasive, and we can always\nreconsider that in the v17 cycle.\n\n0002 introduces an upgrade check for users who have explicitly\nrequested provider=icu and iculocale=C on older versions, and rejects\nupgrading from v15 in that case to avoid index corruption. Having such\na collation is almost certainly a mistake by the user, because the\ncollator would not give the expected memcmp semantics.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Mon, 05 Jun 2023 10:54:58 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\tJeff Davis wrote:\n\n> New patch series attached. I plan to commit 0001 and 0002 soon, unless\n> there are objections.\n> \n> 0001 causes the \"C\" and \"POSIX\" locales to be treated with\n> memcmp/pg_ascii semantics in ICU, just like in libc. 
We also\n> considered a new \"none\" provider, but it's more invasive, and we can\n> always reconsider that in the v17 cycle.\n\nFWIW I don't quite see how 0001 improve things or what problem it's\ntrying to solve.\n\n0001 creates exceptions throughout the code so that when an ICU\ncollation has a locale name \"C\" or \"POSIX\" then it does not behave\nlike an ICU collation, even though pg_collation.collprovider='i'\nTo me it's neither desirable nor necessary that a collation that\nhas collprovider='i' is diverted to non-ICU semantics.\n\nAlso in the current state, this diversion does not apply to initdb.\n\n\"initdb --icu-locale=C\" with 0001 applied reports this:\n\n Using language tag \"en-US-u-va-posix\" for ICU locale \"C\".\n The database cluster will be initialized with this locale configuration:\n provider:\t icu\n ICU locale: en-US-u-va-posix\n LC_COLLATE: fr_FR.UTF-8\n [...]\n\nand \"initdb --locale=C\" reports this:\n\n Using default ICU locale \"fr_FR\".\n Using language tag \"fr-FR\" for ICU locale \"fr_FR\".\n The database cluster will be initialized with this locale configuration:\n provider:\t icu\n ICU locale: fr-FR\n LC_COLLATE: C\n [...]\n\nCould you elaborate a bit more on what 0001 is meant to achieve, from\nthe point of view of the user?\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Tue, 06 Jun 2023 15:09:59 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 6/6/23 09:09, Daniel Verite wrote:\n> \tJeff Davis wrote:\n>> New patch series attached. I plan to commit 0001 and 0002 soon, unless\n>> there are objections.\n>> \n>> 0001 causes the \"C\" and \"POSIX\" locales to be treated with\n>> memcmp/pg_ascii semantics in ICU, just like in libc. 
We also\n>> considered a new \"none\" provider, but it's more invasive, and we can\n>> always reconsider that in the v17 cycle.\n\n> 0001 creates exceptions throughout the code so that when an ICU\n> collation has a locale name \"C\" or \"POSIX\" then it does not behave\n> like an ICU collation, even though pg_collation.collprovider='i'\n> To me it's neither desirable nor necessary that a collation that\n> has collprovider='i' is diverted to non-ICU semantics.\n\nThis discussion makes me wonder (though probably too late for the v16 \ncycle) if we shouldn't treat \"C\" and \"POSIX\" locales to be a third \nprovider, something like \"internal\".\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 6 Jun 2023 14:11:18 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Tue, 2023-06-06 at 14:11 -0400, Joe Conway wrote:\n> This discussion makes me wonder (though probably too late for the v16\n> cycle) if we shouldn't treat \"C\" and \"POSIX\" locales to be a third \n> provider, something like \"internal\".\n\nThat's exactly what I did in v6 of this series: I created a \"none\"\nprovider, and when someone specified provider=icu iculocale=C, it would\nchange the provider to \"none\":\n\nhttps://www.postgresql.org/message-id/5f9bf4a0b040428c5db2dc1f23cc3ad96acb5672.camel%40j-davis.com\n\nI'm fine with either approach.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 06 Jun 2023 12:15:39 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 6/6/23 15:15, Jeff Davis wrote:\n> On Tue, 2023-06-06 at 14:11 -0400, Joe Conway wrote:\n>> This discussion makes me wonder (though probably too late for the v16\n>> cycle) if we shouldn't treat \"C\" and 
\"POSIX\" locales to be a third \n>> provider, something like \"internal\".\n> \n> That's exactly what I did in v6 of this series: I created a \"none\"\n> provider, and when someone specified provider=icu iculocale=C, it would\n> change the provider to \"none\":\n> \n> https://www.postgresql.org/message-id/5f9bf4a0b040428c5db2dc1f23cc3ad96acb5672.camel%40j-davis.com\n> \n> I'm fine with either approach.\n\nHa!\n\nWell it seems like I am +1 on that then ;-)\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 6 Jun 2023 15:18:02 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Tue, 2023-06-06 at 15:09 +0200, Daniel Verite wrote:\n> FWIW I don't quite see how 0001 improve things or what problem it's\n> trying to solve.\n\nThe word \"locale\" is generic, so we need to make LOCALE/--locale apply\nto whatever provider is being used. If \"locale\" only applies to libc,\nusing ICU will always be confusing and never be on the same level as\nlibc, let alone the preferred provider.\n\nThe locale \"C\" is a special case, documented as a non-locale. 
So, if\nLOCALE/--locale apply to ICU, then either ICU needs to handle locale\n\"C\" in the expected way (v8 patch series); or when we see locale \"C\" we\nneed to somehow change the provider into something that can handle it\n(v6 patch series changes it to the \"none\" provider).\n\nPlease let me know if you disagree with the goal or the reasoning here.\nIf so, please explain where you think we should end up, because the\nstatus quo does not seem great to me.\n\n> 0001 creates exceptions throughout the code so that when an ICU\n> collation has a locale name \"C\" or \"POSIX\" then it does not behave\n> like an ICU collation, even though pg_collation.collprovider='i'\n> To me it's neither desirable nor necessary that a collation that\n> has collprovider='i' is diverted to non-ICU semantics.\n\nIt's not very principled, but it matches what libc does.\n\n> Also in the current state, this diversion does not apply to initdb.\n> \n> \"initdb --icu-locale=C\" with 0001 applied reports this:\n> \n>    Using language tag \"en-US-u-va-posix\" for ICU locale \"C\".\n\nThank you. 
I fixed it by skipping the canonicalization for C/POSIX\nlocales in initdb.\n\n> Could you elaborate a bit more on what 0001 is meant to achieve, from\n> the point of view of the user?\n\nIt makes it so the user consistently (regardless of the provider) gets\nthe \"no locale\" behavior (as documented and historically expected) when\nthey specify the C or POSIX locales.\n\nThen that enables us to change LOCALE/--locale to apply to ICU, which\nmeans that a simple command like \"initdb --locale=en_US\" does a\nsensible thing regardless of the default provider.\n\nI understand you are skeptical of trying to apply an arbitrary locale\nname to ICU, but if they don't specify the provider, what do you expect\nto happen?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Tue, 06 Jun 2023 12:18:20 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 6/6/23 15:18, Jeff Davis wrote:\n> On Tue, 2023-06-06 at 15:09 +0200, Daniel Verite wrote:\n>> FWIW I don't quite see how 0001 improve things or what problem it's\n>> trying to solve.\n> \n> The word \"locale\" is generic, so we need to make LOCALE/--locale apply\n> to whatever provider is being used. If \"locale\" only applies to libc,\n> using ICU will always be confusing and never be on the same level as\n> libc, let alone the preferred provider.\n\n\nAgree 100%\n\n> The locale \"C\" is a special case, documented as a non-locale. 
So, if\n> LOCALE/--locale apply to ICU, then either ICU needs to handle locale\n> \"C\" in the expected way (v8 patch series); or when we see locale \"C\" we\n> need to somehow change the provider into something that can handle it\n> (v6 patch series changes it to the \"none\" provider).\n\n+1 to the latter approach\n\n> Please let me know if you disagree with the goal or the reasoning here.\n> If so, please explain where you think we should end up, because the\n> status quo does not seem great to me.\n\nalso +1\n\n>> 0001 creates exceptions throughout the code so that when an ICU\n>> collation has a locale name \"C\" or \"POSIX\" then it does not behave\n>> like an ICU collation, even though pg_collation.collprovider='i'\n>> To me it's neither desirable nor necessary that a collation that\n>> has collprovider='i' is diverted to non-ICU semantics.\n> \n> It's not very principled, but it matches what libc does.\n\nMakes sense to me\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 6 Jun 2023 15:21:25 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 6/6/23 15:18, Jeff Davis wrote:\n>> The locale \"C\" is a special case, documented as a non-locale. 
So, if\n>> LOCALE/--locale apply to ICU, then either ICU needs to handle locale\n>> \"C\" in the expected way (v8 patch series); or when we see locale \"C\" we\n>> need to somehow change the provider into something that can handle it\n>> (v6 patch series changes it to the \"none\" provider).\n\n> +1 to the latter approach\n\nAlso +1, except that I find \"none\" a rather confusing choice of name.\nThere *is* a provider, it's just PG itself not either libc or ICU.\nI thought Joe's suggestion of \"internal\" made more sense.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Jun 2023 15:25:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Tue, Jun 6, 2023 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > On 6/6/23 15:18, Jeff Davis wrote:\n> >> The locale \"C\" is a special case, documented as a non-locale. So, if\n> >> LOCALE/--locale apply to ICU, then either ICU needs to handle locale\n> >> \"C\" in the expected way (v8 patch series); or when we see locale \"C\" we\n> >> need to somehow change the provider into something that can handle it\n> >> (v6 patch series changes it to the \"none\" provider).\n>\n> > +1 to the latter approach\n>\n> Also +1, except that I find \"none\" a rather confusing choice of name.\n> There *is* a provider, it's just PG itself not either libc or ICU.\n> I thought Joe's suggestion of \"internal\" made more sense.\n\nOr perhaps \"builtin\" or \"postgresql\".\n\nI'm just thinking that \"internal\" as a type name kind of means \"you\nshouldn't be touching this from SQL\" and we don't want to give people\nthe idea that the \"C\" locale isn't something you should use.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Jun 2023 15:37:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Order changes 
in PG16 since ICU introduction" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jun 6, 2023 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Also +1, except that I find \"none\" a rather confusing choice of name.\n>> There *is* a provider, it's just PG itself not either libc or ICU.\n>> I thought Joe's suggestion of \"internal\" made more sense.\n\n> Or perhaps \"builtin\" or \"postgresql\".\n\nEither OK by me\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Jun 2023 15:55:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 6/6/23 15:55, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Tue, Jun 6, 2023 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Also +1, except that I find \"none\" a rather confusing choice of name.\n>>> There *is* a provider, it's just PG itself not either libc or ICU.\n>>> I thought Joe's suggestion of \"internal\" made more sense.\n> \n>> Or perhaps \"builtin\" or \"postgresql\".\n> \n> Either OK by me\n\nSame here\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 6 Jun 2023 15:56:32 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 6/6/23 3:56 PM, Joe Conway wrote:\r\n> On 6/6/23 15:55, Tom Lane wrote:\r\n>> Robert Haas <robertmhaas@gmail.com> writes:\r\n>>> On Tue, Jun 6, 2023 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>>>> Also +1, except that I find \"none\" a rather confusing choice of name.\r\n>>>> There *is* a provider, it's just PG itself not either libc or ICU.\r\n>>>> I thought Joe's suggestion of \"internal\" made more sense.\r\n>>\r\n>>> Or perhaps \"builtin\" or \"postgresql\".\r\n>>\r\n>> Either OK by me\r\n> \r\n> Same 
here\r\n\r\nSince we're bikeshedding, \"postgresql\" or \"builtin\" could make it seem \r\nto a (app) developer that these may be recommended options, as we're \r\ntrusting PostgreSQL to make the best choices for us. Granted, v16 is \r\n(theoretically) defaulting to ICU, so that choice is made, but the \r\nunsuspecting developer could make a switch based on that naming.\r\n\r\nHowever, I don't have a strong alternative -- I understand the concern \r\nabout \"internal\", so I'd be OK with \"postgresql\" unless a better name \r\nappears.\r\n\r\nJonathan", "msg_date": "Tue, 6 Jun 2023 16:00:55 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Since we're bikeshedding, \"postgresql\" or \"builtin\" could make it seem \n> to a (app) developer that these may be recommended options, as we're \n> trusting PostgreSQL to make the best choices for us. 
Granted, v16 is \n> (theoretically) defaulting to ICU, so that choice is made, but the \n> unsuspecting developer could make a switch based on that naming.\n\nI don't think this is a problem, as long as any locale name other\nthan C/POSIX fails when combined with that provider name.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Jun 2023 16:20:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": ">>>>> \"Joe\" == Joe Conway <mail@joeconway.com> writes:\n\n > On 6/6/23 15:55, Tom Lane wrote:\n >> Robert Haas <robertmhaas@gmail.com> writes:\n >>> On Tue, Jun 6, 2023 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n >>>> Also +1, except that I find \"none\" a rather confusing choice of name.\n >>>> There *is* a provider, it's just PG itself not either libc or ICU.\n >>>> I thought Joe's suggestion of \"internal\" made more sense.\n >> \n >>> Or perhaps \"builtin\" or \"postgresql\".\n >> Either OK by me\n\n Joe> Same here\n\nI like either \"internal\" or \"builtin\" because they correctly identify\nthat no external resources are used. I'm not keen on \"postgresql\".\n\n-- \nAndrew.\n\n\n", "msg_date": "Tue, 06 Jun 2023 21:37:26 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 22.05.23 19:35, Jeff Davis wrote:\n> On Thu, 2023-05-11 at 13:07 +0200, Peter Eisentraut wrote:\n>> Here is my proposed patch for this.\n> \n> The commit message makes it sound like lc_collate/ctype are completely\n> obsolete, and I don't think that's quite right: they still represent\n> the server environment, which does still matter in some cases.\n> \n> I'd just say that they are too confusing (likely to be misused), and\n> becoming obsolete (or less relevant), or something along those lines.\n> \n> Otherwise, this is fine with me. 
I didn't do a detailed review because\n> it's just mechanical.\n\nI have committed this with some tuning of the commit message.\n\n\n\n", "msg_date": "Wed, 7 Jun 2023 17:03:26 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\tJeff Davis wrote:\n\n> The locale \"C\" is a special case, documented as a non-locale. So, if\n> LOCALE/--locale apply to ICU, then either ICU needs to handle locale\n> \"C\" in the expected way (v8 patch series); or when we see locale \"C\" we\n> need to somehow change the provider into something that can handle it\n> (v6 patch series changes it to the \"none\" provider).\n\nYes it's a special case but when doing initdb --locale=C, a user does\nnot need or want an ICU locale. 
of trying to apply an arbitrary locale\n> name to ICU, but if they don't specify the provider, what do you expect\n> to happen?\n\nIt's a hard question because it depends on what people have in their \nlocale environment combined with what they try to do.\nI think that initdb without any locale option should work well in\nthe majority of environments, but specifying a locale alone will not work\nwell in a number of cases, so users might end up concluding that they\nneed to specify not only the provider but lc_collate/lc_ctype.\n\n\n[1]\nhttps://www.postgresql.org/message-id/360c90b9-7c20-4cec-aade-38e6e3351c05@manitou-mail.org\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Wed, 07 Jun 2023 23:50:57 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 05.06.23 19:54, Jeff Davis wrote:\n> \n> New patch series attached. I plan to commit 0001 and 0002 soon, unless\n> there are objections.\n> \n> 0001 causes the \"C\" and \"POSIX\" locales to be treated with\n> memcmp/pg_ascii semantics in ICU, just like in libc. We also considered\n> a new \"none\" provider, but it's more invasive, and we can always\n> reconsider that in the v17 cycle.\n> \n> 0002 introduces an upgrade check for users who have explicitly\n> requested provider=icu and iculocale=C on older versions, and rejects\n> upgrading from v15 in that case to avoid index corruption. Having such\n> a collation is almost certainly a mistake by the user, because the\n> collator would not give the expected memcmp semantics.\n\nI'm dubious about these two.\n\n0003 seems like the correct direction. 
In createdb.c, the change you \nadd makes sense, but you should also remove the existing use of the \nlocale variable:\n\n- if (locale)\n- {\n- if (!lc_ctype)\n- lc_ctype = locale;\n- if (!lc_collate)\n- lc_collate = locale;\n- }\n-\n\n\n\n", "msg_date": "Wed, 7 Jun 2023 23:59:59 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 05.06.23 19:54, Jeff Davis wrote:\n> New patch series attached.\n\nCould you clarify what here is intended for 16 and what is for later? \nThis patch set keeps expanding and changing in each iteration.\n\nThere is a PG16 open item linked to this thread\n\n* The rules for choosing default ICU locale seem pretty unfriendly\n\nwhich I think would be addressed by an appropriately fixed up variant of \nyour patch 0003.\n\n(Or if not, what is the actual issue?)\n\nEverything else appears to be either new feature work or fixing \npre-existing prehavior, so is not in scope for PG16 and should be dealt \nwith elsewhere, so we can focus here on closing out this release.\n\n\n\n", "msg_date": "Thu, 8 Jun 2023 00:11:24 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Wed, 2023-06-07 at 23:50 +0200, Daniel Verite wrote:\n> The simplest way to obtain that in v16 is to teach initdb that\n> --locale=C without the --locale-provider option implies that\n> --locale-provider=libc ([1])\n\nAs I replied in that subthread, that creates a worse problem: if you\nonly change the provider when the locale is C, then what about when the\nlocale is *not* C?\n\n export LANG=en_US.UTF-8\n initdb -D data --locale=fr_FR.UTF-8\n ...\n provider: icu\n ICU locale: en-US\n\nI believe that case is an order of magnitude worse than the other cases\nyou brought up in that subthread.\n\nIt also leaves the fundamental problem in place 
that LOCALE only\napplies to the libc provider, which multiple people have agreed is not\nacceptable.\n\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 07 Jun 2023 16:11:44 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Thu, 2023-06-08 at 00:11 +0200, Peter Eisentraut wrote:\n> On 05.06.23 19:54, Jeff Davis wrote:\n> > New patch series attached.\n> \n> Could you clarify what here is intended for 16 and what is for later?\n\nI apologize about the patch churn here. I implemented several\napproaches to see what feedback I get, and now it looks like we're\nreturning to a previous idea (the \"builtin\" provider).\n\nIn v16:\n\n1. We need LOCALE to apply to all providers.\n\n2. We need LOCALE=C to give the memcmp/pg_ascii behavior in all cases\n(unless overridden by a separate LC_COLLATE or LC_CTYPE parameter).\n\nThose are the biggest problems raised in this thread, and the patches\nto accomplish those things are in scope for v16.\n\nAfter we sort those out, there are two loose ends:\n\n* What do we do in the case where the environment has LANG=C.UTF-8 (as\nsome buildfarm members do)? 
Is an error acceptable in that case?\n\n* Do we move icu_validation_level back to ERROR?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 07 Jun 2023 16:26:12 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "Hi,\n\n> On Wed, 2023-06-07 at 23:50 +0200, Daniel Verite wrote:\n>> The simplest way to obtain that in v16 is to teach initdb that\n>> --locale=C without the --locale-provider option implies that\n>> --locale-provider=libc ([1])\n> \n> As I replied in that subthread, that creates a worse problem: if you\n> only change the provider when the locale is C, then what about when the\n> locale is *not* C?\n> \n> export LANG=en_US.UTF-8\n> initdb -D data --locale=fr_FR.UTF-8\n> ...\n> provider: icu\n> ICU locale: en-US\n> \n> I believe that case is an order of magnitude worse than the other cases\n> you brought up in that subthread.\n> \n> It also leaves the fundamental problem in place that LOCALE only\n> applies to the libc provider, which multiple people have agreed is not\n> acceptable.\n\nDaniels comment:\n>> Yes it's a special case but when doing initdb --locale=C, a user does\n>> not need or want an ICU locale. 
They want the same thing than what v15\n>> does with the same arguments: a template0 database with\n>> datlocprovider='c', datcollate='C', datctype='C', dateiculocale=NULL.\n\nSo in this case the only way to keep the same behavior in v16 with \"initdb\n--locale=C\" (--no-locale) in v15 is, bulding PostgreSQL --without-icu?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 08 Jun 2023 09:30:33 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 6/7/23 19:26, Jeff Davis wrote:\n> * What do we do in the case where the environment has LANG=C.UTF-8 (as\n> some buildfarm members do)? Is an error acceptable in that case?\n\n\nIf I understand the discussion so far correctly, I think that case \nshould fall to the provider.\n\nIf it supports \"C.UTF-8\" explicitly as some distributions do, then use it.\n\nIf the provider has no such thing, throw an error.\n\nSomewhere we should document that \"C.UTF-8\" from the provider might not \nbe as stable or working as they expect.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 7 Jun 2023 20:52:46 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": ">> As I replied in that subthread, that creates a worse problem: if you\n>> only change the provider when the locale is C, then what about when the\n>> locale is *not* C?\n>> \n>> export LANG=en_US.UTF-8\n>> initdb -D data --locale=fr_FR.UTF-8\n>> ...\n>> provider: icu\n>> ICU locale: en-US\n>> \n>> I believe that case is an order of magnitude worse than the other cases\n>> you brought up in that subthread.\n>> \n>> It also leaves the fundamental problem 
in place that LOCALE only\n>> applies to the libc provider, which multiple people have agreed is not\n>> acceptable.\n\nNote that most of PostgreSQL users in Japan do initdb\n--no-locale. Almost never use other than C locale because the users do\nnot rely on system collation. Most database have an extra column which\nrepresents the pronunciation in Hiragana or Katakana.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 08 Jun 2023 10:45:35 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\tTatsuo Ishii wrote:\n\n> >> Yes it's a special case but when doing initdb --locale=C, a user does\n> >> not need or want an ICU locale. They want the same thing than what v15\n> >> does with the same arguments: a template0 database with\n> >> datlocprovider='c', datcollate='C', datctype='C', dateiculocale=NULL.\n> \n> So in this case the only way to keep the same behavior in v16 with \"initdb\n> --locale=C\" (--no-locale) in v15 is, bulding PostgreSQL --without-icu?\n\nAFAIK the --no-locale case in v16 is fixed since:\n\ncommit 5cd1a5af4d17496a58678c8eb7ab792119c2d723\nAuthor: Jeff Davis <jdavis@postgresql.org>\nDate:\tFri Apr 21 13:11:18 2023 -0700\n\n Fix initdb --no-locale.\n\n Discussion: https://postgr.es/m/878relf7cb.fsf@news-spur.riddles.org.uk\n Reported-by: Andrew Gierth\n\n\nThe --locale=C case is still being discussed. 
To me it should\nproduce the same result than --no-locale and --locale=C in v15, that is,\n\"ICU is the default\" does not apply to that case, but currently\nit initializes the cluster with an ICU locale.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Thu, 08 Jun 2023 11:03:25 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\tJeff Davis wrote:\n\n> As I replied in that subthread, that creates a worse problem: if you\n> only change the provider when the locale is C, then what about when the\n> locale is *not* C?\n> \n> export LANG=en_US.UTF-8\n> initdb -D data --locale=fr_FR.UTF-8\n> ...\n> provider: icu\n> ICU locale: en-US\n\nWhat you're proposing with the 0003 patch still applies.\n\nIn the above case I think we would end up with:\n\nprovider=icu\nICU locale=fr-FR\nlc_collate=fr_FR.UTF-8\nlc_lctype=fr_FR.UTF-8\n\nwhich is reasonable.\n\n\nIn the following cases we would initialize a libc cluster instead of an\nICU cluster:\n\n- initdb --locale=C\n- initdb --locale=POSIX\n- LANG=C initdb\n- LANG=C.UTF-8 initdb\n- LANG=POSIX initdb\n- ... possibly other locales that we find are unsuitable for ICU\n\nThat is, the rule \"ICU by default\" really means \"ICU unless the locale\nthat we're being passed or getting from the environment\nhas semantics that ICU does not provide but we know libc provides,\nin which case we fall back to libc\".\n\nThe user who wants ICU imperatively should invoke\n--icu-locale=something or --locale=something --locale-provider=icu\nin which case we should not fallback to libc.\nWe still have to determine lc_collate and lc_ctype either from the\nenvironment or from the locale argument (I think we should\nfavor the environment), except if the user specifies\n--lc-collate=... 
lc-ctype=...\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Thu, 08 Jun 2023 12:08:12 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Wed, 2023-06-07 at 20:52 -0400, Joe Conway wrote:\n> If the provider has no such thing, throw an error.\n\nJust to be clear, that implies that users (and buildfarm members) with\nLANG=C.UTF-8 in their environment would not be able to run a plain\n\"initdb -D data\"; they'd get an error. It's hard for me to estimate how\nmany users might be inconvenienced by that, but it sounds like a risk.\n\nPerhaps for this specific case, and only in initdb, we change\nC.anything and POSIX.anything to the builtin provider? CREATE DATABASE\nand CREATE COLLATION could still reject such locales.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 08 Jun 2023 14:15:39 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 6/8/23 17:15, Jeff Davis wrote:\n> On Wed, 2023-06-07 at 20:52 -0400, Joe Conway wrote:\n>> If the provider has no such thing, throw an error.\n> \n> Just to be clear, that implies that users (and buildfarm members) with\n> LANG=C.UTF-8 in their environment would not be able to run a plain\n> \"initdb -D data\"; they'd get an error. 
It's hard for me to estimate how\n> many users might be inconvenienced by that, but it sounds like a risk.\n\nWell, but only if their libc provider does not have C.UTF-8, correct?\n\nI see\n----------------\nLinux Mint 21.1:\t/usr/lib/locale/C.utf8\nRHEL 8:\t\t\t/usr/lib/locale/C.utf8\nRHEL 9:\t\t\t/usr/lib/locale/C.utf8\nAL2:\t\t\t/usr/lib/locale/C.utf8\n\nHowever I do not see it on RHEL 7 :-(\n\n> Perhaps for this specific case, and only in initdb, we change\n> C.anything and POSIX.anything to the builtin provider?\n\nMight be best, with some kind of warning perhaps?\n\n> CREATE DATABASE and CREATE COLLATION could still reject such\n> locales.\n\nSeems to make sense.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 8 Jun 2023 17:59:43 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Tue, 2023-06-06 at 21:37 +0100, Andrew Gierth wrote:\n> > > > > \n> I like either \"internal\" or \"builtin\" because they correctly identify\n> that no external resources are used. I'm not keen on \"postgresql\".\n\n\"builtin\" seems to be the winner. New patch series attached with doc\nand test updates.\n\nThis has been a long discussion (it's a messy problem), but I think\nI've addressed the most important concerns raised. If you disagree with\nsomething, please indicate whether it's an objection, or a more minor\ndifference of opinion that I can weigh against other opinions. Also\nplease indicate if you think something is out of scope for 16.\n\nPatches 0001, 0002:\n\nThese patches implement the built-in provider and automatically change\nprovider=icu to provider=builtin when the locale is C. 
Other approaches\nwere considered:\n * Pretend that ICU can support the C locale, and use similar checks\nthroughout the code like the libc provider does: This was somewhat of a\nhack, and had potential issues with upgraded clusters, and several\npeople seemed to reject it.\n * Switch to the libc provider for the C locale: would make the libc\nprovider even more complicated and had some potential for confusion,\nand also has catalog representation problems when --locale is specified\nalong with --lc-ctype.\n\nUltimately we need to choose one approach, and the built-in provider\nseems the nicest (though most invasive). It reflects the reality that\nwe don't actually use libc or icu for the C locale, and it's nicer to\ndocument. The builtin provider seemed to get the most support.\n\n\nPatch 0003:\n\nMakes LOCALE apply to all providers. The overall feel after this patch\nis that \"locale\" now means the collation locale, and\nLC_COLLATE/LC_CTYPE are for the server environment. When using libc,\nLC_COLLATE and LC_CTYPE still work as they did before, but their\nrelationship to database collation feels more like a special case of\nthe libc provider. I believe most people favor this patch and I haven't\nseen recent objections.\n\n\nI didn't find any surprising behaviors, but there are a few that I'd\nlike to draw attention to:\n\n0. If you initdb with --locale-provider=libc, and don't specify ICU at\nany later point, then none of these changes should affect you and\nyou'll remain on libc. If someone notices otherwise, please let me\nknow.\n\n1. If you specify --locale-provider=builtin at initdb time, you *must*\nspecify --locale=C/POSIX, otherwise you get an error.\n\n2. Patch 0004 is possibly out of scope for 16, but it felt consistent\nwith the other UI changes and low risk. Please try with/without before\nobjecting.\n\n3. 
Daniel Verite felt that we should only change the provider from ICU\nto \"builtin\" for the C locale if the provider is defaulting to ICU; not\nif it's specified as ICU. I did not differentiate between specifying\nICU and defaulting to ICU because:\n a. \"libc\" unconditionally uses the built-in memcmp() logic for C, it\nnever actually uses libc\n b. If a user really wants the root locale or the en-US-u-va-posix\nlocale, they can specify those directly\n c. I don't see any plausible case where it helps a user to keep\nprovider=icu when locale=C.\n\n4. Joe Conway and Peter Eisentraut both felt that C.UTF-8 with\nprovider=icu should not be changed to use the builtin provider, and\ninstead passed on to ICU. I implemented a compromise where initdb will\nchange C.UTF-8 to the built-in provider; but CREATE DATABASE/COLLATION\nwill pass it along to ICU (which may support it as en-US-u-va-posix in\nsome versions, or may throw an error in other versions). My reasoning\nis that initdb is pulling from the environment, and we should try\nharder to succeed on any reasonable environmental settings (otherwise\ninitdb with default settings could fail); whereas we can be more strict\nwith CREATE DATABASE/COLLATION.\n\n5. For the built-in provider, initdb defaults to UTF-8 rather than\nSQL_ASCII. 
Otherwise, you would be unable to use ICU at all later,\nbecause ICU doesn't support SQL_ASCII.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Thu, 08 Jun 2023 17:36:50 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\tJeff Davis wrote:\n\n> I implemented a compromise where initdb will \n> change C.UTF-8 to the built-in provider\n\nThis handling of C.UTF-8 would be felt by users as simply broken.\n\nWith the v10 patches:\n\n$ initdb --locale=C.UTF-8\n\ninitdb: using locale provider \"builtin\" for ICU locale \"C.UTF-8\"\nThe database cluster will be initialized with this locale configuration:\n default collation provider: builtin\n default collation locale: C\n LC_COLLATE: C.UTF-8\n LC_CTYPE: C.UTF-8\n\nThis setup is not what the user has asked for and leads to that kind of\nwrong results:\n\n$ psql -c \"select upper('é')\"\n ?column? \n----------\n é\n\nwhereas in v15 we would get the correct result 'É'.\n\n\nThen once inside that cluster, trying to create a database:\n\n postgres=# create database test locale='C.UTF-8';\n ERROR: locale provider \"builtin\" does not support locale \"C.UTF-8\"\n HINT: The built-in locale provider only supports the \"C\" and \"POSIX\"\nlocales.\n\n\nThat hardly makes sense considering that initdb stated the opposite,\nthat the \"built-in provider\" was adequate for C.UTF-8\n\n\nIn general about the evolution of the patchset, your interpretation\nof \"defaulting to ICU\" seems to be \"avoid libc at any cost\", which IMV\nis unreasonably user-hostile.\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 09 Jun 2023 14:12:36 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-06-09 at 14:12 
+0200, Daniel Verite wrote:\n> >  I implemented a compromise where initdb will \n> >  change C.UTF-8 to the built-in provider\n> \n> $ initdb --locale=C.UTF-8\n\n...\n\n> This setup is not what the user has asked for and leads to that kind\n> of\n> wrong results:\n> \n> $ psql -c \"select upper('é')\"\n>  ?column? \n> ----------\n>  é\n> \n> whereas in v15 we would get the correct result 'É'.\n\nI guess where I'm confused is: why would a user actually want their\ndatabase collation to be C.UTF-8? It's slower than C, our\nimplementation doesn't properly version it (as you pointed out), and\nthe semantics don't seem great ('Z' < 'a').\n\nIf the user specifies provider=libc, then of course we should honor\nthat and C.UTF-8 is a valid locale for libc.\n\nBut if they don't specify the provider, isn't it much more likely they\njust don't care much about the locale, and would be happier with C? \n\nPerhaps there's some better compromise here than the one I picked, but\nI see this as a fairly small problem in comparison to the big problems\nthat we're solving.\n\n\n> In general about the evolution of the patchset, your interpretation\n> of \"defaulting to ICU\" seems to be \"avoid libc at any cost\", which\n> IMV\n> is unreasonably user-hostile.\n\nThe user can easily get libc behavior by specifying --locale-\nprovider=libc, so I don't see how you reached this conclusion.\n\n\nLet me try to understand and address the points you raised here[1] in\nmore detail:\n\nIt looks like you are fine with 0003 applying LOCALE to whatever\nprovider is chosen, but you'd like to be smarter about choosing the\nprovider and to choose libc in at least some cases.\n\nThat is actually very much like option #2 in the list I presented\nhere[2], and has the same problems. 
How should the following behave?\n\n initdb --locale=C --lc-collate=fr_FR.utf8\n initdb --locale=en --lc-collate=fr_FR.utf8\n\nIf we switch to libc in the first case, then --locale will be ignored\nand the collation will be fr_FR.utf8. But we will leave the second case\nas ICU and the collation will be \"en\". I'm sure we can come up with\nsomething there, but it feels like there's more room for confusion\nalong this path, and the builtin provider seems cleaner.\n\nYou also suggested that we consider switching the provider to libc any\ntime ICU doesn't support something. I'm not sure whether you meant a\nstatic list (C, C.UTF-8, POSIX, ...?) or some kind of dynamic test. I'm\nskeptical of being too smart here, but I'd like to hear what you mean.\nI'm also not clear whether you think we should abandon the built-in\nprovider, or still select it for C/POSIX.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/7de2dc15-5211-45b3-afcb-71dcaf7a08bb@manitou-mail.org\n\n[2]\nhttps://www.postgresql.org/message-id/daa9f060aa2349ebc84444515efece49e7b32c5d.camel@j-davis.com\n\n\n\n", "msg_date": "Fri, 09 Jun 2023 08:55:40 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "\tJeff Davis wrote:\n\n> I guess where I'm confused is: why would a user actually want their\n> database collation to be C.UTF-8? It's slower than C, our\n> implementation doesn't properly version it (as you pointed out), and\n> the semantics don't seem great ('Z' < 'a').\n\nBecause when LC_CTYPE=C, characters outside of US ASCII are not\ncategorized properly. upper/lower/regexp matching/... produce\nincorrect results.\n\n> But if they don't specify the provider, isn't it much more likely they\n> just don't care much about the locale, and would be happier with C? 
\n\nConsider a pre-existing script doing initdb --locale=C.UTF-8\nSurely it does care about the locale, otherwise it would not specify\nit.\nAssuming that it would be better off with C is assuming that a\nnon-Unicode aware locale is better than the Unicode-aware locale\nthey're asking. I don't think it's reasonable.\n\n> The user can easily get libc behavior by specifying --locale-\n> provider=libc, so I don't see how you reached this conclusion.\n\nWhat would be user hostile is forcing users that don't need an ICU\nlocale to change their invocations of initdb/createdb to avoid\nregressions with v16. Most people would discover this after\nit breaks their apps.\n\n> It looks like you are fine with 0003 applying LOCALE to whatever\n> provider is chosen, but you'd like to be smarter about choosing the\n> provider and to choose libc in at least some cases.\n> \n> That is actually very much like option #2 in the list I presented\n> here[2], and has the same problems. How should the following behave?\n> \n> initdb --locale=C --lc-collate=fr_FR.utf8\n> initdb --locale=en --lc-collate=fr_FR.utf8\n\nThe same as v15.\n\n> If we switch to libc in the first case, then --locale will be ignored\n> and the collation will be fr_FR.utf8.\n\n$ initdb --locale=C --lc-collate=fr_FR.utf8\nv15 does that:\n\n The database cluster will be initialized with this locale configuration:\n provider:\t libc\n LC_COLLATE: fr_FR.utf8\n LC_CTYPE:\t C\n LC_MESSAGES: C\n LC_MONETARY: C\n LC_NUMERIC: C\n LC_TIME:\t C\n The default database encoding has accordingly been set to \"SQL_ASCII\".\n\n--locale is not ignored, it's overriden for LC_COLLATE only.\n\n> But we will leave the second case as ICU and the collation will be\n> \"en\".\n\nYes. 
To me the rule for \"ICU is the default\" in v16 should be: if the\n--locale argument points to a locale that we know ICU does not provide,\nwe fall back to the v15 behavior down to every detail, otherwise we let\nICU be the provider.\n\n> You also suggested that we consider switching the provider to libc any\n> time ICU doesn't support something. I'm not sure whether you meant a\n> static list (C, C.UTF-8, POSIX, ...?) or some kind of dynamic test.\n\nC, C.*, POSIX. I'm not sure if there are other cases.\n\n> I'm also not clear whether you think we should abandon the built-in\n> provider, or still select it for C/POSIX.\n\nI see it as going in v17, because it came after feature freeze and\nis not strictly necessary in v16.\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 12 Jun 2023 11:37:58 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 09.06.23 02:36, Jeff Davis wrote:\n> Patches 0001, 0002:\n> \n> These patches implement the built-in provider and automatically change\n> provider=icu to provider=builtin when the locale is C.\n\nI object to adding a new provider for PG16 (patch 0001). This is \nclearly a new feature, which wasn't even contemplated a few weeks ago.\n\n> * Switch to the libc provider for the C locale: would make the libc\n> provider even more complicated and had some potential for confusion,\n> and also has catalog representation problems when --locale is specified\n> along with --lc-ctype.\n\nI don't follow this concern. This could be done entirely within initdb. \n I mean, just change the default for --locale-provider if --locale=C is \ngiven. 
That's like 10 lines of code in initdb.c.\n\nI don't think I want CREATE DATABASE or CREATE COLLATION to have that \nlogic, nor do they really need it.\n\n> Patch 0003:\n> \n> Makes LOCALE apply to all providers. The overall feel after this patch\n> is that \"locale\" now means the collation locale, and\n> LC_COLLATE/LC_CTYPE are for the server environment. When using libc,\n> LC_COLLATE and LC_CTYPE still work as they did before, but their\n> relationship to database collation feels more like a special case of\n> the libc provider. I believe most people favor this patch and I haven't\n> seen recent objections.\n\nThis seems reasonable.\n\n> 1. If you specify --locale-provider=builtin at initdb time, you *must*\n> specify --locale=C/POSIX, otherwise you get an error.\n\nShouldn't the --locale option be ignored (or rejected) in that case. \nWhy insist on it being specified?\n\n> 2. Patch 0004 is possibly out of scope for 16, but it felt consistent\n> with the other UI changes and low risk. Please try with/without before\n> objecting.\n\nAlso clearly a new feature. Also the implications of various upgrade, \ndump/restore scenarios are not fully explored.\n\nI think it's an interesting idea, to make CREATE DATABASE and CREATE \nCOLLATION also default to icu of the respective higher scope has been \nset to icu. In fact, this makes me wonder now whether changing the \ndefault to icu in *only* initdb is sensible. But again, we'd need to \nsee the full matrix of upgrade scenarios laid out here.\n\n> 3. Daniel Verite felt that we should only change the provider from ICU\n> to \"builtin\" for the C locale if the provider is defaulting to ICU; not\n> if it's specified as ICU.\n\nCorrect, we shouldn't override what was explicitly specified.\n\n> I did not differentiate between specifying\n> ICU and defaulting to ICU because:\n> a. \"libc\" unconditionally uses the built-in memcmp() logic for C, it\n> never actually uses libc\n> b. 
If a user really wants the root locale or the en-US-u-va-posix\n> locale, they can specify those directly\n> c. I don't see any plausible case where it helps a user to keep\n> provider=icu when locale=C.\n\nIf the user specifies that, that's up to them to deal with the outcomes. \n Just changing it to something different seems wrong.\n\n> 4. Joe Conway and Peter Eisentraut both felt that C.UTF-8 with\n> provider=icu should not be changed to use the builtin provider, and\n> instead passed on to ICU. I implemented a compromise where initdb will\n> change C.UTF-8 to the built-in provider; but CREATE DATABASE/COLLATION\n> will pass it along to ICU (which may support it as en-US-u-va-posix in\n> some versions, or may throw an error in other versions). My reasoning\n> is that initdb is pulling from the environment, and we should try\n> harder to succeed on any reasonable environmental settings (otherwise\n> initdb with default settings could fail); whereas we can be more strict\n> with CREATE DATABASE/COLLATION.\n\nI'm not objecting to changing anything about C.UTF-8, but I am objecting \nto changing anything substantial in PG16.\n\n> 5. For the built-in provider, initdb defaults to UTF-8 rather than\n> SQL_ASCII. Otherwise, you would be unable to use ICU at all later,\n> because ICU doesn't support SQL_ASCII.\n\nAlso a behavior change that is not appropriate for PG16 at this stage.\n\n\n", "msg_date": "Mon, 12 Jun 2023 23:04:04 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Mon, 2023-06-12 at 23:04 +0200, Peter Eisentraut wrote:\n> > Patch 0003:\n> > \n> > Makes LOCALE apply to all providers. The overall feel after this\n> > patch\n> > is that \"locale\" now means the collation locale, and\n> > LC_COLLATE/LC_CTYPE are for the server environment. 
When using\n> > libc,\n> > LC_COLLATE and LC_CTYPE still work as they did before, but their\n> > relationship to database collation feels more like a special case\n> > of\n> > the libc provider. I believe most people favor this patch and I\n> > haven't\n> > seen recent objections.\n> \n> This seems reasonable.\n\nAttached a clean patch for this.\n\nIt seems to have widespread agreement so I plan to commit to v16 soon.\n\nTo clarify, this affects both initdb and CREATE DATABASE.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 14 Jun 2023 14:24:40 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Mon, 2023-06-12 at 23:04 +0200, Peter Eisentraut wrote:\n> I object to adding a new provider for PG16 (patch 0001).\n\nAdded to July CF for 17.\n\n> > 2. Patch 0004 is possibly out of scope for 16\n\n> Also clearly a new feature.\n\nAdded to July CF for 17.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 14 Jun 2023 22:07:13 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On 14.06.23 23:24, Jeff Davis wrote:\n> On Mon, 2023-06-12 at 23:04 +0200, Peter Eisentraut wrote:\n>>> Patch 0003:\n>>>\n>>> Makes LOCALE apply to all providers. The overall feel after this\n>>> patch\n>>> is that \"locale\" now means the collation locale, and\n>>> LC_COLLATE/LC_CTYPE are for the server environment. When using\n>>> libc,\n>>> LC_COLLATE and LC_CTYPE still work as they did before, but their\n>>> relationship to database collation feels more like a special case\n>>> of\n>>> the libc provider. 
I believe most people favor this patch and I\n>>> haven't\n>>> seen recent objections.\n>>\n>> This seems reasonable.\n> \n> Attached a clean patch for this.\n> \n> It seems to have widespread agreement so I plan to commit to v16 soon.\n> \n> To clarify, this affects both initdb and CREATE DATABASE.\n\nThis looks good to me.\n\nAttached is small fixup patch with some documentation tweaks and \nsimplifying some test code (also includes pgperltidy).", "msg_date": "Fri, 16 Jun 2023 16:50:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" }, { "msg_contents": "On Fri, 2023-06-16 at 16:50 +0200, Peter Eisentraut wrote:\n> This looks good to me.\n> \n> Attached is small fixup patch with some documentation tweaks and \n> simplifying some test code (also includes pgperltidy).\n\nThank you. Committed with your fixups.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 16 Jun 2023 10:42:10 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Order changes in PG16 since ICU introduction" } ]
[ { "msg_contents": "Hi,\n\nFirst-time potential contributor here. We recently had an incident due\nto a sudden 1000x slowdown of a Postgres query (from ~10ms to ~10s)\ndue to a join with a foreign key that was often null. We found that it\nwas caused by a merge join with an index scan on one join path --\nwhenever the non-null data happened to be such that the merge join\ncouldn't be terminated early, the index would proceed to scan all of\nthe null rows and filter each one out individually. Since this was an\ninner join, this was pointless; the nulls would never have matched the\njoin clause anyway.\n\nTest script to reproduce + example explain output:\nhttps://paste.depesz.com/s/VUj\n\nOnce you're aware of it for a given index, this is a solvable issue.\nWe solved it by adding a partial index. Including an IS NOT NULL\nclause in the query also seems to solve it.\n\nHowever, I've gotten the notion that it should be possible to solve\nthis on the Postgres side, with no user action required. When doing an\ninner-join merge join with a normal equality operator, we should\nalready \"know\" that we don't have to look at rows where the join keys\nare null, since any such comparison involving NULL will not return\ntrue. If we can just push that knowledge down to the index scan node,\nthen we should be able to avoid this entire problem, leaving one less\nperformance trap to trip people up.\n\nProof-of-concept patch (please ignore style):\nhttps://github.com/steinarvk-oda/postgres/pull/1/files\n\nI have a few questions:\n\n(1) Does this approach seem reasonable and worthwhile?\n(2) Can I determine programmatically at query planning time whether\nthe binary operator in an OpExpr has the property that all comparisons\ninvolving nulls will be non-true? Or, failing that, can I at least\nsomehow hardcode and identify the built-in equality operators (which\nhave this property)? 
(Integer equality and UUID equality probably\ncovers a lot when it comes to joins.)\n(3) Is there a systematic way to check whether adding an \"IS NOT NULL\"\ncondition would be redundant? E.g. if there is such a check in the\nquery already, adding another is just messy.\n(4) Should this sort of thing be made conditional on a high null_frac?\nOr is that unnecessary complexity, and the simplicity of just always\ndoing it would be better?\n\nThanks!\nSteinar Kaldager\n\n\n", "msg_date": "Fri, 21 Apr 2023 18:13:33 +0200", "msg_from": "Steinar Kaldager <steinar.kaldager@oda.com>", "msg_from_op": true, "msg_subject": "Improving worst-case merge join performance with often-null foreign\n key" }, { "msg_contents": "Steinar Kaldager <steinar.kaldager@oda.com> writes:\n> First-time potential contributor here. We recently had an incident due\n> to a sudden 1000x slowdown of a Postgres query (from ~10ms to ~10s)\n> due to a join with a foreign key that was often null. We found that it\n> was caused by a merge join with an index scan on one join path --\n> whenever the non-null data happened to be such that the merge join\n> couldn't be terminated early, the index would proceed to scan all of\n> the null rows and filter each one out individually. Since this was an\n> inner join, this was pointless; the nulls would never have matched the\n> join clause anyway.\n\nHmm. I don't entirely understand why the existing stop-at-nulls logic\nin nodeMergejoin.c didn't fix this for you. Maybe somebody has broken\nthat? See the commentary for MJEvalOuterValues/MJEvalInnerValues.\n\nPushing down an IS NOT NULL restriction could possibly be of value\nif the join is being done in the nulls-first direction, but that's\nan extreme minority use-case. I'm dubious that it'd be worth the\noverhead in general. 
It'd probably be more useful to make sure that\nthe planner's cost model is aware of this effect, so that it's prodded\nto use nulls-last not nulls-first sort order when there are enough\nnulls to make a difference.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 22 Apr 2023 11:21:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improving worst-case merge join performance with often-null\n foreign key" }, { "msg_contents": "On Sat, Apr 22, 2023 at 11:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Steinar Kaldager <steinar.kaldager@oda.com> writes:\n> > First-time potential contributor here. We recently had an incident due\n> > to a sudden 1000x slowdown of a Postgres query (from ~10ms to ~10s)\n> > due to a join with a foreign key that was often null. We found that it\n> > was caused by a merge join with an index scan on one join path --\n> > whenever the non-null data happened to be such that the merge join\n> > couldn't be terminated early, the index would proceed to scan all of\n> > the null rows and filter each one out individually. Since this was an\n> > inner join, this was pointless; the nulls would never have matched the\n> > join clause anyway.\n>\n> Hmm. I don't entirely understand why the existing stop-at-nulls logic\n> in nodeMergejoin.c didn't fix this for you. Maybe somebody has broken\n> that? 
See the commentary for MJEvalOuterValues/MJEvalInnerValues.\n\n\nI think it's just because the MergeJoin didn't see a NULL foo_id value\nfrom test_bar tuples because all such tuples are removed by the filter\n'test_bar.active', thus it does not have a chance to stop at nulls.\n\n# select count(*) from test_bar where foo_id is null and active;\n count\n-------\n 0\n(1 row)\n\nInstead, the index scan on test_bar will have to scan all the tuples\nwith NULL foo_id because none of them satisfies the qual clause.\n\nThanks\nRichard", "msg_date": "Sun, 23 Apr 2023 17:29:59 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving worst-case merge join performance with often-null\n foreign key" }, { "msg_contents": "On Sun, Apr 23, 2023 at 5:29 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Sat, Apr 22, 2023 at 11:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Steinar Kaldager <steinar.kaldager@oda.com> writes:\n>> > First-time potential contributor here. We recently had an incident due\n>> > to a sudden 1000x slowdown of a Postgres query (from ~10ms to ~10s)\n>> > due to a join with a foreign key that was often null. 
See the commentary for MJEvalOuterValues/MJEvalInnerValues.\n>\n>\n> I think it's just because the MergeJoin didn't see a NULL foo_id value\n> from test_bar tuples because all such tuples are removed by the filter\n> 'test_bar.active', thus it does not have a chance to stop at nulls.\n>\n> # select count(*) from test_bar where foo_id is null and active;\n> count\n> -------\n> 0\n> (1 row)\n>\n> Instead, the index scan on test_bar will have to scan all the tuples\n> with NULL foo_id because none of them satisfies the qual clause.\n>\n\nBTW, in Steinar's case the query runs much faster with nestloop with a\nparameterized inner path, since test_foo is small and test_bar is very\nlarge, and there is an index on test_bar.foo_id. You can see that by\n\"set enable_mergejoin to off\":\n\n# EXPLAIN (costs off)\nSELECT SUM(test_foo.id) FROM test_bar, test_foo WHERE test_bar.foo_id =\ntest_foo.id AND test_foo.active AND test_bar.active;\n QUERY PLAN\n--------------------------------------------------------------\n Aggregate\n -> Nested Loop\n -> Seq Scan on test_foo\n Filter: active\n -> Index Scan using test_bar_foo_id_idx on test_bar\n Index Cond: (foo_id = test_foo.id)\n Filter: active\n(7 rows)\n\nIn my box the total cost and execution time of mergejoin vs nestloop\nare:\n\n mergejoin nestloop\n\nCost estimate 1458.40 12355.15\nActual (best of 3) 3644.685 ms 13.114 ms\n\nSo it seems we have holes in cost estimate for mergejoin or nestloop\nwith parameterized inner path, or both.\n\nThanks\nRichard", "msg_date": "Mon, 24 Apr 2023 15:24:14 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving worst-case merge join performance with often-null\n foreign key" }, { "msg_contents": "On Sun, Apr 23, 2023 at 11:30 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Sat, Apr 22, 2023 at 11:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm. I don't entirely understand why the existing stop-at-nulls logic\n>> in nodeMergejoin.c didn't fix this for you. Maybe somebody has broken\n>> that? See the commentary for MJEvalOuterValues/MJEvalInnerValues.\n>\n>\n> I think it's just because the MergeJoin didn't see a NULL foo_id value\n> from test_bar tuples because all such tuples are removed by the filter\n> 'test_bar.active', thus it does not have a chance to stop at nulls.\n\nAgreed, this is also my understanding.\n\nNote that this isn't just a contrived test case, it's also the\nsituation we ran into in prod. (We had a table with a lot of old\ninactive rows with null values for Historical Reasons, essentially\nkept for accounting/archival purposes. 
Newer, active, rows all had the\nforeign key column set to non-null.)\n\nI had initially considered whether this could be fixed in the\nmerge-join execution code instead of by altering the plan, but at\nfirst glance that feels like it might be a more awkward fit. It's easy\nenough to stop the merge join if a null actually appears, but because\nof the filter, no null will ever appear. You'd have to somehow break\nthe \"stream of values\" abstraction and look at where the values are\nactually coming from and/or which values would have appeared if they\nweren't filtered out. I don't know the codebase well, but to me that\nfeels fairly hacky compared to altering the plan for the index scan.\n\nSteinar\n\n\n", "msg_date": "Tue, 25 Apr 2023 14:51:00 +0200", "msg_from": "Steinar Kaldager <steinar.kaldager@oda.com>", "msg_from_op": true, "msg_subject": "Re: Improving worst-case merge join performance with often-null\n foreign key" } ]
[ { "msg_contents": "Hi,\n\nI find the inclusion of the un-parenthesized syntax for VACUUM, ANALYZE,\nand EXPLAIN in the docs makes it harder to understand the preferred,\nparenthesized syntax (see [1] as an example).\n\nOver in [2], it was suggested that moving the un-parenthesized syntax to\nthe \"Compatibility\" section of their respective docs pages would be\nokay. Patch attached.\n\nI left out CLUSTER since that syntax change is so new.\n\n- Melanie\n\n[1] https://www.postgresql.org/docs/devel/sql-analyze.html\n[2] https://www.postgresql.org/message-id/3024596.1681940741%40sss.pgh.pa.us", "msg_date": "Fri, 21 Apr 2023 18:29:16 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" }, { "msg_contents": "On Fri, Apr 21, 2023 at 06:29:16PM -0400, Melanie Plageman wrote:\n> Over in [2], it was suggested that moving the un-parenthesized syntax to\n> the \"Compatibility\" section of their respective docs pages would be\n> okay. Patch attached.\n\nI think that's reasonable.\n\n> I left out CLUSTER since that syntax change is so new.\n\nI think it'd be okay to move this one, too. The parenthesized syntax has\nbeen available since v14, and I suspect CLUSTER isn't terribly widely used,\nanyway. 
(Side note: it looks like \"CLUSTER (VERBOSE)\" doesn't work, which\nseems weird to me.)\n\nMost of these commands have an existing note about the deprecated syntax.\nCould those be removed with this change?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 21 Apr 2023 15:55:22 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" }, { "msg_contents": "On Fri, Apr 21, 2023 at 6:55 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Fri, Apr 21, 2023 at 06:29:16PM -0400, Melanie Plageman wrote:\n> > Over in [2], it was suggested that moving the un-parenthesized syntax to\n> > the \"Compatibility\" section of their respective docs pages would be\n> > okay. Patch attached.\n>\n> I think that's reasonable.\n>\n> > I left out CLUSTER since that syntax change is so new.\n>\n> I think it'd be okay to move this one, too. The parenthesized syntax has\n> been available since v14, and I suspect CLUSTER isn't terribly widely used,\n> anyway. (Side note: it looks like \"CLUSTER (VERBOSE)\" doesn't work, which\n> seems weird to me.)\n\nAttached v2 includes changes for CLUSTER syntax docs. I wasn't quite\nsure that we can move down CLUSTER [VERBOSE] syntax to the compatibility\nsection since CLUSTER (VERBOSE) doesn't work. This seems like it should\nwork (VACUUM and ANALYZE do). I put extra \"[ ]\" around the main CLUSTER\nsyntax but I don't know that this is actually the same as documenting\nthat CLUSTER VERBOSE does work but CLUSTER (VERBOSE) doesn't.\n\n> Most of these commands have an existing note about the deprecated syntax.\n> Could those be removed with this change?\n\nAh, yes. Good catch! 
I have removed these.\n\n- Melanie", "msg_date": "Fri, 21 Apr 2023 19:29:59 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" }, { "msg_contents": "On Fri, Apr 21, 2023 at 07:29:59PM -0400, Melanie Plageman wrote:\n> Attached v2 includes changes for CLUSTER syntax docs. I wasn't quite\n> sure that we can move down CLUSTER [VERBOSE] syntax to the compatibility\n> section since CLUSTER (VERBOSE) doesn't work. This seems like it should\n> work (VACUUM and ANALYZE do). I put extra \"[ ]\" around the main CLUSTER\n> syntax but I don't know that this is actually the same as documenting\n> that CLUSTER VERBOSE does work but CLUSTER (VERBOSE) doesn't.\n\nAFAICT there isn't a strong reason to disallow CLUSTER (VERBOSE). The\nfollowing short patch seems to do the trick:\n\ndiff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\nindex acf6cf4866..215c4ba39c 100644\n--- a/src/backend/parser/gram.y\n+++ b/src/backend/parser/gram.y\n@@ -11603,6 +11603,15 @@ ClusterStmt:\n n->params = $3;\n $$ = (Node *) n;\n }\n+ | CLUSTER '(' utility_option_list ')'\n+ {\n+ ClusterStmt *n = makeNode(ClusterStmt);\n+\n+ n->relation = NULL;\n+ n->indexname = NULL;\n+ n->params = $3;\n+ $$ = (Node *) n;\n+ }\n | CLUSTER opt_verbose\n {\n ClusterStmt *n = makeNode(ClusterStmt);\n\n>> Most of these commands have an existing note about the deprecated syntax.\n>> Could those be removed with this change?\n> \n> Ah, yes. Good catch! I have removed these.\n\nThanks. I'll take a closer look at the patch soon.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 21 Apr 2023 16:45:15 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" }, { "msg_contents": "I've attached two patches. 
0001 adds a parenthesized CLUSTER syntax that\ndoesn't require a table name. 0002 is your patch with a couple of small\nadjustments.\n\nOn Fri, Apr 21, 2023 at 07:29:59PM -0400, Melanie Plageman wrote:\n> Attached v2 includes changes for CLUSTER syntax docs. I wasn't quite\n> sure that we can move down CLUSTER [VERBOSE] syntax to the compatibility\n> section since CLUSTER (VERBOSE) doesn't work. This seems like it should\n> work (VACUUM and ANALYZE do). I put extra \"[ ]\" around the main CLUSTER\n> syntax but I don't know that this is actually the same as documenting\n> that CLUSTER VERBOSE does work but CLUSTER (VERBOSE) doesn't.\n\nI'm not sure about moving CLUSTER [VERBOSE] to the compatibility section,\neither. At the very least, I don't think what I have in 0002 is accurate.\nIt claims that syntax was used before v14, but it's still the only way to\ndo a \"verbose\" CLUSTER without a table name in v15 and v16, too. Perhaps\nwe should break it apart, or maybe we can just say it was used before v17.\nWDYT?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 21 Apr 2023 21:44:51 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" }, { "msg_contents": "On Fri, Apr 21, 2023 at 09:44:51PM -0700, Nathan Bossart wrote:\n> I've attached two patches. 0001 adds a parenthesized CLUSTER syntax that\n> doesn't require a table name. 0002 is your patch with a couple of small\n> adjustments.\n> \n> On Fri, Apr 21, 2023 at 07:29:59PM -0400, Melanie Plageman wrote:\n> > Attached v2 includes changes for CLUSTER syntax docs. I wasn't quite\n> > sure that we can move down CLUSTER [VERBOSE] syntax to the compatibility\n> > section since CLUSTER (VERBOSE) doesn't work. This seems like it should\n> > work (VACUUM and ANALYZE do). 
I put extra \"[ ]\" around the main CLUSTER\n> > syntax but I don't know that this is actually the same as documenting\n> > that CLUSTER VERBOSE does work but CLUSTER (VERBOSE) doesn't.\n> \n> I'm not sure about moving CLUSTER [VERBOSE] to the compatibility section,\n> either. At the very least, I don't think what I have in 0002 is accurate.\n> It claims that syntax was used before v14, but it's still the only way to\n> do a \"verbose\" CLUSTER without a table name in v15 and v16, too. Perhaps\n> we should break it apart, or maybe we can just say it was used before v17.\n> WDYT?\n\nIf you are planning to wait and commit the change to support CLUSTER\n(VERBOSE) until July, then you can consolidate the two and say before\nv17. If you plan to commit it before then (making it available in v16),\nI would move CLUSTER [VERBOSE] down to Compatibility and say that both\nof the un-parenthesized syntaxes were used before v16. Since all of the\nsyntaxes still work, I think it is probably more confusing to split them\napart so granularly. The parenthesized syntax was effectively not\n\"feature complete\" without your patch to support CLUSTER (VERBOSE).\n\nAlso, isn't this:\n CLUSTER [VERBOSE] [ <qualified_name> [ USING <index_name> ] ]\ninclusive of this:\n CLUSTER [VERBOSE]\nSo, it would have been correct for them to be consolidated in the\nexisting documentation?\n\n> From 48fff177a8f0096c99c77b4e1368cc73f7e86585 Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <nbossart@postgresql.org>\n> Date: Fri, 21 Apr 2023 20:16:25 -0700\n> Subject: [PATCH v3 1/2] Support parenthesized syntax for CLUSTER without a\n> table name.\n> \n> b5913f6 added a parenthesized syntax for CLUSTER, but it requires\n> specifying a table name. This is unlike commands such as VACUUM\n> and ANALYZE, which do not require specifying a table in the\n> parenthesized syntax. 
This change resolves this inconsistency.\n> \n> The documentation for the CLUSTER syntax has also been consolidated\n> in anticipation of a follow-up change that will move the\n> unparenthesized syntax to the \"Compatibility\" section.\n\nI suppose we should decide if unparenthesized is a word or if we are\nsticking with the hyphen.\n\n> doc/src/sgml/ref/cluster.sgml | 5 ++---\n> src/backend/parser/gram.y | 21 ++++++++++++++-------\n> 2 files changed, 16 insertions(+), 10 deletions(-)\n> \n> --- a/src/backend/parser/gram.y\n> +++ b/src/backend/parser/gram.y\n...\n> @@ -11593,7 +11592,17 @@ ClusterStmt:\n> \t\t\t\t\t\tn->params = lappend(n->params, makeDefElem(\"verbose\", NULL, @2));\n> \t\t\t\t\t$$ = (Node *) n;\n> \t\t\t\t}\n> +\t\t\t| CLUSTER opt_verbose\n> +\t\t\t\t{\n> +\t\t\t\t\tClusterStmt *n = makeNode(ClusterStmt);\n> \n> +\t\t\t\t\tn->relation = NULL;\n> +\t\t\t\t\tn->indexname = NULL;\n> +\t\t\t\t\tn->params = NIL;\n> +\t\t\t\t\tif ($2)\n> +\t\t\t\t\t\tn->params = lappend(n->params, makeDefElem(\"verbose\", NULL, @2));\n> +\t\t\t\t\t$$ = (Node *) n;\n> +\t\t\t\t}\n\nMaybe it is worth moving the un-parenthesized options all to the end and\nspecifying what version they were needed for.\n\n\n> \t\t\t| CLUSTER '(' utility_option_list ')' qualified_name cluster_index_specification\n> \t\t\t\t{\n> \t\t\t\t\tClusterStmt *n = makeNode(ClusterStmt);\n> @@ -11603,15 +11612,13 @@ ClusterStmt:\n> \t\t\t\t\tn->params = $3;\n> \t\t\t\t\t$$ = (Node *) n;\n> \t\t\t\t}\n> -\t\t\t| CLUSTER opt_verbose\n> +\t\t\t| CLUSTER '(' utility_option_list ')'\n\nIt is too bad we can't do this the way VACUUM and ANALYZE do -- but\nsince qualified_name is required if USING is included, I suppose we\ncan't.\n\n> diff --git a/doc/src/sgml/ref/analyze.sgml b/doc/src/sgml/ref/analyze.sgml\n> index 20c6f9939f..ea42ec30bd 100644\n> --- a/doc/src/sgml/ref/analyze.sgml\n> +++ b/doc/src/sgml/ref/analyze.sgml\n> @@ -22,7 +22,6 @@ PostgreSQL documentation\n> <refsynopsisdiv>\n> <synopsis>\n> 
ANALYZE [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]\n> -ANALYZE [ VERBOSE ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]\n...\n> @@ -346,6 +338,14 @@ ANALYZE [ VERBOSE ] [ <replaceable class=\"parameter\">table_and_columns</replacea\n> <para>\n> There is no <command>ANALYZE</command> statement in the SQL standard.\n> </para>\n> +\n> + <para>\n> + The following syntax was used before <productname>PostgreSQL</productname>\n> + version 11 and is still supported:\n\nGood call on specifying that the order matters.\n\n> +<synopsis>\n> +ANALYZE [ VERBOSE ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]\n> +</synopsis>\n> + </para>\n\n- Melanie\n\n\n", "msg_date": "Sat, 22 Apr 2023 10:38:58 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" }, { "msg_contents": "On Sat, Apr 22, 2023 at 10:38:58AM -0400, Melanie Plageman wrote:\n> If you are planning to wait and commit the change to support CLUSTER\n> (VERBOSE) until July, then you can consolidate the two and say before\n> v17. If you plan to commit it before then (making it available in v16),\n> I would move CLUSTER [VERBOSE] down to Compatibility and say that both\n> of the un-parenthesized syntaxes were used before v16. Since all of the\n> syntaxes still work, I think it is probably more confusing to split them\n> apart so granularly. 
The parenthesized syntax was effectively not\n> \"feature complete\" without your patch to support CLUSTER (VERBOSE).\n\nI think this can wait for v17, but if there's a strong argument for doing\nsome of this sooner, we can reevaluate.\n\n> Also, isn't this:\n> CLUSTER [VERBOSE] [ <qualified_name> [ USING <index_name> ] ]\n> inclusive of this:\n> CLUSTER [VERBOSE]\n> So, it would have been correct for them to be consolidated in the\n> existing documentation?\n\nYes. This appears to go pretty far back. I traced it to 8bc717c (2002).\nIt looks like the VACUUM syntaxes have been consolidated since 37d2f76\n(1998). So AFAICT this small inconsistency has been around for a while.\n\n>> The documentation for the CLUSTER syntax has also been consolidated\n>> in anticipation of a follow-up change that will move the\n>> unparenthesized syntax to the \"Compatibility\" section.\n> \n> I suppose we should decide if unparenthesized is a word or if we are\n> sticking with the hyphen.\n\nThe existing uses in the docs all omit the hyphen, but I see it both ways in\nsome web searches. 
Other than keeping the Postgres docs consistent, I\ndon't have a terribly strong opinion here.\n\n>> +\t\t\t| CLUSTER opt_verbose\n>> +\t\t\t\t{\n>> +\t\t\t\t\tClusterStmt *n = makeNode(ClusterStmt);\n>> \n>> +\t\t\t\t\tn->relation = NULL;\n>> +\t\t\t\t\tn->indexname = NULL;\n>> +\t\t\t\t\tn->params = NIL;\n>> +\t\t\t\t\tif ($2)\n>> +\t\t\t\t\t\tn->params = lappend(n->params, makeDefElem(\"verbose\", NULL, @2));\n>> +\t\t\t\t\t$$ = (Node *) n;\n>> +\t\t\t\t}\n> \n> Maybe it is worth moving the un-parenthesized options all to the end and\n> specifying what version they were needed for.\n\nGood idea.\n\n>> \t\t\t| CLUSTER '(' utility_option_list ')' qualified_name cluster_index_specification\n>> \t\t\t\t{\n>> \t\t\t\t\tClusterStmt *n = makeNode(ClusterStmt);\n>> @@ -11603,15 +11612,13 @@ ClusterStmt:\n>> \t\t\t\t\tn->params = $3;\n>> \t\t\t\t\t$$ = (Node *) n;\n>> \t\t\t\t}\n>> -\t\t\t| CLUSTER opt_verbose\n>> +\t\t\t| CLUSTER '(' utility_option_list ')'\n> \n> It is too bad we can't do this the way VACUUM and ANALYZE do -- but\n> since qualified_name is required if USING is included, I suppose we\n> can't.\n\nIt might be possible to extract the name and index part to a separate\noptional rule.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 24 Apr 2023 09:52:21 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" }, { "msg_contents": "On Mon, Apr 24, 2023 at 09:52:21AM -0700, Nathan Bossart wrote:\n> I think this can wait for v17, but if there's a strong argument for doing\n> some of this sooner, we can reevaluate.\n\nFWIW, I agree to hold on this stuff for v17~ once it opens for\nbusiness. 
There is no urgency here.\n--\nMichael", "msg_date": "Tue, 25 Apr 2023 15:18:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" }, { "msg_contents": "On Tue, Apr 25, 2023 at 03:18:44PM +0900, Michael Paquier wrote:\n> On Mon, Apr 24, 2023 at 09:52:21AM -0700, Nathan Bossart wrote:\n>> I think this can wait for v17, but if there's a strong argument for doing\n>> some of this sooner, we can reevaluate.\n> \n> FWIW, I agree to hold on this stuff for v17~ once it opens for\n> business. There is no urgency here.\n\nThere's still some time before we'll be able to commit any of these, but\nhere is an attempt at addressing all the feedback thus far.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 15 May 2023 12:36:51 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" }, { "msg_contents": "On Mon, May 15, 2023 at 12:36:51PM -0700, Nathan Bossart wrote:\n> There's still some time before we'll be able to commit any of these, but\n> here is an attempt at addressing all the feedback thus far.\n\n- The parenthesized syntax was added in\n- <productname>PostgreSQL</productname> 9.0; the unparenthesized\n- syntax is deprecated.\n[...]\n+ | CLUSTER '(' utility_option_list ')'\n+ {\n+ ClusterStmt *n = makeNode(ClusterStmt);\n+\n+ n->relation = NULL;\n+ n->indexname = NULL;\n+ n->params = $3;\n+ $$ = (Node *) n;\n+ }\n\nHmm. This is older than the oldest version we have to support for\npg_upgrade and co. 
Would it be worth switching clusterdb to use the\nparenthesized grammar, adding on the way a test for this new grammar\nflavor without a table in the TAP tests (too costly for the main\nregression test suite)?\n--\nMichael", "msg_date": "Tue, 16 May 2023 15:28:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" }, { "msg_contents": "On Tue, May 16, 2023 at 03:28:17PM +0900, Michael Paquier wrote:\n> Hmm. This is older than the oldest version we have to support for\n> pg_upgrade and co. Would it be worth switching clusterdb to use the\n> parenthesized grammar, adding on the way a test for this new grammar\n> flavor without a table in the TAP tests (too costly for the main\n> regression test suite)?\n\nSince all currently-supported versions require a table name for the\nparenthesized syntax, this would cause v17's clusterdb to fail on older\nservers when no table name is specified. That seems like a deal-breaker to\nme.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 May 2023 08:40:12 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" }, { "msg_contents": "On Mon, May 15, 2023 at 12:36:51PM -0700, Nathan Bossart wrote:\n> On Tue, Apr 25, 2023 at 03:18:44PM +0900, Michael Paquier wrote:\n>> FWIW, I agree to hold on this stuff for v17~ once it opens for\n>> business. 
There is no urgency here.\n> \n> There's still some time before we'll be able to commit any of these, but\n> here is an attempt at addressing all the feedback thus far.\n\nI think these patches are in decent shape, so I'd like to commit them soon,\nbut I will wait at least a couple more weeks in case anyone has additional\nfeedback.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 30 Jun 2023 11:25:57 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" }, { "msg_contents": "On Fri, Jun 30, 2023 at 11:25:57AM -0700, Nathan Bossart wrote:\n> I think these patches are in decent shape, so I'd like to commit them soon,\n> but I will wait at least a couple more weeks in case anyone has additional\n> feedback.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 19 Jul 2023 15:41:32 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move un-parenthesized syntax docs to \"compatibility\" for few SQL\n commands" } ]
[ { "msg_contents": "Attached is a proof-of-concept/work-in-progress patch set that adds\nfunctions for \"vectors\" represented with one-dimensional float8 arrays.\nThese functions may be used in a variety of applications, but I am\nproposing them with the AI/ML use-cases in mind. I am posting this early\nin the v17 cycle in hopes of gathering feedback prior to PGCon.\n\nWith the accessibility of AI/ML tools such as large language models (LLMs),\nthere has been a demand for storing and manipulating high-dimensional\nvectors in PostgreSQL, particularly around nearest-neighbor queries. Many\nof these vectors have more than 1500 dimensions. The cube extension [0]\nprovides some of the distance functionality (e.g., taxicab, Euclidean, and\nChebyshev), but it is missing some popular functions (e.g., cosine\nsimilarity, dot product), and it is limited to 100 dimensions. We could\nextend cube to support more dimensions, but this would require reworking\nits indexing code and filling in gaps between the cube data type and the\narray types. For some previous discussion about using the cube extension\nfor this kind of data, see [1].\n\nfloat8[] is well-supported and allows for effectively unlimited dimensions\nof data. float8 matches the common output format of many AI embeddings,\nand it allows us or extensions to implement indexing methods around these\nfunctions. This patch set does not yet contain indexing support, but we\nare exploring using GiST or GIN for the use-cases in question. It might\nalso be desirable to add support for other linear algebra operations (e.g.,\noperations on matrices). 
The attached patches likely only scratch the\nsurface of the \"vector search\" use-case.\n\nThe patch set is broken up as follows:\n\n * 0001 does some minor refactoring of dsqrt() in preparation for 0002.\n * 0002 adds several vector-related functions, including distance functions\n and a kmeans++ implementation.\n * 0003 adds support for optionally using the OpenBLAS library, which is an\n implementation of the Basic Linear Algebra Subprograms [2]\n specification. Basic testing with this library showed a small\n performance boost, although perhaps not enough to justify giving this\n patch serious consideration.\n\nOf course, there are many open questions. For example, should PostgreSQL\nsupport this stuff out-of-the-box in the first place? And should we\nintroduce a vector data type or SQL domains for treating float8[] as\nvectors? IMHO these vector search use-cases are an exciting opportunity\nfor the PostgreSQL project, so I am eager to hear what folks think.\n\n[0] https://www.postgresql.org/docs/current/cube.html\n[1] https://postgr.es/m/2271927.1593097400%40sss.pgh.pa.us\n[2] https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 21 Apr 2023 17:07:23 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "vector search support" }, { "msg_contents": "Hi Nathan,\n\nI find the patches really interesting. 
Personally, as Data/MLOps Engineer,\nI'm involved in a project where we use embedding techniques to generate\nvectors from documents, and use clustering and kNN searches to find similar\ndocuments based on spatial neighbourhood of generated vectors.\n\nWe finally opted for ElasticSearch as search engine, considering that it\nwas providing what we needed:\n\n* support to store dense vectors\n* support for kNN searches (last version of ElasticSearch allows this)\n\nAn internal benchmark showed us that we were able to achieve the expected\nperformance, although we are still lacking some points:\n\n* clustering of vectors (this has to be done outside the search engine,\nusing DBScan for our use case)\n* concurrency in updating the ElasticSearch indexes storing the dense\nvectors\n\nI found these patches really interesting, considering that they would solve\nsome of the open issues when storing dense vectors. Index support would help a\nlot with searches though.\n\nNot sure if it's the best to include in PostgreSQL core, but would be\nfantastic to have it as an extension.\n\nAll the best,\nGiuseppe.\n\nOn Sat, 22 Apr 2023, 01:07 Nathan Bossart, <nathandbossart@gmail.com> wrote:\n\n> Attached is a proof-of-concept/work-in-progress patch set that adds\n> functions for \"vectors\" represented with one-dimensional float8 arrays.\n> These functions may be used in a variety of applications, but I am\n> proposing them with the AI/ML use-cases in mind. I am posting this early\n> in the v17 cycle in hopes of gathering feedback prior to PGCon.\n>\n> With the accessibility of AI/ML tools such as large language models (LLMs),\n> there has been a demand for storing and manipulating high-dimensional\n> vectors in PostgreSQL, particularly around nearest-neighbor queries. Many\n> of these vectors have more than 1500 dimensions. 
The cube extension [0]\n> provides some of the distance functionality (e.g., taxicab, Euclidean, and\n> Chebyshev), but it is missing some popular functions (e.g., cosine\n> similarity, dot product), and it is limited to 100 dimensions. We could\n> extend cube to support more dimensions, but this would require reworking\n> its indexing code and filling in gaps between the cube data type and the\n> array types. For some previous discussion about using the cube extension\n> for this kind of data, see [1].\n>\n> float8[] is well-supported and allows for effectively unlimited dimensions\n> of data. float8 matches the common output format of many AI embeddings,\n> and it allows us or extensions to implement indexing methods around these\n> functions. This patch set does not yet contain indexing support, but we\n> are exploring using GiST or GIN for the use-cases in question. It might\n> also be desirable to add support for other linear algebra operations (e.g.,\n> operations on matrices). The attached patches likely only scratch the\n> surface of the \"vector search\" use-case.\n>\n> The patch set is broken up as follows:\n>\n> * 0001 does some minor refactoring of dsqrt() in preparation for 0002.\n> * 0002 adds several vector-related functions, including distance functions\n> and a kmeans++ implementation.\n> * 0003 adds support for optionally using the OpenBLAS library, which is an\n> implementation of the Basic Linear Algebra Subprograms [2]\n> specification. Basic testing with this library showed a small\n> performance boost, although perhaps not enough to justify giving this\n> patch serious consideration.\n>\n> Of course, there are many open questions. For example, should PostgreSQL\n> support this stuff out-of-the-box in the first place? And should we\n> introduce a vector data type or SQL domains for treating float8[] as\n> vectors? 
IMHO these vector search use-cases are an exciting opportunity\n> for the PostgreSQL project, so I am eager to hear what folks think.\n>\n> [0] https://www.postgresql.org/docs/current/cube.html\n> [1] https://postgr.es/m/2271927.1593097400%40sss.pgh.pa.us\n> [2] https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>", "msg_date": "Wed, 26 Apr 2023 14:31:37 +0100", "msg_from": "Giuseppe Broccolo <g.broccolo.7@gmail.com>", "msg_from_op": false, "msg_subject": "Re: vector search support" }, { "msg_contents": "Hi Nathan,\r\n\r\nA nice side effect of using the float8[] to represent vectors is that it allows for vectors of different sizes to coexist in the same column.\r\n\r\nWe most frequently see (pgvector) vector columns being used for storing ML embeddings. 
Given that different models produce embeddings with a different number of dimensions, the need to specify a vector’s size in DDL tightly couples the schema to a single model. Support for variable length vectors would be a great way to decouple those concepts. It would also be a differentiating feature from existing vector stores.\r\n\r\nOne drawback is that variable length vectors complicates indexing for similarity search because similarity measures require vectors of consistent length. 
Partial indexes are a possible solution to that challenge\r\n\r\n-------\r\nOliver Rice\r\nSupabase: https://supabase.com/\r\n\r\n\r\n> On Apr 21, 2023, at 7:07 PM, Nathan Bossart <nathandbossart@gmail.com> wrote:\r\n> \r\n> Attached is a proof-of-concept/work-in-progress patch set that adds\r\n> functions for \"vectors\" represented with one-dimensional float8 arrays.\r\n> These functions may be used in a variety of applications, but I am\r\n> proposing them with the AI/ML use-cases in mind. I am posting this early\r\n> in the v17 cycle in hopes of gathering feedback prior to PGCon.\r\n> \r\n> With the accessibility of AI/ML tools such as large language models (LLMs),\r\n> there has been a demand for storing and manipulating high-dimensional\r\n> vectors in PostgreSQL, particularly around nearest-neighbor queries. Many\r\n> of these vectors have more than 1500 dimensions. The cube extension [0]\r\n> provides some of the distance functionality (e.g., taxicab, Euclidean, and\r\n> Chebyshev), but it is missing some popular functions (e.g., cosine\r\n> similarity, dot product), and it is limited to 100 dimensions. We could\r\n> extend cube to support more dimensions, but this would require reworking\r\n> its indexing code and filling in gaps between the cube data type and the\r\n> array types. For some previous discussion about using the cube extension\r\n> for this kind of data, see [1].\r\n> \r\n> float8[] is well-supported and allows for effectively unlimited dimensions\r\n> of data. float8 matches the common output format of many AI embeddings,\r\n> and it allows us or extensions to implement indexing methods around these\r\n> functions. This patch set does not yet contain indexing support, but we\r\n> are exploring using GiST or GIN for the use-cases in question. It might\r\n> also be desirable to add support for other linear algebra operations (e.g.,\r\n> operations on matrices). 
The attached patches likely only scratch the\n> surface of the \"vector search\" use-case.\n> \n> The patch set is broken up as follows:\n> \n> * 0001 does some minor refactoring of dsqrt() in preparation for 0002.\n> * 0002 adds several vector-related functions, including distance functions\n> and a kmeans++ implementation.\n> * 0003 adds support for optionally using the OpenBLAS library, which is an\n> implementation of the Basic Linear Algebra Subprograms [2]\n> specification. Basic testing with this library showed a small\n> performance boost, although perhaps not enough to justify giving this\n> patch serious consideration.\n> \n> Of course, there are many open questions. For example, should PostgreSQL\n> support this stuff out-of-the-box in the first place? And should we\n> introduce a vector data type or SQL domains for treating float8[] as\n> vectors? IMHO these vector search use-cases are an exciting opportunity\n> for the PostgreSQL project, so I am eager to hear what folks think.\n> \n> [0] https://www.postgresql.org/docs/current/cube.html\n> [1] https://postgr.es/m/2271927.1593097400%40sss.pgh.pa.us\n> [2] https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms\n> \n> -- \n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n> <v1-0001-Refactor-dsqrt.patch>", "msg_date": "Thu, 25 May 2023 12:48:02 -0500", "msg_from": "Oliver Rice <oliver@oliverrice.com>", "msg_from_op": false, "msg_subject": "Re: vector search support" }, { "msg_contents": "Hi,\r\n\r\nOn 4/21/23 8:07 PM, Nathan Bossart wrote:\r\n> Attached is a proof-of-concept/work-in-progress patch set that adds\r\n> functions for \"vectors\" represented with one-dimensional float8 arrays.\r\n> These functions may be used in a variety of applications, but I am\r\n> proposing them with the AI/ML use-cases in mind. I am posting this early\r\n> in the v17 cycle in hopes of gathering feedback prior to PGCon.\r\n\r\nThanks for proposing this. Looking forward to discussing more in person. 
\r\nThere's definitely demand to use PostgreSQL to store / search over \r\nvector data, and I do think we need to improve upon this in core.\r\n\r\n> With the accessibility of AI/ML tools such as large language models (LLMs),\r\n> there has been a demand for storing and manipulating high-dimensional\r\n> vectors in PostgreSQL, particularly around nearest-neighbor queries. Many\r\n> of these vectors have more than 1500 dimensions. \r\n\r\n1536 seems to be a popular one from LLMs, but I've been seeing much \r\nhigher dimensionality (8K, 16K etc). My hunch is that at a practical \r\nlevel, apps are going to favor data sets / sources that use a reduced \r\ndimensionality, but I wouldn't be shocked if we see vectors of all sizes.\r\n\r\n> The cube extension [0]\r\n> provides some of the distance functionality (e.g., taxicab, Euclidean, and\r\n> Chebyshev), but it is missing some popular functions (e.g., cosine\r\n> similarity, dot product), and it is limited to 100 dimensions. We could\r\n> extend cube to support more dimensions, but this would require reworking\r\n> its indexing code and filling in gaps between the cube data type and the\r\n> array types. For some previous discussion about using the cube extension\r\n> for this kind of data, see [1].\r\n\r\nI've stared at the cube code quite a bit over the past few months. There \r\nare definitely some clever methods in it for handling searches over \r\n(now) lower dimensionality data, but I generally agree we should add \r\nfunctionality that's specific to ARRAY types.\r\n\r\nI'll start making specific comments on the patches below.\r\n\r\n\r\n> float8[] is well-supported and allows for effectively unlimited dimensions\r\n> of data. float8 matches the common output format of many AI embeddings,\r\n> and it allows us or extensions to implement indexing methods around these\r\n> functions. This patch set does not yet contain indexing support, but we\r\n> are exploring using GiST or GIN for the use-cases in question. 
It might\r\n> also be desirable to add support for other linear algebra operations (e.g.,\r\n> operations on matrices). The attached patches likely only scratch the\r\n> surface of the \"vector search\" use-case.\r\n> \r\n> The patch set is broken up as follows:\r\n> \r\n> * 0001 does some minor refactoring of dsqrt() in preparation for 0002.\r\n\r\nThis seems pretty benign and may as well do anyway, though we may need \r\nto expand on it based on comments on next patch. Question on:\r\n\r\n+static inline float8\r\n+float8_sqrt(const float8 val)\r\n+{\r\n+\tfloat8\t\tresult;\r\n+\r\n+\tif (unlikely(val < 0))\r\n\r\nShould this be:\r\n\r\n if (unlikely(float8_lt(val, 0))\r\n\r\nSimilarly:\r\n\r\n+\tif (unlikely(result == 0.0) && val != 0.0)\r\n\r\n if (unlikely(float8_eq(result,0.0)) && float8_ne(val, 0.0))\r\n\r\n\r\n> * 0002 adds several vector-related functions, including distance functions\r\n> and a kmeans++ implementation.\r\n\r\nNice. Generally I like this patch. The functions seems to match the most \r\ncommonly used vector distance functions I'm seeing, and it includes a \r\nfunction that can let a user specify a constraint on an ARRAY column so \r\nthey can ensure it contains valid vectors.\r\n\r\nWhile I think supporting float8 is useful, I've been seeing a mix of \r\ndata types in the different AI/ML vector embeddings, i.e. float4 / \r\nfloat8. Additionally, it could be helpful to support integers as well, \r\nparticularly based on some of the dimensionality reduction techniques \r\nI've seen. 
I think this holds double true for kmeans, which is often \r\nused in those calculations.\r\n\r\nI'd suggest ensure these functions support:\r\n\r\n* float4, float8\r\n* int2, int4, int8\r\n\r\nThere's probably some nuance of how we document this too, given our \r\ndocs[1] specify real / double precision, and smallint, int, bigint.\r\n\r\n(Separately, we mention the int2/int4/int8 aliases in [1], but not \r\nfloat4/float8, which seems like a small addition we should make).\r\n\r\nIf you agree, I'd be happy to review more closely once that's implemented.\r\n\r\nOther things:\r\n\r\n* kmeans -- we're using kmeans++, should the function name reflect that? \r\nDo you think we could end up with a different kmeans algo in the future? \r\nMaybe we allow the user to specify the kmeans algo from the function \r\nname (with the default / only option today being kmeans++)?\r\n\r\n> * 0003 adds support for optionally using the OpenBLAS library, which is an\r\n> implementation of the Basic Linear Algebra Subprograms [2]\r\n> specification. Basic testing with this library showed a small\r\n> performance boost, although perhaps not enough to justify giving this\r\n> patch serious consideration.\r\n\r\nIt'd be good to see what else we could use OpenBLAS with. Maybe that's a \r\ndiscussion for PGCon.\r\n\r\n> Of course, there are many open questions. For example, should PostgreSQL\r\n> support this stuff out-of-the-box in the first place? \r\n\r\nYes :) One can argue an extension (and pgvector[2] already does a lot \r\nhere), but I think native support would be generally helpful for users. \r\nIt does remove the friction of starting out.\r\n\r\nThere's also an interesting use-case downthread (I'll comment on that \r\nthere) that demonstrates why it's helpful to have variability in vector \r\nsize in an ARRAY column, which is an argument for supporting it there.\r\n\r\n> And should we\r\n> introduce a vector data type or SQL domains for treating float8[] as\r\n> vectors? 
IMHO these vector search use-cases are an exciting opportunity\r\n> for the PostgreSQL project, so I am eager to hear what folks think.\r\n\r\nHaving a vector type could give us some advantages in how we \r\nstore/search over the data. For example, we perform validation checks up \r\nfront, normalize the vector, etc. and any index implementations will \r\nhave less work to do on that front. We may also be able to give more \r\noptions to tune how the vector is stored, e.g. perform inversion on \r\ninsert/update.\r\n\r\nAgain, it's a fair argument that this can be done in an extension, but \r\nhistorically we've seen reduced friction when we add support in core. \r\nIt'd also make building additional functionality easier, whether in core \r\nor an extension.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://www.postgresql.org/docs/current/datatype-numeric.html\r\n[2] https://github.com/pgvector/pgvector", "msg_date": "Fri, 26 May 2023 10:24:21 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: vector search support" }, { "msg_contents": "On 5/25/23 1:48 PM, Oliver Rice wrote:\r\n\r\n> A nice side effect of using the float8[] to represent vectors is that it \r\n> allows for vectors of different sizes to coexist in the same column.\r\n> \r\n> We most frequently see (pgvector) vector columns being used for storing \r\n> ML embeddings. Given that different models produce embeddings with a \r\n> different number of dimensions, the need to specify a vector’s size in \r\n> DDL tightly couples the schema to a single model. Support for variable \r\n> length vectors would be a great way to decouple those concepts. It would \r\n> also be a differentiating feature from existing vector stores.\r\n\r\nI hadn't thought of that, given most of what I've seen (or at least my \r\npersonal bias in designing systems) is you keep a vector of one \r\ndimensionality in a column. 
But this sounds like where having native \r\nsupport in a variable array would help.\r\n\r\n> One drawback is that variable length vectors complicates indexing for \r\n> similarity search because similarity measures require vectors of \r\n> consistent length. Partial indexes are a possible solution to that challenge\r\n\r\nYeah, that presents a challenge. This may also be an argument for a \r\nvector data type, since that would eliminate the need to check for \r\nconsistent dimensionality on the indexing.\r\n\r\nJonathan", "msg_date": "Fri, 26 May 2023 10:32:18 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: vector search support" }, { "msg_contents": "On 4/26/23 9:31 AM, Giuseppe Broccolo wrote:\r\n> Hi Nathan,\r\n> \r\n> I find the patches really interesting. Personally, as Data/MLOps \r\n> Engineer, I'm involved in a project where we use embedding techniques to \r\n> generate vectors from documents, and use clustering and kNN searches to \r\n> find similar documents basing on spatial neighbourhood of generated \r\n> vectors.\r\n\r\nThanks! This seems to be a pretty common use-case these days.\r\n\r\n> We finally opted for ElasticSearch as search engine, considering that it \r\n> was providing what we needed:\r\n> \r\n> * support to store dense vectors\r\n> * support for kNN searches (last version of ElasticSearch allows this)\r\n\r\nI do want to note that we can implement indexing techniques with GiST \r\nthat perform K-NN searches with the \"distance\" support function[1], so \r\nadding the fundamental functions to help with this around known vector \r\nsearch techniques could add this functionality. 
We already have this \r\ntoday with \"cube\", but as Nathan mentioned, it's limited to 100 dims.\r\n\r\n> An internal benchmark showed us that we were able to achieve the \r\n> expected performance, although we are still lacking some points:\r\n> \r\n> * clustering of vectors (this has to be done outside the search engine, \r\n> using DBScan for our use case)\r\n\r\n From your experience, have you found any particular clustering \r\nalgorithms better at driving a good performance/recall tradeoff?\r\n\r\n> * concurrency in updating the ElasticSearch indexes storing the dense \r\n> vectors\r\n\r\nI do think concurrent updates of vector-based indexes is one area \r\nPostgreSQL can ultimately be pretty good at, whether in core or in an \r\nextension.\r\n\r\n> I found these patches really interesting, considering that they would \r\n> solve some of open issues when storing dense vectors. Index support \r\n> would help a lot with searches though.\r\n\r\nGreat -- thanks for the feedback,\r\n\r\nJonathan\r\n\r\n[1] https://www.postgresql.org/docs/devel/gist-extensibility.html", "msg_date": "Fri, 26 May 2023 10:37:57 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: vector search support" }, { "msg_contents": "Hi Jonathan,\n\nOn 5/26/23 3:38 PM, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n\n> On 4/26/23 9:31 AM, Giuseppe Broccolo wrote:\n> > We finally opted for ElasticSearch as search engine, considering that it\n> > was providing what we needed:\n> >\n> > * support to store dense vectors\n> > * support for kNN searches (last version of ElasticSearch allows this)\n>\n> I do want to note that we can implement indexing techniques with GiST\n> that perform K-NN searches with the \"distance\" support function[1], so\n> adding the fundamental functions to help with this around known vector\n> search techniques could add this functionality. 
We already have this\n> today with \"cube\", but as Nathan mentioned, it's limited to 100 dims.\n>\n\nYes, I was aware of this. It would be enough to define the required support\nfunctions for GiST\nindexing (I was a bit in the loop when it was tried to add PG14 presorting\nsupport to GiST indexing\nin PostGIS[1]). That would be really helpful indeed. I was just mentioning\nit because I know about\nother teams using ElasticSearch as a storage of dense vectors only for this.\n\n\n> > An internal benchmark showed us that we were able to achieve the\n> > expected performance, although we are still lacking some points:\n> >\n> > * clustering of vectors (this has to be done outside the search engine,\n> > using DBScan for our use case)\n>\n> From your experience, have you found any particular clustering\n> algorithms better at driving a good performance/recall tradeoff?\n>\n\nNope, it really depends on the use case: the point of using DBScan above\nwas mainly because it's a way of clustering without knowing a priori the\nnumber\nof clusters the algorithm should be able to retrieve, which is actually a\nparameter\nneeded for Kmeans. Depending on the use case, DBScan might have better\nperformance in noisy datasets (i.e. entries that really do not belong to a\ncluster in\nparticular). Noise in vectors obtained with embedding models is quite\nnormal,\nespecially when the embedding model is not properly tuned/trained.\n\nIn our use case, DBScan was more or less the best choice, without biasing\nthe\nexpected clusters.\n\nAlso PostGIS includes an implementation of DBScan for its geometries[2].\n\n\n> > * concurrency in updating the ElasticSearch indexes storing the dense\n> > vectors\n>\n> I do think concurrent updates of vector-based indexes is one area\n> PostgreSQL can ultimately be pretty good at, whether in core or in an\n> extension.\n\n\nOh, it would save a lot of overhead in updating indexed vectors! 
It's\nsomething needed\nwhen embedding models are re-trained, vectors are re-generated and indexes\nneed to\nbe updated.\n\nRegards,\nGiuseppe.\n\n[1]\nhttps://github.com/postgis/postgis/blob/a4f354398e52ad7ed3564c47773701e4b6b87ae8/doc/release_notes.xml#L284\n[2]\nhttps://github.com/postgis/postgis/blob/ce75a0e81aec2e8a9fad2649ff7b230327acb64b/postgis/lwgeom_window.c#L117", "msg_date": "Mon, 29 May 2023 14:18:03 +0100", "msg_from": "Giuseppe Broccolo <g.broccolo.7@gmail.com>", "msg_from_op": false, "msg_subject": "Re: vector search support" }, { "msg_contents": "Hi Nathan,\n\nI noticed you implemented a closest_vector function which returns the\nclosest vector to a given one using the\nEuclidean distance: would it make sense to change the implementation in\norder to include also different distance\ndefinitions rather than the Euclidean one (for instance, cosine\nsimilarity)? Depending on the use cases, some\nmetrics could make more sense than others.\n\nGiuseppe.\n\nOn 4/22/23 1:07 AM, Nathan Bossart <nathandbossart@gmail.com> wrote:\n\n> Attached is a proof-of-concept/work-in-progress patch set that adds\n> functions for \"vectors\" represented with one-dimensional float8 arrays.\n> These functions may be used in a variety of applications, but I am\n> proposing them with the AI/ML use-cases in mind. I am posting this early\n> in the v17 cycle in hopes of gathering feedback prior to PGCon.\n>\n> With the accessibility of AI/ML tools such as large language models (LLMs),\n> there has been a demand for storing and manipulating high-dimensional\n> vectors in PostgreSQL, particularly around nearest-neighbor queries. Many\n> of these vectors have more than 1500 dimensions. The cube extension [0]\n> provides some of the distance functionality (e.g., taxicab, Euclidean, and\n> Chebyshev), but it is missing some popular functions (e.g., cosine\n> similarity, dot product), and it is limited to 100 dimensions. We could\n> extend cube to support more dimensions, but this would require reworking\n> its indexing code and filling in gaps between the cube data type and the\n> array types. 
For some previous discussion about using the cube extension\n> for this kind of data, see [1].\n>\n> float8[] is well-supported and allows for effectively unlimited dimensions\n> of data. float8 matches the common output format of many AI embeddings,\n> and it allows us or extensions to implement indexing methods around these\n> functions. This patch set does not yet contain indexing support, but we\n> are exploring using GiST or GIN for the use-cases in question. It might\n> also be desirable to add support for other linear algebra operations (e.g.,\n> operations on matrices). The attached patches likely only scratch the\n> surface of the \"vector search\" use-case.\n>\n> The patch set is broken up as follows:\n>\n> * 0001 does some minor refactoring of dsqrt() in preparation for 0002.\n> * 0002 adds several vector-related functions, including distance functions\n> and a kmeans++ implementation.\n> * 0003 adds support for optionally using the OpenBLAS library, which is an\n> implementation of the Basic Linear Algebra Subprograms [2]\n> specification. Basic testing with this library showed a small\n> performance boost, although perhaps not enough to justify giving this\n> patch serious consideration.\n>\n> Of course, there are many open questions. For example, should PostgreSQL\n> support this stuff out-of-the-box in the first place? And should we\n> introduce a vector data type or SQL domains for treating float8[] as\n> vectors? 
IMHO these vector search use-cases are an exciting opportunity\n> for the PostgreSQL project, so I am eager to hear what folks think.\n>\n> [0] https://www.postgresql.org/docs/current/cube.html\n> [1] https://postgr.es/m/2271927.1593097400%40sss.pgh.pa.us\n> [2] https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>", "msg_date": "Mon, 29 May 2023 14:51:30 +0100", "msg_from": "Giuseppe Broccolo <g.broccolo.7@gmail.com>", "msg_from_op": false, "msg_subject": "Re: vector search support" } ]
[ { "msg_contents": "Hi,\n\nI think there could be a mistake in freespace/README. There are\nseveral places where it says about ~4000 slots per FSM page for the\ndefault BLKSZ:\n\n\"\"\"\nFor example, assuming each FSM page can hold information about 4 pages (in\nreality, it holds (BLCKSZ - headers) / 2, or ~4000 with default BLCKSZ),\n\"\"\"\n\nand:\n\n\"\"\"\nTo keep things simple, the tree is always constant height. To cover the\nmaximum relation size of 2^32-1 blocks, three levels is enough with the default\nBLCKSZ (4000^3 > 2^32).\n\"\"\"\n\nLet's determine the amount of levels in each FSM page first. I'm going\nto use Python for this. Note that range(0,13) returns 13 numbers from\n0 to 12:\n\n```\n>>> sum([pow(2,n) for n in range(0,13) ])\n8191\n>>> 8*1024\n8192\n```\n\n13 levels are not going to fit since we need extra 24 bytes per\nPageHeaderData and a few more bytes for an int value fp_next_slot.\nWhich gives us 12 levels and the number of slots:\n\n```\n>>> # there are pow(2,0) == 1 byte on the 1st level of the tree\n>>> pow(2,12 - 1)\n2048\n```\n\nThe number of levels in the entire, high-level tree, seems to be correct:\n\n```\n>>> pow(2048,3) > pow(2,32) - 1\nTrue\n```\n\nHopefully I didn't miss or misunderstood anything.\n\nThoughts?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sat, 22 Apr 2023 12:58:40 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Mistake in freespace/README?" }, { "msg_contents": "Hi,\n\n> Hopefully I didn't miss or misunderstood anything.\n\nAnd for sure I did. 
Particularly the fact that the tree inside the FSM\npage is not a perfect binary tree:\n\n\"\"\"\n 0\n 1 2\n 3 4 5 6\n7 8 9 A B\n\"\"\"\n\nSo there are 13 levels and approximately 4K slots per FSM page after all:\n\n```\n>>> 8*1024-24-4 - sum([pow(2,n) for n in range(0,12) ])\n4069\n```\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sat, 22 Apr 2023 13:21:46 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Mistake in freespace/README?" } ]
[ { "msg_contents": "This is a proposal for a new transaction characteristic. I haven't\nwritten any code, yet, and am interested in hearing if others may find\nthis feature useful.\n\nMany a times we start a transaction that we never intend to commit;\nfor example, for testing, or for EXPLAIN ANALYZE, or after detecting\nunexpected results but still interested in executing more commands\nwithout risking commit, etc.\n\nA user would like to declare their intent to eventually abort the\ntransaction as soon as possible, so that the transaction does not\naccidentally get committed.\n\nThis feature would allow the user to mark a transaction such that it\ncan never be committed. We must allow such marker to be placed when\nthe transaction is being started, or while it's in progress.\n\nOnce marked uncommittable, do not allow the marker to be removed.\nHence, once deemed uncommittable, the transaction cannot be committed,\neven intentionally. This protects against cases where one script\nincludes another (e.g. psql's \\i command), and the included script may\nhave statements that turn this marker back on.\n\nAny command that ends a transaction (END, COMMIT, ROLLBACK) must\nresult in a rollback.\n\nAll of these properties seem useful for savepoints, too. 
But I want to\nfocus on just the top-level transactions, first.\n\nI feel like the BEGIN and SET TRANSACTION commands would be the right\nplaces to introduce this feature.\n\nBEGIN [ work | transaction ] [ [ NOT ] COMMITTABLE ];\nSET TRANSACTION [ [ NOT ] COMMITTABLE ];\n\nI'm not yet sure if the COMMIT AND CHAIN command should carry this\ncharacteristic to the next transaction.\n\nThoughts?\n\nBest regards,\nGurjeet https://Gurje.et\nhttp://aws.amazon.com\n\n\n", "msg_date": "Sat, 22 Apr 2023 08:01:04 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Mark a transaction uncommittable" }, { "msg_contents": "On Sat, 22 Apr 2023 at 11:01, Gurjeet Singh <gurjeet@singh.im> wrote:\n\n> This is a proposal for a new transaction characteristic. I haven't\n> written any code, yet, and am interested in hearing if others may find\n> this feature useful.\n>\n> Many a times we start a transaction that we never intend to commit;\n> for example, for testing, or for EXPLAIN ANALYZE, or after detecting\n> unexpected results but still interested in executing more commands\n> without risking commit, etc.\n>\n> A user would like to declare their intent to eventually abort the\n> transaction as soon as possible, so that the transaction does not\n> accidentally get committed.\n>\n\nI have an application for this: creating various dev/test versions of data\nfrom production.\n\nStart by restoring a copy of production from backup. Then successively\ncreate several altered versions of the data and save them to a place where\ndevelopers can pick them up. For example, you might have one version which\nhas all data old than 1 year deleted, and another where 99% of the\nstudents/customers/whatever are deleted. Anonymization could also be\napplied. 
This would give you realistic (because it ultimately originates\nfrom production) test data.\n\nThis could be done by starting a non-committable transaction, making the\nadjustments, then doing a pg_dump in the same transaction (using --snapshot\nto allow it to see that transaction). Then rollback, and repeat for the\nother versions. This saves repeatedly restoring the (probably very large)\nproduction data each time.\n\nWhat I’m not sure about is how long it takes to rollback a transaction. I'm\nassuming that it’s very quick compared to restoring from backup.\n\nIt would be nice if pg_basebackup could also have the --snapshot option.", "msg_date": "Sat, 22 Apr 2023 12:53:23 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Mark a transaction uncommittable" }, { "msg_contents": "Hi,\n\nOn Sat, Apr 22, 2023 at 12:53:23PM -0400, Isaac Morland wrote:\n>\n> I have an application for this: creating various dev/test versions of data\n> from production.\n>\n> Start by restoring a copy of production from backup. Then successively\n> create several altered versions of the data and save them to a place where\n> developers can pick them up. For example, you might have one version which\n> has all data old than 1 year deleted, and another where 99% of the\n> students/customers/whatever are deleted. Anonymization could also be\n> applied. This would give you realistic (because it ultimately originates\n> from production) test data.\n>\n> This could be done by starting a non-committable transaction, making the\n> adjustments, then doing a pg_dump in the same transaction (using --snapshot\n> to allow it to see that transaction). Then rollback, and repeat for the\n> other versions. This saves repeatedly restoring the (probably very large)\n> production data each time.\n\nThere already are tools to handle those use cases. 
Looks for instance at\nhttps://github.com/mla/pg_sample to backup a consistent subset of the data, or\nhttps://github.com/rjuju/pg_anonymize to transparently pg_dump (or\ninteractively query) anonymized data.\n\nBoth tool also works when connected on a physical standby, while trying to\nupdate data before dumping them wouldn't.\n\n\n", "msg_date": "Sun, 23 Apr 2023 07:33:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Mark a transaction uncommittable" }, { "msg_contents": "On Sat, Apr 22, 2023 at 8:01 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> This is a proposal for a new transaction characteristic. I haven't\n> written any code, yet, and am interested in hearing if others may find\n> this feature useful.\n\nPlease see attached the patch that introduces this new feature. The\npatch includes all the code changes that I could foresee; that is, it\nincludes server changes as well as changes for psql's auto-completion.\nThe patch does not include doc changes, or any tests. Please see a\nsample session at the end of the email, though.\n\nThe patch introduces a new keyword, COMMITTABLE, and uses that to\nintroduce the new transaction attribute via BEGIN [TRANSACTION] and\nSET TRANSACTION commands. The new code follows the semantics of the\n[NOT] DEFERRABLE attribute of a transaction almost exactly.\n\n> Many a times we start a transaction that we never intend to commit;\n> for example, for testing, or for EXPLAIN ANALYZE, or after detecting\n> unexpected results but still interested in executing more commands\n> without risking commit, etc.\n>\n> A user would like to declare their intent to eventually abort the\n> transaction as soon as possible, so that the transaction does not\n> accidentally get committed.\n>\n> This feature would allow the user to mark a transaction such that it\n> can never be committed. 
We must allow such marker to be placed when\n> the transaction is being started, or while it's in progress.\n\nThe user can mark the transaction as uncommittable either when\nstarting the transaction, or while it is still in progress.\n\n> Once marked uncommittable, do not allow the marker to be removed.\n> Hence, once deemed uncommittable, the transaction cannot be committed,\n> even intentionally. This protects against cases where one script\n> includes another (e.g. psql's \\i command), and the included script may\n> have statements that turn this marker back on.\n\nAlthough the patch implements this desired behavior from the initial\nproposal, I'm no longer convinced of the need to prevent user from\nre-enabling committability of the transaction.\n\n> Any command that ends a transaction (END, COMMIT, ROLLBACK) must\n> result in a rollback.\n>\n> All of these properties seem useful for savepoints, too. But I want to\n> focus on just the top-level transactions, first.\n\nThe patch does not change any behaviour related to savepoints. Having\nmade it work for the top-level transaction, and having seen the\nsavepoint/subtransaction code as I came across it as I developed this\npatch, I feel that it will be very tricky to implement this behavior\nsafely for savepoints. Moreover, having thought more about the\npossible use cases, I don't think implementing uncommittability of\nsavepoints will be of much use in the real world.\n\n> I feel like the BEGIN and SET TRANSACTION commands would be the right\n> places to introduce this feature.\n>\n> BEGIN [ work | transaction ] [ [ NOT ] COMMITTABLE ];\n> SET TRANSACTION [ [ NOT ] COMMITTABLE ];\n\nI tried to avoid adding a new keyword (COMMITTABLE) to the grammar,\nbut could not think of a better alternative. E.g. 
DISALLOW COMMIT\nsounds like a good alternative, but DISALLOW is currently not a\nkeyword, so this form doesn't buy us anything.\n\n> I'm not yet sure if the COMMIT AND CHAIN command should carry this\n> characteristic to the next transaction.\n\nAfter a little consideration, in the spirit of POLA, I have not done\nanything special to change the default behaviour of COMMIT/ROLLBACK\nAND CHAIN.\n\nAny feedback is welcome. Please see below an example session\ndemonstrating this feature.\n\npostgres=# begin transaction committable;\nBEGIN\n\npostgres=# commit;\nCOMMIT\n\npostgres=# begin transaction not committable;\nBEGIN\n\npostgres=# commit;\nWARNING: transaction is not committable\nROLLBACK\n\npostgres=# begin transaction not committable;\nBEGIN\n\npostgres=# set transaction_committable = true;\n-- for clarity, we may want to emit additional \"WARNING: cannot make\ntransaction committable\", although the patch currently doesn't do so.\nERROR: invalid value for parameter \"transaction_committable\": 1\n\npostgres=# commit;\nROLLBACK\n\npostgres=# begin transaction not committable;\nBEGIN\n\npostgres=# set transaction committable ;\n-- for clarity, we may want to emit additional \"WARNING: cannot make\ntransaction committable\", although the patch currently doesn't do so.\nERROR: invalid value for parameter \"transaction_committable\": 1\n\npostgres=# set transaction committable ;\nERROR: current transaction is aborted, commands ignored until end of\ntransaction block\n\npostgres=# commit;\nROLLBACK\n\nBest regards,\nGurjeet\nhttp://Gurje.et", "msg_date": "Mon, 5 Jun 2023 00:22:15 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: Mark a transaction uncommittable" }, { "msg_contents": "On Mon, 2023-06-05 at 00:22 -0700, Gurjeet Singh wrote:\n> On Sat, Apr 22, 2023 at 8:01 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> > \n> > This is a proposal for a new transaction characteristic. 
I haven't\n> > written any code, yet, and am interested in hearing if others may find\n> > this feature useful.\n> \n> Please see attached the patch that introduces this new feature.\n\nCan you explain why *you* would find this feature useful?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 05 Jun 2023 09:32:33 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Mark a transaction uncommittable" }, { "msg_contents": "On Mon, Jun 5, 2023 at 12:32 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Mon, 2023-06-05 at 00:22 -0700, Gurjeet Singh wrote:\n> > On Sat, Apr 22, 2023 at 8:01 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> > >\n> > > This is a proposal for a new transaction characteristic. I haven't\n> > > written any code, yet, and am interested in hearing if others may find\n> > > this feature useful.\n> >\n> > Please see attached the patch that introduces this new feature.\n>\n> Can you explain why *you* would find this feature useful?\n\nThe idea came to me while I was reading a blog post, where the author\nhad to go to great lengths to explain to the reader why the queries\nwould be disastrous, if run on production database, and that they\nshould run those queries inside a transaction, and they _must_\nrollback the transaction.\n\nHaving written my fair share of tutorials, and blogs, I know how\nhelpful it would be to start a transaction that the reader can't\naccidentally commit.\n\nAs others have noted in this thread, this feature can be useful in\nother situations, as well, like when you're trying to export a\nsanitized copy of a production database. 
Especially in such a\nsituation you do not want those sanitization operations to ever be\ncommitted on the source database.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Mon, 5 Jun 2023 13:24:26 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: Mark a transaction uncommittable" }, { "msg_contents": "On Sat, Apr 22, 2023 at 4:33 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Sat, Apr 22, 2023 at 12:53:23PM -0400, Isaac Morland wrote:\n> >\n> > I have an application for this: creating various dev/test versions of data\n> > from production.\n> >\n> > Start by restoring a copy of production from backup. Then successively\n> > create several altered versions of the data and save them to a place where\n> > developers can pick them up. For example, you might have one version which\n> > has all data old than 1 year deleted, and another where 99% of the\n> > students/customers/whatever are deleted. Anonymization could also be\n> > applied. This would give you realistic (because it ultimately originates\n> > from production) test data.\n> >\n> > This could be done by starting a non-committable transaction, making the\n> > adjustments, then doing a pg_dump in the same transaction (using --snapshot\n> > to allow it to see that transaction). Then rollback, and repeat for the\n> > other versions. This saves repeatedly restoring the (probably very large)\n> > production data each time.\n>\n> There already are tools to handle those use cases. 
Looks for instance at\n> https://github.com/mla/pg_sample to backup a consistent subset of the data, or\n> https://github.com/rjuju/pg_anonymize to transparently pg_dump (or\n> interactively query) anonymized data.\n>\n> Both tool also works when connected on a physical standby, while trying to\n> update data before dumping them wouldn't.\n\nLike everything in life, there are pros and cons to every approach.\n\npg_anonymize is an extension that may not be installed on the database\nyou're working with. And pg_sample (and similar utilities) may not\nhave a way to extract or sanitize the exact data you want.\n\nWith this feature built into Postgres, you'd not need any external\nutilities or extensions. The benefits of features built into Postgres\nare that the users can come up with ways of leveraging such a feature\nin future in a way that we can't envision today.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Mon, 5 Jun 2023 13:27:52 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: Mark a transaction uncommittable" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHello\r\n\r\nIt is one of those features that a handful of people would find useful in specific user cases. I think it is a nice to have feature that safeguards your production database against unwanted commits when troubleshooting production problems. Your patch applies fine on master and I am able to run a couple tests on it and it seems to do as described. 
I noticed that the patch has a per-session variable \"default_transaction_committable\" that could make all transaction committable or uncommittable on the session even without specifying \"begin transaction not committable;\" I am wondering if we should have a configurable default at all as I think it should always defaults to true and unchangable. If an user wants a uncommittable transaction, he/she will need to explicitly specify that during \"begin\". Having another option to change default behavior for all transactions may be a little unsafe, it is possible someone could purposely change this default to false on a production session that needs transactions to absolutely commit, causing damages there. \r\n\r\nthank you\r\nCary Huang\r\n------------------\r\nHighgo Software Canada\r\nwww.highgo.ca", "msg_date": "Tue, 06 Jun 2023 19:03:31 +0000", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Mark a transaction uncommittable" }, { "msg_contents": "> On 5 Jun 2023, at 09:22, Gurjeet Singh <gurjeet@singh.im> wrote:\n\n> Please see attached the patch that introduces this new feature.\n\nThis patch fails the guc_check test due to something missing in the sample\nconfiguration. Please send a rebased version with this test fixed.\n\n[08:31:26.680] --- stdout ---\n[08:31:26.680] # executing test in /tmp/cirrus-ci-build/build/testrun/test_misc/003_check_guc group test_misc test 003_check_guc\n[08:31:26.680] not ok 1 - no parameters missing from postgresql.conf.sample\n[08:31:26.680] ok 2 - no parameters missing from guc_tables.c\n[08:31:26.680] ok 3 - no parameters marked as NOT_IN_SAMPLE in postgresql.conf.sample\n[08:31:26.680] 1..3\n[08:31:26.680] # test failed\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 3 Jul 2023 16:07:53 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Mark a transaction uncommittable" } ]
[ { "msg_contents": "Hi,\n\nThe best way to restrict a query on many columns is to have an index on these\ncolumns.\nBUT with faster and faster IOPS, is it not already feasible to make\nmerge joins on indexes? Then it could be possible to have just an index on\nsingle columns\nfor more rare queries. An index which has the column and the oid/primary key.\nThen make a merge join on all columns, restricted/sorted.\nSame for full text search (word, relevance, id)\n\nBest regards,\n   Marek Mosiewicz", "msg_date": "Sat, 22 Apr 2023 17:15:35 +0200", "msg_from": "Marek Mosiewicz <marek.mosiewicz@jotel.com.pl>", "msg_from_op": true, "msg_subject": "Index merge join" } ]
[ { "msg_contents": "\"$SUBJECT\" may sound like a Zen kōan, but buffile.c does a few useful\nthings other than buffering (I/O time tracking, inter-process file\nexchange though file sets, segmentation, etc). A couple of places\nsuch as logtape.c, sharedtuplestore.c and gistbuildbuffers.c do\nblock-oriented I/O in temporary files, but unintentionally also pay\nthe price of buffile.c's buffering scheme for no good reason, namely:\n\n * extra memory for the buffer\n * extra memcpy() to/from that buffer\n * I/O chopped up into BLCKSZ-sized system calls\n\nMelanie and I were bouncing some ideas around about the hash join\nbatch memory usage problem, and I thought it might be good to write\ndown these thoughts about lower level buffer management stuff in a new\nthread. They don't address the hard algorithmic issues in that topic\n(namely: at some point, doubling the number of partition files will\ntake more buffer space than can be saved by halving the size of the\nhash table, parallel or not), but we probably should remove a couple\nof the constants that make the parallel case worse. As a side benefit\nit would be nice to also fix some obvious silly efficiency problems\nfor all users of BufFiles.\n\nFirst, here is a toy patch to try to skip the buffering layer in\nBufFile when possible, without changing any interfaces. Other ways\nwould of course be possible, including a new separate module with a\ndifferent name. The approach taken here is to defer allocation of\nBufFile::buffer, and use a fast path that just passes I/O straight\nthrough to the OS without touching the sides, until the first\nnon-block-aligned I/O is seen, at which point we switch to the less\nefficient mode. 
Don't take this code too seriously, and I'm not sure\nwhat to think about short reads/writes yet; it's just a cheap demo to\naccompany this problem statement.\n\nWe could go further (and this one is probably more important).\nSharedTuplestore works in 32KB chunks, which are the \"parallel grain\"\nfor distributing data to workers that read the data back later\n(usually the workers try to read from different files to avoid\nstepping on each others' toes, but when there aren't enough files left\nthey gang up and read from one file in parallel, and we need some way\nto scatter the data to different workers; these chunks are analogous\nto the heap table blocks handed out by Parallel Seq Scan). With the\nbufferless-BufFile optimisation, data is written out directly from\nsharedtuplestore.c's buffer to the OS in a 32KB pwrite(), which is a\nnice improvement, but that's 32KB of buffer space that non-parallel HJ\ndoesn't have. We could fix that by making the chunk size dynamic, and\nturning it down as low as 8KB when the number of partitions shoots up\nto silly numbers by some heuristics. (I guess the buffer size and the\nchunk size don't have to be strictly the same, but it'd probably make\nthe code simpler...) With those changes, PHJ would use strictly no\nmore buffer memory than HJ, unless npartitions is fairly low when it\nshould be OK to do so (and we could even consider making it larger\nsometimes for bigger I/Os).\n\n(Obviously it's just terrible in every way when we have very high\nnumbers of partitions, and this just mitigates one symptom. For\nexample, once we get to our max fds limit, we're re-opening and\nre-closing files at high speed, which is bonkers, and the kernel has\nan identical buffer space management problem on its side of the wall.\nIn the past we've talked about something a bit more logtape.c-like\n(multiple streams in one file with our own extent management) which\nmight help a bit. But... 
it feels like what we really need is a\nhigher level algorithmic solution that lets us cap the number of\npartitions, as chronicled elsewhere.)\n\nWe could go further again. The above concerns the phase where we're\nwriting data out to many partitions, which requires us to have\nin-memory state for all of them. But during the reading back phase,\nwe read from just a single partition at a time, so we don't have the\nmemory explosion problem. For dumb implementation reasons\n(essentially because it was styled on the older pre-parallel HJ code),\nSharedTuplestore reads one tuple at a time from buffile.c with\nBufFileRead(), so while reading it *needs* BufFile's internal\nbuffering, which means 8KB pread() calls, no optimisation for you. We\ncould change it to read in whole 32KB chunks at a time (or dynamic\nsize as mentioned already), via the above faster bufferless path. The\nnext benefit would be that, with whole chunks in *its* buffer, it\ncould emit MinimalTuples using a pointer directly into that buffer,\nskipping another memcpy() (unless the tuple is too big and involves\noverflow chunks, a rare case). (Perhaps there could even be some\nclever way to read whole chunks directly into the hash table memory\nchunks at once, instead of copying tuples in one-by-one, not\nexamined.)\n\nAnother closely related topic that came up in other recent I/O work:\nHow can we arrange for all of these buffers to be allocated more\nefficiently? In fact we know how many partitions there are at a\nhigher level, and we'd ideally make one\npalloc_aligned(PG_IO_ALIGN_SIZE) for all of them, instead of many\nindividual smaller pallocs. 
Aside from wasted CPU cycles, individual\npallocs have invisible space overheads at both libc and palloc level,\nand non-I/O-aligned buffers have invisible costs at I/O system call\nlevel (the kernel must hold/pin/something an extra neighbouring VM\npage while copying in/out of user space, if not nicely aligned).\nHowever, palloc_aligned()'s strategy is to over-allocate by 4KB and\nthen round the pointer up, so we'd ideally like to do that just once\nfor a whole array, not once for each BufFile (you can see this problem\nin the attached patch, which does that that for individual BufFile\nobjects, but it'd make the problem Jehan-Guillaume wrote about\nslightly worse, which is why I didn't do that in commit faeedbcef\n\"Introduce PG_IO_ALIGN_SIZE and align all I/O buffers.\"). This seems\nto suggest a different design where the client code can optionally\nprovide its own buffer space, perhaps something as simple as\nBufFileSetBuffer(), and we'd need to make one big allocation and pass\nin pointers to its elements for the raw batch files (for HJ), and come\nup with something similar for SharedTuplestore's internal chunk buffer\n(for PHJ)).", "msg_date": "Sun, 23 Apr 2023 13:09:14 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Bufferless buffered files" } ]
[ { "msg_contents": "Hi\r\n\r\nmaybe I found a bug in xmlserialize\r\n\r\nSELECT xmlserialize(DOCUMENT '<foo><bar><val x=\"y\">42</val></bar></foo>' AS\r\nvarchar INDENT);\r\n\r\n(2023-04-23 07:27:53) postgres=# SELECT xmlserialize(DOCUMENT\r\n'<foo><bar><val x=\"y\">42</val></bar></foo>' AS varchar INDENT);\r\n┌─────────────────────────┐\r\n│ xmlserialize │\r\n╞═════════════════════════╡\r\n│ <foo> ↵│\r\n│ <bar> ↵│\r\n│ <val x=\"y\">42</val>↵│\r\n│ </bar> ↵│\r\n│ </foo> ↵│\r\n│ │\r\n└─────────────────────────┘\r\n(1 row)\r\n\r\nLooks so there is an extra empty row.\r\n\r\nRegards\r\n\r\nPavel", "msg_date": "Sun, 23 Apr 2023 07:31:04 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "xmlserialize bug - extra empty row at the end" }, { "msg_contents": "On 23.04.23 07:31, Pavel Stehule wrote:\n> Hi\n>\n> maybe I found a bug in xmlserialize\n>\n> SELECT xmlserialize(DOCUMENT '<foo><bar><val \n> x=\"y\">42</val></bar></foo>' AS varchar INDENT);\n>\n> (2023-04-23 07:27:53) postgres=# SELECT xmlserialize(DOCUMENT \n> '<foo><bar><val x=\"y\">42</val></bar></foo>' AS varchar INDENT);\n> ┌─────────────────────────┐\n> │      xmlserialize       │\n> ╞═════════════════════════╡\n> │ <foo>                  ↵│\n> │   <bar>                ↵│\n> │     <val x=\"y\">42</val>↵│\n> │   </bar>               ↵│\n> │ </foo>                 ↵│\n> │                         │\n> └─────────────────────────┘\n> (1 row)\n>\n> Looks so there is an extra empty row.\n>\n> Regards\n>\n> Pavel\n\nHi Pavel,\n\nGood catch! It looks like it comes directly from libxml2.\n\nxmlDocPtr doc = xmlReadDoc(BAD_CAST \"<foo><bar><val \nx=\\\"y\\\">42</val></bar></foo>\", NULL, NULL, 0 );\nxmlBufferPtr buf = NULL;\nxmlSaveCtxtPtr ctxt = NULL;\n\nbuf = xmlBufferCreate();\nctxt = xmlSaveToBuffer(buf, NULL, XML_SAVE_NO_DECL | XML_SAVE_FORMAT);\n\nxmlSaveDoc(ctxt, doc);\nxmlSaveClose(ctxt);\n\nprintf(\"'%s'\",buf->content);\n\n==>\n\n'<foo>\n   <bar>\n     <val x=\"y\">42</val>\n   </bar>\n</foo>\n'\n\nI'll do some digging to see if there is a good way to get rid of this \nnewline or if we need to chose a different dump function.\n\nThanks!\n\nBest, Jim\n\n\n\n", "msg_date": "Sun, 23 Apr 2023 14:02:17 +0200", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": false, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" }, { "msg_contents": "> On Sun, Apr 23, 2023 at 02:02:17PM +0200, Jim Jones wrote:\n> On 23.04.23 07:31, Pavel Stehule wrote:\n> > Hi\n> >\n> > maybe I found a bug in xmlserialize\n> >\n> > SELECT xmlserialize(DOCUMENT '<foo><bar><val x=\"y\">42</val></bar></foo>'\n> > AS varchar INDENT);\n> >\n> > (2023-04-23 07:27:53) postgres=# SELECT xmlserialize(DOCUMENT\n> > '<foo><bar><val x=\"y\">42</val></bar></foo>' AS varchar INDENT);\n> > ┌─────────────────────────┐\n> > │      xmlserialize       │\n> > ╞═════════════════════════╡\n> > │ <foo>                  ↵│\n> > │   <bar>                ↵│\n> > │     <val x=\"y\">42</val>↵│\n> > │   </bar>               ↵│\n> > │ </foo>                 ↵│\n> > │                         │\n> > └─────────────────────────┘\n> > (1 row)\n> >\n> > Looks so there is an extra empty row.\n> >\n> > Regards\n> >\n> > Pavel\n>\n> Hi Pavel,\n>\n> Good catch! 
It looks like it comes directly from libxml2.\n>\n> xmlDocPtr doc = xmlReadDoc(BAD_CAST \"<foo><bar><val\n> x=\\\"y\\\">42</val></bar></foo>\", NULL, NULL, 0 );\n> xmlBufferPtr buf = NULL;\n> xmlSaveCtxtPtr ctxt = NULL;\n>\n> buf = xmlBufferCreate();\n> ctxt = xmlSaveToBuffer(buf, NULL, XML_SAVE_NO_DECL | XML_SAVE_FORMAT);\n>\n> xmlSaveDoc(ctxt, doc);\n> xmlSaveClose(ctxt);\n>\n> printf(\"'%s'\",buf->content);\n>\n> ==>\n>\n> '<foo>\n>   <bar>\n>     <val x=\"y\">42</val>\n>   </bar>\n> </foo>\n> '\n\nIt looks like this happens only if xml type is DOCUMENT, and thus\nxmlSaveDoc is used to save the doc directly. I might be wrong, but after\na quick look at the corresponding libxml functionality it seems that a\nnew line is getting added if the element type is not XML_XINCLUDE_{START|END},\nwhich is unfortunate if correct.\n\n\n", "msg_date": "Sun, 23 Apr 2023 14:25:35 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" }, { "msg_contents": "On Sun, 23 Apr 2023 at 01:31, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> Hi\n>\n> maybe I found a bug in xmlserialize\n>\n> SELECT xmlserialize(DOCUMENT '<foo><bar><val x=\"y\">42</val></bar></foo>'\n> AS varchar INDENT);\n>\n> (2023-04-23 07:27:53) postgres=# SELECT xmlserialize(DOCUMENT\n> '<foo><bar><val x=\"y\">42</val></bar></foo>' AS varchar INDENT);\n> ┌─────────────────────────┐\n> │ xmlserialize │\n> ╞═════════════════════════╡\n> │ <foo> ↵│\n> │ <bar> ↵│\n> │ <val x=\"y\">42</val>↵│\n> │ </bar> ↵│\n> │ </foo> ↵│\n> │ │\n> └─────────────────────────┘\n> (1 row)\n>\n> Looks so there is an extra empty row.\n>\n\nI wouldn't necessarily worry about this much. There is not, as such, an\nextra blank line at the end; rather it is conventional that a text file\nshould end with a newline character. 
That is, conventionally every single\nline in a text file ends with a newline character, meaning the only text\nfile that doesn't end with a newline is the empty file. You can see this in\ntools like diff, which explicitly report \"no newline at end of file\" if the\nfile ends with a different character.\n\nIf you were to save the value to a file you would probably want it the way\nit is.\n\nThat being said, this is a database column result and I agree it would look\nmore elegant if the blank line in the display were not there. I might go so\nfar as to change the psql display routines to not leave a blank line\nafter the content in the event it ends with a newline.", "msg_date": "Sun, 23 Apr 2023 08:41:55 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" }, { "msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n> On 23.04.23 07:31, Pavel Stehule wrote:\n>> Looks so there is an extra empty row.\n\n> Good catch! It looks like it comes directly from libxml2.\n\nIs it really a bug? If libxml2 itself is putting in that newline,\nI'm not sure we should take it on ourselves to strip it.\n\n> I'll do some digging to see if there is a good way to get rid of this \n> newline or if we need to chose a different dump function.\n\nIf we do want to strip it, I'd just add a couple lines of code\nto delete any trailing newlines at the end of the process.\nCompare, eg, pchomp().\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Apr 2023 10:48:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" }, { "msg_contents": "ne 23. 4. 
2023 v 14:42 odesílatel Isaac Morland <isaac.morland@gmail.com>\nnapsal:\n\n> On Sun, 23 Apr 2023 at 01:31, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> maybe I found a bug in xmlserialize\n>>\n>> SELECT xmlserialize(DOCUMENT '<foo><bar><val x=\"y\">42</val></bar></foo>'\n>> AS varchar INDENT);\n>>\n>> (2023-04-23 07:27:53) postgres=# SELECT xmlserialize(DOCUMENT\n>> '<foo><bar><val x=\"y\">42</val></bar></foo>' AS varchar INDENT);\n>> ┌─────────────────────────┐\n>> │      xmlserialize       │\n>> ╞═════════════════════════╡\n>> │ <foo>                  ↵│\n>> │   <bar>                ↵│\n>> │     <val x=\"y\">42</val>↵│\n>> │   </bar>               ↵│\n>> │ </foo>                 ↵│\n>> │                         │\n>> └─────────────────────────┘\n>> (1 row)\n>>\n>> Looks so there is an extra empty row.\n>>\n>\n> I wouldn't necessarily worry about this much. There is not, as such, an\n> extra blank line at the end; rather it is conventional that a text file\n> should end with a newline character. That is, conventionally every single\n> line in a text file ends with a newline character, meaning the only text\n> file that doesn't end with a newline is the empty file. You can see this in\n> tools like diff, which explicitly report \"no newline at end of file\" if the\n> file ends with a different character.\n>\n> If you were to save the value to a file you would probably want it the way\n> it is.\n>\n> That being said, this is a database column result and I agree it would\n> look more elegant if the blank line in the display were not there. I might\n> go so far as to change the psql display routines to not leave a blank line\n> after the content in the event it ends with a newline.\n>\n\npsql shows to display content without changes. I don't think it should be\nfixed on the client side.\n\nBut it can be easily fixed on the server side. I think there is some code\nthat tries to clean the end lines already.\n\nregards\n\nPavel\n\n", "msg_date": "Sun, 23 Apr 2023 16:48:37 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" }, { "msg_contents": "ne 23. 4. 2023 v 16:48 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Jim Jones <jim.jones@uni-muenster.de> writes:\n> > On 23.04.23 07:31, Pavel Stehule wrote:\n> >> Looks so there is an extra empty row.\n>\n> > Good catch! It looks like it comes directly from libxml2.\n>\n> Is it really a bug?  If libxml2 itself is putting in that newline,\n> I'm not sure we should take it on ourselves to strip it.\n>\n\nMaybe It is not a bug, but it can be messy. \"json_pretty\" doesn't do it.\n\n\n\n>\n> > I'll do some digging to see if there is a good way to get rid of this\n> > newline or if we need to chose a different dump function.\n>\n> If we do want to strip it, I'd just add a couple lines of code\n> to delete any trailing newlines at the end of the process.\n> Compare, eg, pchomp().\n>\n\nyes\n\nPavel\n\n\n>\n>                         regards, tom lane\n>\n", "msg_date": "Sun, 23 Apr 2023 16:50:41 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> That being said, this is a database column result and I agree it would look\n> more elegant if the blank line in the display were not there.\n\nYeah, that would basically be the argument for editorializing on libxml2's\nresult.  It's a weak argument, but not entirely without merit.\n\n> I might go so\n> far as to change the psql display routines to not leave a blank line after\n> the content in the event it ends with a newline.\n\npsql has *no* business changing what it displays: if what came from the\nserver has a trailing newline, that had better be made visible.  Even if\nwe thought it was a good idea, it's about 25 years too late to reconsider\nthat. 
However, xmlserialize()'s new indenting behavior isn't set in\nstone yet, so we could contemplate chomping newlines within that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Apr 2023 10:52:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" }, { "msg_contents": "On Sun, 23 Apr 2023 at 10:52, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Isaac Morland <isaac.morland@gmail.com> writes:\n>\n\n\n> > I might go so\n> > far as to change the psql display routines to not leave a blank line\n> after\n> > the content in the event it ends with a newline.\n>\n> psql has *no* business changing what it displays: if what came from the\n> server has a trailing newline, that had better be made visible.  Even if\n> we thought it was a good idea, it's about 25 years too late to reconsider\n> that.  However, xmlserialize()'s new indenting behavior isn't set in\n> stone yet, so we could contemplate chomping newlines within that.\n\n\nThe trailing newline is made visible by the little bent arrow character\nthat appears at the right hand side. So you could still tell whether the\nvalue ended with a trailing newline. I agree that simply dropping the\ntrailing newline before displaying the value would be a very bad idea.\n", "msg_date": "Sun, 23 Apr 2023 12:03:00 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" }, { "msg_contents": "Dne ne 23. 4. 2023 18:03 uživatel Isaac Morland <isaac.morland@gmail.com>\nnapsal:\n\n> On Sun, 23 Apr 2023 at 10:52, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Isaac Morland <isaac.morland@gmail.com> writes:\n>>\n>\n>\n>> > I might go so\n>> > far as to change the psql display routines to not leave a blank line\n>> after\n>> > the content in the event it ends with a newline.\n>>\n>> psql has *no* business changing what it displays: if what came from the\n>> server has a trailing newline, that had better be made visible.  Even if\n>> we thought it was a good idea, it's about 25 years too late to reconsider\n>> that.  However, xmlserialize()'s new indenting behavior isn't set in\n>> stone yet, so we could contemplate chomping newlines within that.\n>\n>\n> The trailing newline is made visible by the little bent arrow character\n> that appears at the right hand side. So you could still tell whether the\n> value ended with a trailing newline. I agree that simply dropping the\n> trailing newline before displaying the value would be a very bad idea.\n>\n\nWhat is benefit or usage of this trailing newline?\n\nPersonally, when I do some formatting I prefer adding newlines on client\nside before necessity of removing.\n\n\n\n>\n", "msg_date": "Sun, 23 Apr 2023 18:28:35 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" }, { "msg_contents": "On Sun, 23 Apr 2023 at 12:28, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n>\n>\n> Dne ne 23. 4. 2023 18:03 uživatel Isaac Morland <isaac.morland@gmail.com>\n> napsal:\n>\n>> On Sun, 23 Apr 2023 at 10:52, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>>> Isaac Morland <isaac.morland@gmail.com> writes:\n>>>\n>>\n>>\n>>> > I might go so\n>>> > far as to change the psql display routines to not leave a blank line\n>>> after\n>>> > the content in the event it ends with a newline.\n>>>\n>>> psql has *no* business changing what it displays: if what came from the\n>>> server has a trailing newline, that had better be made visible. 
Even if\n>>> we thought it was a good idea, it's about 25 years too late to reconsider\n>>> that. However, xmlserialize()'s new indenting behavior isn't set in\n>>> stone yet, so we could contemplate chomping newlines within that.\n>>\n>>\n>> The trailing newline is made visible by the little bent arrow character\n>> that appears at the right hand side. So you could still tell whether the\n>> value ended with a trailing newline. I agree that simply dropping the\n>> trailing newline before displaying the value would be a very bad idea.\n>>\n>\n> What is benefit or usage of this trailing newline?\n>\n\nWhen creating a text file, it is conventional to end it with a newline.\nEvery single line of the file is ended with a newline, including the last\nline of a file. Various tools deal with text files which are missing the\nnewline on the last line in various ways depending on context. If you \"cat\"\na file which is missing its trailing newline, the command prompt naturally\nends up on the same line of the display as the characters after the last\nnewline. Tools like “less” often adjust their display so the presence or\nabsence of the trailing newline makes no difference.\n\nSo it’s not so much about benefit or usage, it’s about what text files\nnormally contain. Newline characters are used as line terminators, not line\nseparators.\n\nOf course, it's conventional for a database value not to end with a\nnewline. If I store a person's name in the database, it would be weird to\nappend a newline to the end. 
Here we have serialized XML which we tend to\nthink of storing in a text file — where one would expect it to end with a\nnewline — but we're displaying it in a table cell as part of the output of\na database query, where one typically does not expect values to end with a\nnewline (although they can, and psql displays values differently even if\nthey differ only in the presence or absence of a newline at the end).\n\nIf you were to load a typical text file into a column of a database row and\ndisplay it using psql, you would see the same phenomenon.\n\nMy inclination would be, if we're just calling to a long-standardized\nlibrary routine, to just accept its output as is. If a program is saving\nthe output to a text file, that would be the expected behaviour. If not,\nthen we need to document that the output of our function is the output of\nthe library function, minus the trailing newline.\n\nPersonally, when I do some formatting I prefer adding newlines on client\n> side before necessity of removing.\n>\n", "msg_date": "Sun, 23 Apr 2023 13:46:45 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" }, { "msg_contents": "> My inclination would be, if we're just calling to a long-standardized \n> library routine, to just accept its output as is. If a program is \n> saving the output to a text file, that would be the expected \n> behaviour. If not, then we need to document that the output of our \n> function is the output of the library function, minus the trailing \n> newline.\n\nAfter some digging on the matter, I'd also tend to leave the output as \nis, but I also do understand the other arguments - specially the \nconsistency with jsonb_pretty().\n\nIf we agree to remove it, the change wouldn't be substantial :) I guess \nwe could just pchomp it in the end of the function, as suggested by Tom. \nAttached a draft patch.\n\nBest, Jim", "msg_date": "Sun, 23 Apr 2023 23:56:39 +0200", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": false, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" }, { "msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n> If we agree to remove it, the change wouldn't be substantial :) I guess \n> we could just pchomp it in the end of the function, as suggested by Tom. \n> Attached a draft patch.\n\nI wouldn't actually *use* pchomp here, because that induces an unnecessary\ncopy of the result string.  I had in mind more like copying pchomp's code\nto count up the trailing newline(s) and then pass a corrected length\nto cstring_to_text_with_len.\n\nYou could simplify matters by doing that in all cases, too. 
It should\nnever find anything to remove in the non-indented case, but the check\nshould be of negligible cost in context.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Apr 2023 21:18:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" }, { "msg_contents": "On 24.04.23 03:18, Tom Lane wrote:\n> I wouldn't actually *use* pchomp here, because that induces an \n> unnecessary\n> copy of the result string. I had in mind more like copying pchomp's code\n> to count up the trailing newline(s) and then pass a corrected length\n> to cstring_to_text_with_len.\nChanged.\n> You could simplify matters by doing that in all cases, too. It should\n> never find anything to remove in the non-indented case, but the check\n> should be of negligible cost in context.\n\nI'm not sure I understood it correctly.\n\nThe non-indented cases should never find anything and indented cases \nwith CONTENT strings do not add trailing newlines, so this is only \napplicable with DOCUMENT .. INDENT, right?\n\nSomething like this would suffice?\n\nif(xmloption_arg != XMLOPTION_DOCUMENT)\n     result = (text *) xmlBuffer_to_xmltype(buf);\nelse\n{\n     int    len = xmlBufferLength(buf);\n     const char *xmloutput = (const char *) xmlBufferContent(buf);\n\n     while (len > 0 && xmloutput[len - 1] == '\\n')\n         len--;\n\n     result = cstring_to_text_with_len(xmloutput, len);\n}\n\nIf we really agree on manually removing the trailing newlines I will \nopen a CF entry for this.\n\nBest, Jim", "msg_date": "Mon, 24 Apr 2023 10:29:27 +0200", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": false, "msg_subject": "Re: xmlserialize bug - extra empty row at the end" } ]
[ { "msg_contents": "My work on page-level freezing for PostgreSQL 16 has some remaining\nloose ends to tie up with the documentation. The \"Routine Vacuuming\"\nsection of the docs has no mention of page-level freezing. It also\ndoesn't mention the FPI optimization added by commit 1de58df4. This\nisn't a small thing to leave out; I fully expect that the FPI\noptimization will very significantly alter when and how VACUUM\nfreezes. The cadence will look quite a lot different.\n\nIt seemed almost impossible to fit in discussion of page-level\nfreezing to the existing structure. In part this is because the\nexisting documentation emphasizes the worst case scenario, rather than\ntalking about freezing as a maintenance task that affects physical\nheap pages in roughly the same way as pruning does. There isn't a\nclean separation of things that would allow me to just add a paragraph\nabout the FPI thing.\n\nObviously it's important that the system never enters xidStopLimit\nmode -- not being able to allocate new XIDs is a huge problem. But it\nseems unhelpful to define that as the only goal of freezing, or even\nthe main goal. To me this seems similar to defining the goal of\ncleaning up bloat as avoiding completely running out of disk space;\nwhile it may be \"the single most important thing\" in some general\nsense, it isn't all that important in most individual cases. There are\nmany very bad things that will happen before that extreme worst case\nis hit, which are far more likely to be the real source of pain.\n\nThere are also very big structural problems with \"Routine Vacuuming\",\nthat I also propose to do something about. Honestly, it's a huge mess\nat this point. It's nobody's fault in particular; there has been\naccretion after accretion added, over many years. It is time to\nfinally bite the bullet and do some serious restructuring. 
I'm hoping\nthat I don't get too much push back on this, because it's already very\ndifficult work.\n\nAttached patch series shows what I consider to be a much better\noverall structure. To make this convenient to take a quick look at, I\nalso attach a prebuilt version of routine-vacuuming.html (not the only\npage that I've changed, but the most important set of changes by far).\n\nThis initial version is still quite lacking in overall polish, but I\nbelieve that it gets the general structure right. That's what I'd like\nto get feedback on right now: can I get agreement with me about the\ngeneral nature of the problem? Does this high level direction seem\nlike the right one?\n\nThe following list is a summary of the major changes that I propose:\n\n1. Restructures the order of items to match the actual processing\norder within VACUUM (and ANALYZE), rather than jumping from VACUUM to\nANALYZE and then back to VACUUM.\n\nThis flows a lot better, which helps with later items that deal with\nfreezing/wraparound.\n\n2. Renamed \"Preventing Transaction ID Wraparound Failures\" to\n\"Freezing to manage the transaction ID space\". Now we talk about\nwraparound as a subtopic of freezing, not vice-versa. (This is a\ncomplete rewrite, as described by later items in this list).\n\n3. All of the stuff about modulo-2^32 arithmetic is moved to the\nstorage chapter, where we describe the heap tuple header format.\n\nIt seems crazy to me that the second sentence in our discussion of\nwraparound/freezing is still:\n\n\"But since transaction IDs have limited size (32 bits) a cluster that\nruns for a long time (more than 4 billion transactions) would suffer\ntransaction ID wraparound: the XID counter wraps around to zero, and\nall of a sudden transactions that were in the past appear to be in the\nfuture\"\n\nHere we start the whole discussion of wraparound (a particularly\ndelicate topic) by describing how VACUUM used to work 20 years ago,\nbefore the invention of freezing. 
That was the last time that a\nPostgreSQL cluster could run for 4 billion XIDs without freezing. The\ninvariant is that we activate xidStopLimit mode protections to avoid a\n\"distance\" between any two unfrozen XIDs that exceeds about 2 billion\nXIDs. So why on earth are we talking about 4 billion XIDs? This is the\nmost confusing, least useful way of describing freezing that I can\nthink of.\n\n4. No more separate section for MultiXactID freezing -- that's\ndiscussed as part of the discussion of page-level freezing.\n\nPage-level freezing takes place without regard to the trigger\ncondition for freezing. So the new approach to freezing has a fixed\nidea of what it means to freeze a given page (what physical\nmodifications it entails). This means that having a separate sect3\nsubsection for MultiXactIds now makes no sense (if it ever did).\n\n5. The top-level list of maintenance tasks has a new addition: \"To\ntruncate obsolescent transaction status information, when possible\".\n\nIt makes a lot of sense to talk about this as something that happens\nlast (or last among those steps that take place during VACUUM). It's\nfar less important than avoiding xidStopLimit outages, obviously\n(using some extra disk space is almost certainly the least of your\nworries when you're near to xidStopLimit). The current documentation\nseems to take precisely the opposite view, when it says the following:\n\n\"The sole disadvantage of increasing autovacuum_freeze_max_age (and\nvacuum_freeze_table_age along with it) is that the pg_xact and\npg_commit_ts subdirectories of the database cluster will take more\nspace\"\n\nThis sentence is dangerously bad advice. It is precisely backwards. At\nthe same time, we'd better say something about the need to truncate\npg_xact/clog here. 
Besides all this, the new section for this is a far\nmore accurate reflection of what's really going on: most individual\nVACUUMs (even most aggressive VACUUMs) won't ever truncate\npg_xact/clog (or the other relevant SLRUs). Truncation only happens\nafter a VACUUM that advances the relfrozenxid of the table which\npreviously had the oldest relfrozenxid among all tables in the entire\ncluster -- so we need to talk about it as an issue with the high\nwatermark storage for pg_xact.\n\n6. Rename the whole \"Routine Vacuuming\" section to \"Autovacuum\nMaintenance Tasks\".\n\nThis is what we should be emphasizing over manually run VACUUMs.\nBesides, the current title just seems wrong -- we're talking about\nANALYZE just as much as VACUUM.\n\nThoughts?\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 24 Apr 2023 14:57:57 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Overhauling \"Routine Vacuuming\" docs,\n particularly its handling of freezing" }, { "msg_contents": "On Tue, Apr 25, 2023 at 4:58 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> There are also very big structural problems with \"Routine Vacuuming\",\n> that I also propose to do something about. Honestly, it's a huge mess\n> at this point. It's nobody's fault in particular; there has been\n> accretion after accretion added, over many years. It is time to\n> finally bite the bullet and do some serious restructuring. I'm hoping\n> that I don't get too much push back on this, because it's already very\n> difficult work.\n\nNow is a great time to revise this section, in my view. (I myself am about\nready to get back to testing and writing for the task of removing that\n\"obnoxious hint\".)\n\n> Attached patch series shows what I consider to be a much better\n> overall structure. 
To make this convenient to take a quick look at, I\n> also attach a prebuilt version of routine-vacuuming.html (not the only\n> page that I've changed, but the most important set of changes by far).\n>\n> This initial version is still quite lacking in overall polish, but I\n> believe that it gets the general structure right. That's what I'd like\n> to get feedback on right now: can I get agreement with me about the\n> general nature of the problem? Does this high level direction seem\n> like the right one?\n\nI believe the high-level direction is sound, and some details have been\ndiscussed before.\n\n> The following list is a summary of the major changes that I propose:\n>\n> 1. Restructures the order of items to match the actual processing\n> order within VACUUM (and ANALYZE), rather than jumping from VACUUM to\n> ANALYZE and then back to VACUUM.\n>\n> This flows a lot better, which helps with later items that deal with\n> freezing/wraparound.\n\nSeems logical.\n\n> 2. Renamed \"Preventing Transaction ID Wraparound Failures\" to\n> \"Freezing to manage the transaction ID space\". Now we talk about\n> wraparound as a subtopic of freezing, not vice-versa. (This is a\n> complete rewrite, as described by later items in this list).\n\n+1\n\n> 3. All of the stuff about modulo-2^32 arithmetic is moved to the\n> storage chapter, where we describe the heap tuple header format.\n\nIt does seem to be an excessive level of detail for this chapter, so +1.\nSpeaking of excessive detail, however...(skipping ahead)\n\n+ <note>\n+ <para>\n+ There is no fundamental difference between a\n+ <command>VACUUM</command> run during anti-wraparound\n+ autovacuuming and a <command>VACUUM</command> that happens to\n+ use the aggressive strategy (whether run by autovacuum or\n+ manually issued).\n+ </para>\n+ </note>\n\nI don't see the value of this, from the user's perspective, of mentioning\nthis at all, much less for it to be called out as a Note. 
Imagine a user\nwho has been burnt by non-cancellable vacuums. How would they interpret\nthis statement?\n\n> It seems crazy to me that the second sentence in our discussion of\n> wraparound/freezing is still:\n>\n> \"But since transaction IDs have limited size (32 bits) a cluster that\n> runs for a long time (more than 4 billion transactions) would suffer\n> transaction ID wraparound: the XID counter wraps around to zero, and\n> all of a sudden transactions that were in the past appear to be in the\n> future\"\n\nHah!\n\n> 4. No more separate section for MultiXactID freezing -- that's\n> discussed as part of the discussion of page-level freezing.\n>\n> Page-level freezing takes place without regard to the trigger\n> condition for freezing. So the new approach to freezing has a fixed\n> idea of what it means to freeze a given page (what physical\n> modifications it entails). This means that having a separate sect3\n> subsection for MultiXactIds now makes no sense (if it ever did).\n\nI have no strong opinion on that.\n\n> 5. The top-level list of maintenance tasks has a new addition: \"To\n> truncate obsolescent transaction status information, when possible\".\n\n+1\n\n> 6. Rename the whole \"Routine Vacuuming\" section to \"Autovacuum\n> Maintenance Tasks\".\n>\n> This is what we should be emphasizing over manually run VACUUMs.\n> Besides, the current title just seems wrong -- we're talking about\n> ANALYZE just as much as VACUUM.\n\nSeems more accurate. On top of that, \"Routine vacuuming\" slightly implies\nmanual vacuums.\n\nI've only taken a cursory look, but will look more closely as time permits.\n\n(Side note: My personal preference for rough doc patches would be to leave\nout spurious whitespace changes. That not only includes indentation, but\nalso paragraphs where many of the words haven't changed at all, but every\nline has changed to keep the paragraph tidy. 
Seems like more work for both\nthe author and the reviewer.)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 26 Apr 2023 14:16:28 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Wed, Apr 26, 2023 at 12:16 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> Now is a great time to revise this section, in my view. (I myself am about ready to get back to testing and writing for the task of removing that \"obnoxious hint\".)\n\nAlthough I didn't mention the issue with single user mode in my\nintroductory email (the situation there is just appalling IMV), it\nseems like I might not be able to ignore that problem while I'm\nworking on this patch. Declaring that as out of scope for this doc\npatch series (on pragmatic grounds) feels awkward. I have to work\naround something that is just wrong. For now, the doc patch just has\nan \"XXX\" item about it. (Hopefully I'll think of a more natural way of\nnot fixing it.)\n\n> > This initial version is still quite lacking in overall polish, but I\n> > believe that it gets the general structure right.
That's what I'd like\n> > to get feedback on right now: can I get agreement with me about the\n> > general nature of the problem? Does this high level direction seem\n> > like the right one?\n>\n> I believe the high-level direction is sound, and some details have been discussed before.\n\nI'm relieved that you think so. I was a bit worried that I'd get\nbogged down, having already invested a lot of time in this.\n\nAttached is v2. It has the same high level direction as v1, but is a\nlot more polished. Still not committable, to be sure. But better than\nv1.\n\nI'm also attaching a prebuilt copy of routine-vacuuming.html, as with\nv1 -- hopefully that's helpful.\n\n> > 3. All of the stuff about modulo-2^32 arithmetic is moved to the\n> > storage chapter, where we describe the heap tuple header format.\n>\n> It does seem to be an excessive level of detail for this chapter, so +1. Speaking of excessive detail, however...(skipping ahead)\n\nMy primary objection to talking about modulo-2^32 stuff first is not\nthat it's an excessive amount of detail (though it definitely is). My\nobjection is that it places emphasis on exactly the thing that *isn't*\nsupposed to matter, under the design of freezing -- greatly confusing\nthe reader (even sophisticated readers). Discussion of so-called\nwraparound should start with logical concepts, such as xmin XIDs being\ntreated as \"infinitely far in the past\" once frozen. The physical data\nstructures do matter too, but even there the emphasis should be on\nheap pages being \"self-contained\", in the sense that SQL queries won't\nneed to access pg_xact to read the rows from the pages going forward\n(even on standbys).\n\nWhy do we call wraparound wraparound, anyway? The 32-bit XID space is\ncircular! 
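(To make the circularity concrete, here is a rough sketch of modulo-2^32
XID comparison -- illustrative Python only, not the actual backend code,
and the helper name is made up:)

```python
# Illustrative sketch of circular (modulo-2^32) XID comparison.
# One XID "precedes" another when it lies less than 2^31 steps behind
# it on the circle -- equivalently, when the unsigned 32-bit
# difference, reinterpreted as a signed 32-bit integer, is negative.

def xid_precedes(a: int, b: int) -> bool:
    """True if 32-bit XID a is logically earlier than XID b."""
    diff = (a - b) & 0xFFFFFFFF      # unsigned 32-bit difference
    if diff >= 0x80000000:           # reinterpret as signed int32
        diff -= 0x100000000
    return diff < 0

# Crossing the 2^32 boundary is unremarkable under this scheme:
# XID 4294967295 still logically precedes XID 3, even though 3 is
# numerically smaller.
```

On a circle compared this way, no point is inherently the "start" or the
"end", which is the sense in which counter wraparound is a non-event.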
The whole point of the design is that unsigned integer\nwraparound is meaningless -- there isn't really a point in \"the\ncircle\" that you should think of as the start point or end point.\n(We're probably stuck with the term \"wraparound\" for now, so I'm not\nproposing that it be changed here, purely on pragmatic grounds.)\n\n> + <note>\n> + <para>\n> + There is no fundamental difference between a\n> + <command>VACUUM</command> run during anti-wraparound\n> + autovacuuming and a <command>VACUUM</command> that happens to\n> + use the aggressive strategy (whether run by autovacuum or\n> + manually issued).\n> + </para>\n> + </note>\n>\n> I don't see the value of this, from the user's perspective, of mentioning this at all, much less for it to be called out as a Note. Imagine a user who has been burnt by non-cancellable vacuums. How would they interpret this statement?\n\nI meant that it isn't special from the point of view of vacuumlazy.c.\nI do see your point, though. I've taken that out in v2.\n\n(I happen to believe that the antiwraparound autocancellation behavior\nis very unhelpful as currently implemented, which biased my view of\nthis.)\n\n> > 4. No more separate section for MultiXactID freezing -- that's\n> > discussed as part of the discussion of page-level freezing.\n> >\n> > Page-level freezing takes place without regard to the trigger\n> > condition for freezing. So the new approach to freezing has a fixed\n> > idea of what it means to freeze a given page (what physical\n> > modifications it entails). This means that having a separate sect3\n> > subsection for MultiXactIds now makes no sense (if it ever did).\n>\n> I have no strong opinion on that.\n\nMost of the time, when antiwraparound autovacuums are triggered by\nautovacuum_multixact_freeze_max_age, in a way that is noticeable (say\na large table), VACUUM will in all likelihood end up processing\nexactly 0 multis. 
What you'll get is pretty much an \"early\" aggressive\nVACUUM, which isn't such a big deal (especially with page-level\nfreezing). You can already get an \"early\" aggressive VACUUM due to\nhitting vacuum_freeze_table_age before autovacuum_freeze_max_age is\never reached (in fact it's the common case, now that we have\ninsert-driven autovacuums).\n\nSo I'm trying to suggest that an aggressive VACUUM is the same\nregardless of the trigger condition. To a lesser extent, I'm trying to\nmake the user aware that the mechanical difference between aggressive\nand non-aggressive is fairly minor, even if the consequences of that\ndifference are quite noticeable. (Though maybe they're less noticeable\nwith the v16 work in place.)\n\n> I've only taken a cursory look, but will look more closely as time permits.\n\nI would really appreciate that. This is not easy work.\n\nI suspect that the docs talk about wraparound using extremely alarming\nlanguage possible because at one point it really was necessary to\nscare users into running VACUUM to avoid data loss. This was before\nautovacuum, and before the invention of vxids, and even before the\ninvention of freezing. It was up to you as a user to VACUUM your\ndatabase using cron, and if you didn't then eventually data loss could\nresult.\n\nObviously these docs were updated many times over the years, but I\nmaintain that the basic structure from 20 years ago is still present\nin a way that it really shouldn't be.\n\n> (Side note: My personal preference for rough doc patches would be to leave out spurious whitespace changes.\n\nI've tried to keep them out (or at least break the noisy whitespace\nchanges out into their own commit). 
I might have missed a few of them\nin v1, which are fixed in v2.\n\nThanks\n-- \nPeter Geoghegan", "msg_date": "Wed, 26 Apr 2023 10:57:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Thu, Apr 27, 2023 at 12:58 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Apr 26, 2023 at 12:16 AM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > Now is a great time to revise this section, in my view. (I myself am\nabout ready to get back to testing and writing for the task of removing\nthat \"obnoxious hint\".)\n>\n> Although I didn't mention the issue with single user mode in my\n> introductory email (the situation there is just appalling IMV), it\n> seems like I might not be able to ignore that problem while I'm\n> working on this patch. Declaring that as out of scope for this doc\n> patch series (on pragmatic grounds) feels awkward. I have to work\n> around something that is just wrong. For now, the doc patch just has\n> an \"XXX\" item about it. (Hopefully I'll think of a more natural way of\n> not fixing it.)\n\nIf it helps, I've gone ahead with some testing and polishing on that, and\nit's close to ready, I think (CC'd you). I'd like that piece to be separate\nand small enough to be backpatchable (at least in theory).\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 29 Apr 2023 15:17:35 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Sat, Apr 29, 2023 at 1:17 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> > Although I didn't mention the issue with single user mode in my\n> > introductory email (the situation there is just appalling IMV), it\n> > seems like I might not be able to ignore that problem while I'm\n> > working on this patch. Declaring that as out of scope for this doc\n> > patch series (on pragmatic grounds) feels awkward. I have to work\n> > around something that is just wrong. For now, the doc patch just has\n> > an \"XXX\" item about it. (Hopefully I'll think of a more natural way of\n> > not fixing it.)\n>\n> If it helps, I've gone ahead with some testing and polishing on that, and it's close to ready, I think (CC'd you). I'd like that piece to be separate and small enough to be backpatchable (at least in theory).\n\nThat's great news.
Not least because it unblocks this patch series of mine.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 29 Apr 2023 14:42:59 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Thu, Apr 27, 2023 at 12:58 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> [v2]\n\nI've done a more careful read-through, but I'll need a couple more, I\nimagine.\n\nI'll first point out some things I appreciate, and I'm glad are taken care\nof as part of this work:\n\n- Pushing the talk of scheduled manual vacuums to the last, rather than\nfirst, para in the intro\n- No longer pretending that turning off autovacuum is somehow normal\n- Removing the egregiously outdated practice of referring to VACUUM FULL as\na \"variant\" of VACUUM\n- Removing the mention of ALTER TABLE that has no earthly business in this\nchapter -- for that, rewriting the table is a side effect to try to avoid,\nnot a tool in our smorgasbord for removing severe bloat.\n\nSome suggestions:\n\n- The section \"Recovering Disk Space\" now has 5 tips/notes/warnings in a\nrow. This is good information, but I wonder about:\n\n\"Note: Although VACUUM FULL is technically an option of the VACUUM command,\nVACUUM FULL uses a completely different implementation. VACUUM FULL is\nessentially a variant of CLUSTER. (The name VACUUM FULL is historical; the\noriginal implementation was somewhat closer to standard VACUUM.)\"\n\n...maybe move this to a second paragraph in the warning about VACUUM FULL\nand CLUSTER?\n\n- The sentence \"The XID cutoff point that VACUUM uses...\" reads a bit\nabruptly and unmotivated (although it is important). 
Part of the reason for\nthis is that the hyperlink \"transaction ID number (XID)\" which points to\nthe glossary is further down the page than this first mention.\n\n- \"VACUUM often marks certain pages frozen, indicating that all eligible\nrows on the page were inserted by a transaction that committed sufficiently\nfar in the past that the effects of the inserting transaction are certain\nto be visible to all current and future transactions.\"\n -> This sentence is much harder to understand than the one it replaces.\nAlso, this is the first time \"eligible\" is mentioned. It may not need a\nseparate definition, but in this form it's rather circular.\n\n- \"freezing plays a crucial role in enabling _management of the XID\naddress_ space by VACUUM\"\n -> \"management of the XID address space\" links to the\naggressive-strategy sub-section below, but it's a strange link title\nbecause the section we're in is itself titled \"Freezing to manage the\ntransaction ID space\".\n\n- \"The maximum “distance” that the system can tolerate...\"\n -> The next sentence goes on to show the \"age\" function, so using\ndifferent terms is a bit strange. 
Mixing the established age term with an\nin-quotes \"distance\" could perhaps be done once in a definition, but then\nall uses should stick to age.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Sun, 30 Apr 2023 10:54:19 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Sat, Apr 29, 2023 at 8:54 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> I've done a more careful read-through, but I'll need a couple more, I imagine.\n\nYeah, it's tough to get this stuff right.\n\n> I'll first point out some things I appreciate, and I'm glad are taken care of as part of this work:\n>\n> - Pushing the talk of scheduled manual vacuums to the last, rather than first, para in the intro\n> - No longer pretending that turning off autovacuum is somehow normal\n> - Removing the egregiously outdated practice of referring to VACUUM FULL as a \"variant\" of VACUUM\n> - Removing the mention of ALTER TABLE that has no earthly business in this chapter -- for that, rewriting the table is a side effect to try to avoid, not a tool in our smorgasbord for removing severe bloat.\n>\n> Some suggestions:\n>\n> - The section \"Recovering Disk Space\" now has 5 tips/notes/warnings in a row.\n\nIt occurs to me that all of this stuff (TRUNCATE, VACUUM FULL, and so\non) isn't \"routine\" at all. And so maybe this is the wrong chapter for\nthis entirely. The way I dealt with it in v2 wasn't very worked out --\nI just knew that I had to do something, but hadn't given much thought\nto what actually made sense.\n\nI wonder if it would make sense to move all of that stuff into its own\nnew sect1 of \"Chapter 29. Monitoring Disk Usage\" -- something along\nthe lines of \"what to do about bloat when all else fails, when the\nproblem gets completely out of hand\". Naturally we'd link to this new\nsection from \"Routine Vacuuming\".
What do you think of that general\napproach?\n\n> This is good information, but I wonder about:\n> (Various points)\n\nThat's good feedback. I'll get to this in a couple of days.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 29 Apr 2023 21:18:33 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Wed, Apr 26, 2023 at 1:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Why do we call wraparound wraparound, anyway? The 32-bit XID space is\n> circular! The whole point of the design is that unsigned integer\n> wraparound is meaningless -- there isn't really a point in \"the\n> circle\" that you should think of as the start point or end point.\n> (We're probably stuck with the term \"wraparound\" for now, so I'm not\n> proposing that it be changed here, purely on pragmatic grounds.)\n\nTo me, the fact that the XID space is circular is the whole point of\ntalking about wraparound. If the XID space were non-circular, it could\nnever try to reuse the XID values that have previously been used, and\nthis entire class of problems would go away. Because it is circular,\nit's possible for the XID counter to arrive back at a place that it's\nbeen before i.e. it can wrap around.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 May 2023 11:03:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Mon, May 1, 2023 at 8:03 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> To me, the fact that the XID space is circular is the whole point of\n> talking about wraparound.\n\nThe word wraparound is ambiguous. It's not the same thing as\nxidStopLimit in my view. 
It's literal integer wraparound.\n\nIf you think of XIDs as having a native 64-bit representation, while\nusing a truncated 32-bit on-disk representation in tuple headers\n(which is the view promoted by the doc patch), then XIDs cannot wrap\naround. There is still no possibility of \"the future becoming the\npast\" (assuming no use of single user mode), either, because even in\nthe worst case we have xidStopLimit to make sure that the database\ndoesn't become corrupt. Why talk about what's *not* happening in a\nplace of prominence?\n\nWe'll still talk about literal integer wraparound with the doc patch,\nbut it's part of a discussion of the on-disk format in a distant\nchapter. It's just an implementation detail, which is of no practical\nconsequence. The main discussion need only say something succinct and\nvague about the use of a truncated representation (lacking a separate\nepoch) in tuple headers eventually forcing freezing.\n\n> If the XID space were non-circular, it could\n> never try to reuse the XID values that have previously been used, and\n> this entire class of problems would go away. Because it is circular,\n> it's possible for the XID counter to arrive back at a place that it's\n> been before i.e. it can wrap around.\n\nBut integer wrap around isn't really aligned with anything important.\nxidStopLimit will kick in when we're only halfway towards literal\ninteger wrap around. Users have practical concerns about avoiding\nxidStopLimit -- what a world without xidStopLimit looks like just\ndoesn't matter. 
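(As a rough illustration of that framing -- conceptually 64-bit XIDs
whose low 32 bits are what actually appears in tuple headers -- consider
the following sketch; the function name is invented for the example and
is not PostgreSQL's internal API:)

```python
# Sketch: recover a conceptual 64-bit XID from the truncated 32-bit
# value stored on disk. This is only unambiguous while every stored
# XID is within ~2^31 transactions of the current 64-bit counter,
# which is the invariant that freezing maintains.

def full_xid_from_truncated(next_full_xid: int, xid32: int) -> int:
    epoch = next_full_xid >> 32
    candidate = (epoch << 32) | xid32
    if candidate > next_full_xid:
        # The stored value must belong to the previous "epoch".
        candidate -= 1 << 32
    return candidate
```

Viewed this way the 64-bit XID space only ever advances, and what
freezing buys us is keeping every unfrozen on-disk XID inside the ~2.1
billion transaction window where the truncated form stays unambiguous.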
Just having some vague awareness of truncated XIDs\nbeing insufficient at some point is all you really need, even if\nyou're an advanced user.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 May 2023 09:01:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Mon, May 1, 2023 at 12:01 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > If the XID space were non-circular, it could\n> > never try to reuse the XID values that have previously been used, and\n> > this entire class of problems would go away. Because it is circular,\n> > it's possible for the XID counter to arrive back at a place that it's\n> > been before i.e. it can wrap around.\n>\n> But integer wrap around isn't really aligned with anything important.\n> xidStopLimit will kick in when we're only halfway towards literal\n> integer wrap around. Users have practical concerns about avoiding\n> xidStopLimit -- what a world without xidStopLimit looks like just\n> doesn't matter. Just having some vague awareness of truncated XIDs\n> being insufficient at some point is all you really need, even if\n> you're an advanced user.\n\nI disagree. If you start the cluster in single-user mode, you can\nactually wrap it around, unless something has changed that I don't\nknow about.\n\nI'm not trying to debate the details of the patch, which I have not\nread. I am saying that, while wraparound is perhaps not a perfect term\nfor what's happening, it is not, in my opinion, a bad term either. I\ndon't think it's accurate to imagine that this is a 64-bit counter\nwhere we only store 32 bits on disk. 
We're trying to retcon that into\nbeing true, but we'd have to work significantly harder to actually\nmake it true.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 May 2023 12:08:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Mon, May 1, 2023 at 9:08 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I disagree. If you start the cluster in single-user mode, you can\n> actually wrap it around, unless something has changed that I don't\n> know about.\n\nThis patch relies on John's other patch which strongly discourages the\nuse of single-user mode. Were it not for that, I might agree.\n\n> I'm not trying to debate the details of the patch, which I have not\n> read. I am saying that, while wraparound is perhaps not a perfect term\n> for what's happening, it is not, in my opinion, a bad term either. I\n> don't think it's accurate to imagine that this is a 64-bit counter\n> where we only store 32 bits on disk. We're trying to retcon that into\n> being true, but we'd have to work significantly harder to actually\n> make it true.\n\nThe purpose of this documentation section is to give users practical\nguidance, obviously. The main reason to frame it this way is because\nit seems to make the material easier to understand.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 May 2023 09:16:10 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Mon, May 1, 2023 at 9:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, May 1, 2023 at 9:08 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I disagree. 
If you start the cluster in single-user mode, you can\n> > actually wrap it around, unless something has changed that I don't\n> > know about.\n>\n> This patch relies on John's other patch which strongly discourages the\n> use of single-user mode. Were it not for that, I might agree.\n\nAlso, it's not clear that the term \"wraparound\" even describes what\nhappens when you corrupt the database by violating the \"no more than\n~2.1 billion XIDs distance between any two unfrozen XIDs\" invariant in\nsingle-user mode. What specific thing will have wrapped around? It's\npossible (and very likely) that every unfrozen XID in the database is\nfrom the same 64-XID-wise epoch.\n\nI don't think that we need to say very much about this scenario (and\nnothing at all about the specifics in \"Routine Vacuuming\"), so maybe\nit doesn't matter much. But I maintain that it makes most sense to\ndescribe this scenario as a violation of the \"no more than ~2.1\nbillion XIDs distance between any two unfrozen XIDs\" invariant, while\nleaving the term \"wraparound\" out of it completely. That terms has way\ntoo much baggage.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 May 2023 10:09:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Mon, May 1, 2023, 18:08 Robert Haas <robertmhaas@gmail.com> wrote:\n\n> I am saying that, while wraparound is perhaps not a perfect term\n> for what's happening, it is not, in my opinion, a bad term either.\n\n\nI don't want to put words into Peter's mouth, but I think that he's arguing\nthat the term \"wraparound\" suggests that there is something special about\nthe transition between xid 2^32 and xid 0 (or, well, 3). 
There isn't.\nThere's only something special about the transition, as your current xid\nadvances, between the xid that's half the xid space ahead of your current\nxid and the xid that's half the xid space behind the current xid, if the\nlatter is not frozen. I don't think that's what most users think of when\nthey hear \"wraparound\".", "msg_date": "Mon, 1 May 2023 21:03:38 +0200", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Mon, May 1, 2023 at 12:03 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> I don't want to put words into Peter's mouth, but I think that he's arguing that the term \"wraparound\" suggests that there is something special about the transition between xid 2^32 and xid 0 (or, well, 3). There isn't.\n\nYes, that's exactly what I mean. There are two points that seem to be\nvery much in tension here:\n\n1.
The scenario where you corrupt the database in single user mode by\nunsafely allocating XIDs (you need single user mode to bypass the\nxidStopLimit protections) generally won't involve unsigned integer\nwraparound (and if it does it's *entirely* incidental to the data\ncorruption).\n\n2. Actual unsigned integer wraparound is 100% harmless and routine, by design.\n\nSo why do we use the term wraparound as a synonym of \"the end of the\nworld\"? I assume that it's just an artefact of how the system worked\nbefore the invention of freezing. Back then, you had to do a dump and\nrestore when the system reached about 4 billion XIDs. Wraparound\nreally did mean \"the end of the world\" over 20 years ago.\n\nThis is related to my preference for explaining the issues with\nreference to a 64-bit XID space. Today we compare 64-bit XIDs using\nsimple unsigned integer comparisons. That's the same way that 32-bit\nXID comparisons worked before freezing was invented in 2001. So it\nreally does seem like the natural way to explain it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 May 2023 12:35:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Tue, May 2, 2023 at 12:09 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, May 1, 2023 at 9:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Mon, May 1, 2023 at 9:08 AM Robert Haas <robertmhaas@gmail.com>\nwrote:\n> > > I disagree. If you start the cluster in single-user mode, you can\n> > > actually wrap it around, unless something has changed that I don't\n> > > know about.\n\n+1 Pretending otherwise is dishonest.\n\n> > This patch relies on John's other patch which strongly discourages the\n> > use of single-user mode. Were it not for that, I might agree.\n\nOh that's rich. 
I'll note that 5% of your review was actually helpful\n(actual correction), the other 95% was needless distraction trying to\nenlist me in your holy crusade against the term \"wraparound\". It had the\nopposite effect.\n\n> Also, it's not clear that the term \"wraparound\" even describes what\n> happens when you corrupt the database by violating the \"no more than\n> ~2.1 billion XIDs distance between any two unfrozen XIDs\" invariant in\n> single-user mode. What specific thing will have wrapped around?\n\nIn your first message you said \"I'm hoping that I don't get too much push\nback on this, because it's already very difficult work.\"\n\nHere's some advice on how to avoid pushback:\n\n1. Insist that all terms can only be interpreted in the most pig-headedly\nliteral sense possible.\n2. Use that premise to pretend basic facts are a complete mystery.\n3. Claim that others are holding you back, and then try to move the\ngoalposts in their work.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n", "msg_date": "Tue, 2 May 2023 10:04:40 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Mon, May 1, 2023 at 8:04 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> Here's some advice on how to avoid pushback:\n>\n> 1. Insist that all terms can only be interpreted in the most pig-headedly literal sense possible.\n> 2. Use that premise to pretend basic facts are a complete mystery.\n\nI can't imagine why you feel it necessary to communicate with me like\nthis. This is just vitriol, lacking any substance.\n\nHow we use words like wraparound is actually something of great\nconsequence to the Postgres project. We've needlessly scared users\nwith the way this information has been presented up until now -- that\nmuch is clear. To have you talk to me like this when I'm working on\nsuch a difficult, thankless task is a real slap in the face.\n\n> 3. Claim that others are holding you back, and then try to move the goalposts in their work.\n\nWhen did I say that? 
When did I even suggest it?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 May 2023 20:20:42 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Mon, May 1, 2023 at 8:04 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> Oh that's rich. I'll note that 5% of your review was actually helpful (actual correction), the other 95% was needless distraction trying to enlist me in your holy crusade against the term \"wraparound\". It had the opposite effect.\n\nI went back and checked. There were exactly two short paragraphs about\nwraparound terminology on the thread associated with the patch you're\nworking on, towards the end of this one email:\n\nhttps://postgr.es/m/CAH2-Wzm2fpPQ_=pXpRvkNiuTYBGTAUfxRNW40kLitxj9T3Ny7w@mail.gmail.com\n\nIn what world does that amount to 95% of my review, or anything like it?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 May 2023 21:19:11 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Mon, May 1, 2023 at 11:21 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I can't imagine why you feel it necessary to communicate with me like\n> this. This is just vitriol, lacking any substance.\n\nJohn's email is pretty harsh, but I can understand why he's frustrated.\n\nI told you that I did not agree with your dislike for the term\nwraparound and I explained why. You sent a couple more emails telling\nme that I was wrong and, frankly, saying a lot of things that seem\nonly tangentially related to the point that I was actually making. You\nseem to expect other people to spend a LOT OF TIME trying to\nunderstand what you're trying to say, but you don't seem to invest\nsimilar effort in trying to understand what they're trying to say. 
I\ncouldn't even begin to grasp what your point was until Maciek stepped\nin to explain, and I still don't really agree with it, and I expect\nthat no matter how many emails I write about that, your position won't\nbudge an iota.\n\nIt's really demoralizing. If I just vote -1 on the patch set, then I'm\na useless obstruction. If I actually try to review it, we'll exchange\n100 emails and I won't get anything else done for the next two weeks\nand I probably won't feel much better about the patch at the end of\nthat process than at the beginning. I don't see that I have any\nwinning options here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 May 2023 16:29:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Tue, May 2, 2023 at 1:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I told you that I did not agree with your dislike for the term\n> wraparound and I explained why. You sent a couple more emails telling\n> me that I was wrong and, frankly, saying a lot of things that seem\n> only tangentially related to the point that I was actually making.\n\nI agree that that's what I did. You're perfectly entitled to find that\nannoying (though I maintain that my point about the 64-bit XID space\nwas a good one, assuming the general subject matter was of interest).\nHowever, you're talking about this as if I dug my feet in on a\nsubstantive issue affecting the basic shape of the patch -- I don't\nbelieve that that conclusion is justified by anything I've said or\ndone. I'm not even sure that we disagree on some less important point\nthat will directly affect the patch (it's quite possible, but I'm not\neven sure of it).\n\nI've already said that I don't think that the term wraparound is going\nanywhere anytime soon (granted, that was on the other thread). 
So it's\nnot like I'm attempting to banish all existing use of that terminology\nwithin the scope of this patch series -- far from it. At most I tried\nto avoid inventing new terms that contain the word \"wraparound\" (also\non the other thread).\n\nThe topic originally came up in the context of moving talk about\nphysical wraparound to an entirely different chapter. Which is, I\nbelieve (based in part on previous discussions), something that all\nthree of us already agree on! So again, I must ask: is there actually\na substantive disagreement at all?\n\n> It's really demoralizing. If I just vote -1 on the patch set, then I'm\n> a useless obstruction. If I actually try to review it, we'll exchange\n> 100 emails and I won't get anything else done for the next two weeks\n> and I probably won't feel much better about the patch at the end of\n> that process than at the beginning. I don't see that I have any\n> winning options here.\n\nI've already put a huge amount of work into this. It is inherently a\nvery difficult thing to get right -- it's not hard to understand why\nit was put off for so long. Why shouldn't I have opinions, given all\nthat? I'm frustrated too.\n\nDespite all this, John basically agreed with my high level direction\n-- all of the important points seemed to have been settled without any\narguments whatsoever (also based in part on previous discussions).\nJohn's volley of abuse seemed to come from nowhere at all.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 2 May 2023 14:00:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "Hi,\n\nOn Mon, Apr 24, 2023 at 2:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> My work on page-level freezing for PostgreSQL 16 has some remaining\n> loose ends to tie up with the documentation. 
The \"Routine Vacuuming\"\n> section of the docs has no mention of page-level freezing. It also\n> doesn't mention the FPI optimization added by commit 1de58df4. This\n> isn't a small thing to leave out; I fully expect that the FPI\n> optimization will very significantly alter when and how VACUUM\n> freezes. The cadence will look quite a lot different.\n>\n> It seemed almost impossible to fit in discussion of page-level\n> freezing to the existing structure. In part this is because the\n> existing documentation emphasizes the worst case scenario, rather than\n> talking about freezing as a maintenance task that affects physical\n> heap pages in roughly the same way as pruning does. There isn't a\n> clean separation of things that would allow me to just add a paragraph\n> about the FPI thing.\n>\n> Obviously it's important that the system never enters xidStopLimit\n> mode -- not being able to allocate new XIDs is a huge problem. But it\n> seems unhelpful to define that as the only goal of freezing, or even\n> the main goal. To me this seems similar to defining the goal of\n> cleaning up bloat as avoiding completely running out of disk space;\n> while it may be \"the single most important thing\" in some general\n> sense, it isn't all that important in most individual cases. There are\n> many very bad things that will happen before that extreme worst case\n> is hit, which are far more likely to be the real source of pain.\n>\n> There are also very big structural problems with \"Routine Vacuuming\",\n> that I also propose to do something about. Honestly, it's a huge mess\n> at this point. It's nobody's fault in particular; there has been\n> accretion after accretion added, over many years. It is time to\n> finally bite the bullet and do some serious restructuring. I'm hoping\n> that I don't get too much push back on this, because it's already very\n> difficult work.\n>\n\nThanks for taking the time to do this. It is indeed difficult work. 
I'll\ngive my perspective as someone who has not read the vacuum code but have\nlearnt most of what I know about autovacuum / vacuuming by reading the\n\"Routine Vacuuming\" page 10s of times.\n\n\n>\n> Attached patch series shows what I consider to be a much better\n> overall structure. To make this convenient to take a quick look at, I\n> also attach a prebuilt version of routine-vacuuming.html (not the only\n> page that I've changed, but the most important set of changes by far).\n>\n> This initial version is still quite lacking in overall polish, but I\n> believe that it gets the general structure right. That's what I'd like\n> to get feedback on right now: can I get agreement with me about the\n> general nature of the problem? Does this high level direction seem\n> like the right one?\n>\n\nThere are things I like about the changes you've proposed and some where I\nfeel that the previous section was easier to understand. I'll comment\ninline on the summary below and will put in a few points about things I\nthink can be improved at the end.\n\n\n>\n> The following list is a summary of the major changes that I propose:\n>\n> 1. Restructures the order of items to match the actual processing\n> order within VACUUM (and ANALYZE), rather than jumping from VACUUM to\n> ANALYZE and then back to VACUUM.\n>\n> This flows a lot better, which helps with later items that deal with\n> freezing/wraparound.\n>\n\n+1\n\n\n>\n> 2. Renamed \"Preventing Transaction ID Wraparound Failures\" to\n> \"Freezing to manage the transaction ID space\". Now we talk about\n> wraparound as a subtopic of freezing, not vice-versa. (This is a\n> complete rewrite, as described by later items in this list).\n>\n\n+1 on this too. Freezing is a normal part of vacuuming and while the\naggressive vacuums are different, I think just talking about the worst case\nscenario while referring to it is alarmist.\n\n\n>\n> 3. 
All of the stuff about modulo-2^32 arithmetic is moved to the\n> storage chapter, where we describe the heap tuple header format.\n>\n> It seems crazy to me that the second sentence in our discussion of\n> wraparound/freezing is still:\n>\n> \"But since transaction IDs have limited size (32 bits) a cluster that\n> runs for a long time (more than 4 billion transactions) would suffer\n> transaction ID wraparound: the XID counter wraps around to zero, and\n> all of a sudden transactions that were in the past appear to be in the\n> future\"\n>\n> Here we start the whole discussion of wraparound (a particularly\n> delicate topic) by describing how VACUUM used to work 20 years ago,\n> before the invention of freezing. That was the last time that a\n> PostgreSQL cluster could run for 4 billion XIDs without freezing. The\n> invariant is that we activate xidStopLimit mode protections to avoid a\n> \"distance\" between any two unfrozen XIDs that exceeds about 2 billion\n> XIDs. So why on earth are we talking about 4 billion XIDs? This is the\n> most confusing, least useful way of describing freezing that I can\n> think of.\n>\n> 4. No more separate section for MultiXactID freezing -- that's\n> discussed as part of the discussion of page-level freezing.\n>\n> Page-level freezing takes place without regard to the trigger\n> condition for freezing. So the new approach to freezing has a fixed\n> idea of what it means to freeze a given page (what physical\n> modifications it entails). This means that having a separate sect3\n> subsection for MultiXactIds now makes no sense (if it ever did).\n>\n> 5. The top-level list of maintenance tasks has a new addition: \"To\n> truncate obsolescent transaction status information, when possible\".\n>\n> It makes a lot of sense to talk about this as something that happens\n> last (or last among those steps that take place during VACUUM). 
It's\n> far less important than avoiding xidStopLimit outages, obviously\n> (using some extra disk space is almost certainly the least of your\n> worries when you're near to xidStopLimit). The current documentation\n> seems to take precisely the opposite view, when it says the following:\n>\n> \"The sole disadvantage of increasing autovacuum_freeze_max_age (and\n> vacuum_freeze_table_age along with it) is that the pg_xact and\n> pg_commit_ts subdirectories of the database cluster will take more\n> space\"\n>\n> This sentence is dangerously bad advice. It is precisely backwards. At\n> the same time, we'd better say something about the need to truncate\n> pg_xact/clog here. Besides all this, the new section for this is a far\n> more accurate reflection of what's really going on: most individual\n> VACUUMs (even most aggressive VACUUMs) won't ever truncate\n> pg_xact/clog (or the other relevant SLRUs). Truncation only happens\n> after a VACUUM that advances the relfrozenxid of the table which\n> previously had the oldest relfrozenxid among all tables in the entire\n> cluster -- so we need to talk about it as an issue with the high\n> watermark storage for pg_xact.\n>\n> 6. Rename the whole \"Routine Vacuuming\" section to \"Autovacuum\n> Maintenance Tasks\".\n>\n> This is what we should be emphasizing over manually run VACUUMs.\n> Besides, the current title just seems wrong -- we're talking about\n> ANALYZE just as much as VACUUM.\n>\n\n+1 on this. Talking about autovacuum as the default and how to get the most\nout of it seems like the right way to go.\n\nI read through the new version a couple times and here is some of my\nfeedback. I haven't yet reviewed individual patches or done a very detailed\ncomparison with the previous version.\n\n1) While I agree that bundling VACUUM and VACUUM FULL is not the right way,\nmoving all VACUUM FULL references into tips and warnings also seems\nexcessive. 
I think it's probably best to just have a single paragraph which\ntalks about VACUUM FULL as I do think it should be mentioned in the\nreclaiming disk space section.\n2) I felt that the new section, \"Freezing to manage the transaction ID\nspace\" could be made simpler to understand. As an example, I understood\nwhat the parameters (autovacuum_freeze_max_age, vacuum_freeze_table_age) do\nand how they interact better in the previous version of the docs.\n3) In the \"VACUUMs aggressive strategy\" section, we should first introduce\nwhat an aggressive VACUUM is before going into when it's triggered, where\nmetadata is stored etc. It's only several paragraphs later that I get to\nknow what we are referring to as an \"aggressive\" autovacuum.\n4) I think we should explicitly call out that seeing an anti-wraparound\nVACUUM or \"VACUUM table (to prevent wraparound)\" is normal and that it's\njust a VACUUM triggered due to the table having unfrozen rows with an XID\nolder than autovacuum_freeze_max_age. I've seen many users panicking on\nseeing this and feeling that they are close to a wraparound. Also, we\nshould be more clear about how it's different from VACUUMs triggered due to\nthe scale factors (cancellation behavior, being triggered when autovacuum\nis disabled etc.). I think you do some of this but given the panic around\ntransaction ID wraparounds, being more clear about this is better.\n5) Can we use a better name for the XidStopLimit mode? It seems like a very\nimplementation-centric name. Maybe a better version of \"Running out of the\nXID space\" or something like that?\n6) In the XidStopLimit mode section, it would be good to explain briefly\nwhy you could get to this scenario. It's not something which should happen\nin a normal running system unless you have a long running transaction or\ninactive replication slots or a badly configured system or something of\nthat sort. 
If you got to this point, other than running VACUUM to get out\nof the situation, it's also important to figure out what got you there in\nthe first place as many VACUUMs should have attempted to advance the\nrelfrozenxid and failed.\n\nThere are a few other small things I noticed along the way but my goal was\nto look at the overall structure. As we address some of these, I'm happy to\ndo more detailed review of individual patches.\n\nRegards,\nSamay\n\nThoughts?\n>\n> --\n> Peter Geoghegan\n>\n", "msg_date": "Tue, 2 May 2023 23:39:52 -0700", "msg_from": "samay sharma <smilingsamay@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "Hi Samay,\n\nOn Tue, May 2, 2023 at 11:40 PM samay sharma <smilingsamay@gmail.com> wrote:\n> Thanks for taking the time to do this. It is indeed difficult work.\n\nThanks for the review! I think that this is something that would\ndefinitely benefit from a perspective such as yours.\n\n> There are things I like about the changes you've proposed and some where I feel that the previous section was easier to understand.\n\nThat makes sense, and I think that I agree with every point you've\nraised, bar none. I'm pleased to see that you basically agree with the\nhigh level direction.\n\nI would estimate that the version you looked at (v2) is perhaps 35%\ncomplete. 
So some of the individual problems you noticed were a direct\nconsequence of the work just not being anywhere near complete. I'll\ntry to do a better job of tracking the relative maturity of each\ncommit/patch in each commit message, going forward.\n\nAnything that falls under \"25.2.1. Recovering Disk Space\" is\nparticularly undeveloped in v2. The way that I broke that up into a\nbunch of WARNINGs/NOTEs/TIPs was just a short term way of breaking it\nup into pieces, so that the structure was very approximately what I\nwanted. I actually think that the stuff about CLUSTER and VACUUM FULL\nbelongs in a completely different chapter. Since it is not \"Routine\nVacuuming\" at all.\n\n>> 2. Renamed \"Preventing Transaction ID Wraparound Failures\" to\n>> \"Freezing to manage the transaction ID space\". Now we talk about\n>> wraparound as a subtopic of freezing, not vice-versa. (This is a\n>> complete rewrite, as described by later items in this list).\n>\n> +1 on this too. Freezing is a normal part of vacuuming and while the aggressive vacuums are different, I think just talking about the worst case scenario while referring to it is alarmist.\n\nStrangely enough, Postgres 16 is the first version that instruments\nfreezing in its autovacuum log reports. I suspect that some long term\nusers will find it quite surprising to see how much (or how little)\nfreezing takes place in non-aggressive VACUUMs.\n\nThe introduction of page-level freezing will make it easier and more\nnatural to tune settings like vacuum_freeze_min_age, with the aim of\nsmoothing out the burden of freezing over time (particularly by making\nnon-aggressive VACUUMs freeze more). 
Page-level freezing removes any\nquestion of not freezing every tuple on a page (barring cases where\n\"removable cutoff\" is noticeably held back by an old MVCC snapshot).\nThis makes it more natural to think of freezing as a process that\nmakes it okay to store data in individual physical heap pages, long\nterm.\n\n> 1) While I agree that bundling VACUUM and VACUUM FULL is not the right way, moving all VACUUM FULL references into tips and warnings also seems excessive. I think it's probably best to just have a single paragraph which talks about VACUUM FULL as I do think it should be mentioned in the reclaiming disk space section.\n\nAs I mentioned briefly already, my intention is to move it to another\nchapter entirely. I was thinking of \"Chapter 29. Monitoring Disk\nUsage\". The \"Routine Vacuuming\" docs would then link to this sect1 --\nsomething along the lines of \"non-routine commands to reclaim a lot of\ndisk space in the event of extreme bloat\".\n\n> 2) I felt that the new section, \"Freezing to manage the transaction ID space\" could be made simpler to understand. As an example, I understood what the parameters (autovacuum_freeze_max_age, vacuum_freeze_table_age) do and how they interact better in the previous version of the docs.\n\nAgreed. I'm going to split it up some more. I think that the current\n\"25.2.2.1. VACUUM's Aggressive Strategy\" should be split in two, so we\ngo from talking about aggressive VACUUMs to Antiwraparound\nautovacuums. Finding the least confusing way of explaining it has been\na focus of mine in the last few days.\n\n> 4) I think we should explicitly call out that seeing an anti-wraparound VACUUM or \"VACUUM table (to prevent wraparound)\" is normal and that it's just a VACUUM triggered due to the table having unfrozen rows with an XID older than autovacuum_freeze_max_age. I've seen many users panicking on seeing this and feeling that they are close to a wraparound.\n\nThat has also been my exact experience. 
Users are terrified, usually\nfor no good reason at all. I'll make sure that this comes across in\nthe next revision of the patch series.\n\n> Also, we should be more clear about how it's different from VACUUMs triggered due to the scale factors (cancellation behavior, being triggered when autovacuum is disabled etc.).\n\nRight. Though I think that the biggest point of confusion for users is\nhow *few* differences there really are between antiwraparound\nautovacuum, and any other kind of autovacuum that happens to use\nVACUUM's aggressive strategy. There is really only one important\ndifference: the autocancellation behavior. This is an autovacuum\nbehavior, not a VACUUM behavior -- so the \"VACUUM side\" doesn't know\nanything about that at all.\n\n> 5) Can we use a better name for the XidStopLimit mode? It seems like a very implementation centric name. Maybe a better version of \"Running out of the XID space\" or something like that?\n\nComing up with a new user-facing name for xidStopLimit is already on\nmy TODO list (it's surprisingly hard). I have used that name so far\nbecause it unambiguously refers to the exact thing that I want to talk\nabout when discussing the worst case. Other than that, it's a terrible\nname.\n\n> 6) In the XidStopLimit mode section, it would be good to explain briefly why you could get to this scenario. It's not something which should happen in a normal running system unless you have a long running transaction or inactive replication slots or a badly configured system or something of that sort.\n\nI agree that that's important. Note that there is already something\nabout \"removable cutoff\" being held back at the start of the\ndiscussion of freezing -- that will prevent freezing in exactly the\nsame way as it prevents cleanup of dead tuples.\n\nThat will become a WARNING box in the next revision. 
There should also\nbe a similar, analogous WARNING box (about \"removable cutoff\" being\nheld back) much earlier on in the docs -- this should appear in\n\"25.2.1. Recovering Disk Space\". Obviously this structure suggests\nthat there is an isomorphism between freezing and removing bloat. For\nexample, if you cannot \"freeze\" an XID that appears in some tuple's\nxmax, then you also cannot remove that tuple because VACUUM only sees\nit as a recently dead tuple (if xmax is >= OldestXmin/removable\ncutoff, and from a deleter that already committed).\n\nI don't think that we need to spell the \"isomorphism\" point out to the\nreader directly, but having a subtle cue that that's how it works\nseems like a good idea.\n\n> If you got to this point, other than running VACUUM to get out of the situation, it's also important to figure out what got you there in the first place as many VACUUMs should have attempted to advance the relfrozenxid and failed.\n\nIt's also true that problems that can lead to the system entering\nxidStopLimit mode aren't limited to cases where doing required\nfreezing is fundamentally impossible due to something holding back\n\"removable cutoff\". It's also possible that VACUUM simply can't keep\nup (though the failsafe has helped with that problem a lot).\n\nI tend to agree that there needs to be more about this in the\nxidStopLimit subsection (discussion of freezing being held back by\n\"removable cutoff\" is insufficient), but FWIW that seems like it\nshould probably be treated as out of scope for this patch. It is more\nthe responsibility of the other patch [1] that aims to put the\nxidStopLimit documentation on a better footing (and remove that\nterrible HINT about single user mode).\n\nOf course, that other patch is closely related to this patch -- the\nprecise boundaries are unclear at this point. 
In any case I think that\nthis should happen, because I think that it's a good idea.\n\n> There are a few other small things I noticed along the way but my goal was to look at the overall structure.\n\nThanks again! This is very helpful.\n\n[1] https://www.postgresql.org/message-id/flat/CAJ7c6TM2D277U2wH8X78kg8pH3tdUqebV3_JCJqAkYQFHCFzeg%40mail.gmail.com\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 3 May 2023 14:59:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Wed, May 3, 2023 at 2:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Coming up with a new user-facing name for xidStopLimit is already on\n> my TODO list (it's surprisingly hard). I have used that name so far\n> because it unambiguously refers to the exact thing that I want to talk\n> about when discussing the worst case. Other than that, it's a terrible\n> name.\n\nWhat about \"XID allocation overload\"? The implication that I'm going\nfor here is that the system was misconfigured, or there was otherwise\nsome kind of imbalance between XID supply and demand. It also seems to\nconvey the true gravity of the situation -- it's *bad*, to be sure,\nbut in many environments it's a survivable condition.\n\nOne possible downside of this name is that it could suggest that all\nthat needs to happen is for autovacuum to catch up on vacuuming. In\nreality the user *will* probably have to do more than just wait before\nthe system's ability to allocate new XIDs returns, because (in all\nlikelihood) autovacuum just won't be able to catch up unless and until\nthe user (say) drops a replication slot. Even still, the name seems to\nwork; it describes the conceptual model of the system accurately. 
Even\nbefore the user drops the replication slot, autovacuum will at least\n*try* to get the system back to being able to allocate new XIDs once\nmore.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 3 May 2023 15:48:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "Hi,\n\nOn Wed, May 3, 2023 at 2:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> Hi Samay,\n>\n> On Tue, May 2, 2023 at 11:40 PM samay sharma <smilingsamay@gmail.com>\n> wrote:\n> > Thanks for taking the time to do this. It is indeed difficult work.\n>\n> Thanks for the review! I think that this is something that would\n> definitely benefit from a perspective such as yours.\n>\n\nGlad to hear that my feedback was helpful.\n\n\n>\n> > There are things I like about the changes you've proposed and some where\n> I feel that the previous section was easier to understand.\n>\n> That makes sense, and I think that I agree with every point you've\n> raised, bar none. I'm pleased to see that you basically agree with the\n> high level direction.\n>\n> I would estimate that the version you looked at (v2) is perhaps 35%\n> complete. So some of the individual problems you noticed were a direct\n> consequence of the work just not being anywhere near complete. I'll\n> try to do a better job of tracking the relative maturity of each\n> commit/patch in each commit message, going forward.\n>\n> Anything that falls under \"25.2.1. Recovering Disk Space\" is\n> particularly undeveloped in v2. The way that I broke that up into a\n> bunch of WARNINGs/NOTEs/TIPs was just a short term way of breaking it\n> up into pieces, so that the structure was very approximately what I\n> wanted. I actually think that the stuff about CLUSTER and VACUUM FULL\n> belongs in a completely different chapter. Since it is not \"Routine\n> Vacuuming\" at all.\n>\n> >> 2. 
Renamed \"Preventing Transaction ID Wraparound Failures\" to\n> >> \"Freezing to manage the transaction ID space\". Now we talk about\n> >> wraparound as a subtopic of freezing, not vice-versa. (This is a\n> >> complete rewrite, as described by later items in this list).\n> >\n> > +1 on this too. Freezing is a normal part of vacuuming and while the\n> aggressive vacuums are different, I think just talking about the worst case\n> scenario while referring to it is alarmist.\n>\n> Strangely enough, Postgres 16 is the first version that instruments\n> freezing in its autovacuum log reports. I suspect that some long term\n> users will find it quite surprising to see how much (or how little)\n> freezing takes place in non-aggressive VACUUMs.\n>\n> The introduction of page-level freezing will make it easier and more\n> natural to tune settings like vacuum_freeze_min_age, with the aim of\n> smoothing out the burden of freezing over time (particularly by making\n> non-aggressive VACUUMs freeze more). Page-level freezing removes any\n> question of not freezing every tuple on a page (barring cases where\n> \"removable cutoff\" is noticeably held back by an old MVCC snapshot).\n> This makes it more natural to think of freezing as a process that\n> makes it okay to store data in individual physical heap pages, long\n> term.\n>\n> > 1) While I agree that bundling VACUUM and VACUUM FULL is not the right\n> way, moving all VACUUM FULL references into tips and warnings also seems\n> excessive. I think it's probably best to just have a single paragraph which\n> talks about VACUUM FULL as I do think it should be mentioned in the\n> reclaiming disk space section.\n>\n> As I mentioned briefly already, my intention is to move it to another\n> chapter entirely. I was thinking of \"Chapter 29. Monitoring Disk\n> Usage\". 
The \"Routine Vacuuming\" docs would then link to this sect1 --\n> something along the lines of \"non-routine commands to reclaim a lot of\n> disk space in the event of extreme bloat\".\n>\n> > 2) I felt that the new section, \"Freezing to manage the transaction ID\n> space\" could be made simpler to understand. As an example, I understood\n> what the parameters (autovacuum_freeze_max_age, vacuum_freeze_table_age) do\n> and how they interact better in the previous version of the docs.\n>\n> Agreed. I'm going to split it up some more. I think that the current\n> \"25.2.2.1. VACUUM's Aggressive Strategy\" should be split in two, so we\n> go from talking about aggressive VACUUMs to Antiwraparound\n> autovacuums. Finding the least confusing way of explaining it has been\n> a focus of mine in the last few days.\n>\n\nTo be honest, this was not super simple to understand even in the previous\nversion. However, as our goal is to simplify this and make it easier to\nunderstand, I'll hold this patch-set to a higher standard :).\n\nI wish there was a simple representation (maybe even a table or something)\nwhich would explain the differences between a VACUUM which is not\naggressive, a VACUUM which ends up being aggressive due to\nvacuum_freeze_table_age and an antiwraparound autovacuum.\n\n\n>\n> > 4) I think we should explicitly call out that seeing an anti-wraparound\n> VACUUM or \"VACUUM table (to prevent wraparound)\" is normal and that it's\n> just a VACUUM triggered due to the table having unfrozen rows with an XID\n> older than autovacuum_freeze_max_age. I've seen many users panicking on\n> seeing this and feeling that they are close to a wraparound.\n>\n> That has also been my exact experience. Users are terrified, usually\n> for no good reason at all. I'll make sure that this comes across in\n> the next revision of the patch series.\n>\n\nThinking about it a bit more, I wonder if there's value in changing the\n\"(to prevent wraparound)\" to something else. 
It's understandable why people\nwho just see that in pg_stat_activity and don't read docs might assume they\nare close to a wraparound.\n\nRegards,\nSamay\n\n\n>\n> > Also, we should be more clear about how it's different from VACUUMs\n> triggered due to the scale factors (cancellation behavior, being triggered\n> when autovacuum is disabled etc.).\n>\n> Right. Though I think that the biggest point of confusion for users is\n> how *few* differences there really are between antiwraparound\n> autovacuum, and any other kind of autovacuum that happens to use\n> VACUUM's aggressive strategy. There is really only one important\n> difference: the autocancellation behavior. This is an autovacuum\n> behavior, not a VACUUM behavior -- so the \"VACUUM side\" doesn't know\n> anything about that at all.\n\n\n> > 5) Can we use a better name for the XidStopLimit mode? It seems like a\n> very implementation centric name. Maybe a better version of \"Running out of\n> the XID space\" or something like that?\n>\n> Coming up with a new user-facing name for xidStopLimit is already on\n> my TODO list (it's surprisingly hard). I have used that name so far\n> because it unambiguously refers to the exact thing that I want to talk\n> about when discussing the worst case. Other than that, it's a terrible\n> name.\n>\n> > 6) In the XidStopLimit mode section, it would be good to explain briefly\n> why you could get to this scenario. It's not something which should happen\n> in a normal running system unless you have a long running transaction or\n> inactive replication slots or a badly configured system or something of\n> that sort.\n>\n> I agree that that's important. Note that there is already something\n> about \"removable cutoff\" being held back at the start of the\n> discussion of freezing -- that will prevent freezing in exactly the\n> same way as it prevents cleanup of dead tuples.\n>\n> That will become a WARNING box in the next revision. 
There should also\n> be a similar, analogous WARNING box (about \"removable cutoff\" being\n> held back) much earlier on in the docs -- this should appear in\n> \"25.2.1. Recovering Disk Space\". Obviously this structure suggests\n> that there is an isomorphism between freezing and removing bloat. For\n> example, if you cannot \"freeze\" an XID that appears in some tuple's\n> xmax, then you also cannot remove that tuple because VACUUM only sees\n> it as a recently dead tuple (if xmax is >= OldestXmin/removable\n> cutoff, and from a deleter that already committed).\n>\n> I don't think that we need to spell the \"isomorphism\" point out to the\n> reader directly, but having a subtle cue that that's how it works\n> seems like a good idea.\n>\n> > If you got to this point, other than running VACUUM to get out of the\n> situation, it's also important to figure out what got you there in the\n> first place as many VACUUMs should have attempted to advance the\n> relfrozenxid and failed.\n>\n> It's also true that problems that can lead to the system entering\n> xidStopLimit mode aren't limited to cases where doing required\n> freezing is fundamentally impossible due to something holding back\n> \"removable cutoff\". It's also possible that VACUUM simply can't keep\n> up (though the failsafe has helped with that problem a lot).\n>\n> I tend to agree that there needs to be more about this in the\n> xidStopLimit subsection (discussion of freezing being held back by\n> \"removable cutoff\" is insufficient), but FWIW that seems like it\n> should probably be treated as out of scope for this patch. It is more\n> the responsibility of the other patch [1] that aims to put the\n> xidStopLimit documentation on a better footing (and remove that\n> terrible HINT about single user mode).\n>\n> Of course, that other patch is closely related to this patch -- the\n> precise boundaries are unclear at this point. 
In any case I think that\n> this should happen, because I think that it's a good idea.\n>\n> > There are a few other small things I noticed along the way but my goal\n> was to look at the overall structure.\n>\n> Thanks again! This is very helpful.\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CAJ7c6TM2D277U2wH8X78kg8pH3tdUqebV3_JCJqAkYQFHCFzeg%40mail.gmail.com\n> --\n> Peter Geoghegan\n>\n", "msg_date": "Thu, 4 May 2023 14:57:32 -0700", "msg_from": "samay sharma <smilingsamay@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "Hi,\n\nOn Wed, May 3, 2023 at 3:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Wed, May 3, 2023 at 2:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Coming up with a new user-facing name for xidStopLimit is already on\n> > my TODO list (it's surprisingly hard). I have used that name so far\n> > because it unambiguously refers to the exact thing that I want to talk\n> > about when discussing the worst case. Other than that, it's a terrible\n> > name.\n>\n> What about \"XID allocation overload\"? The implication that I'm going\n> for here is that the system was misconfigured, or there was otherwise\n> some kind of imbalance between XID supply and demand. It also seems to\n> convey the true gravity of the situation -- it's *bad*, to be sure,\n> but in many environments it's a survivable condition.\n>\n\nMy concern with the term \"overload\" is similar to what you expressed below.\nIt indicates that the situation is due to extra load on the system (or due\nto too many XIDs being allocated) and people might assume that the\nsituation will resolve itself if the load were to be reduced / removed.\nHowever, it's due to that along with some misconfiguration or some other\nthing holding back the \"removable cutoff\".\n\nWhat do you think about the term \"Exhaustion\"? Maybe something like \"XID\nallocation exhaustion\" or \"Exhaustion of allocatable XIDs\"?
The term\nindicates that we are running out of XIDs to allocate without necessarily\npointing towards a reason.\n\nRegards,\nSamay\n\n\n>\n> One possible downside of this name is that it could suggest that all\n> that needs to happen is for autovacuum to catch up on vacuuming. In\n> reality the user *will* probably have to do more than just wait before\n> the system's ability to allocate new XIDs returns, because (in all\n> likelihood) autovacuum just won't be able to catch up unless and until\n> the user (say) drops a replication slot. Even still, the name seems to\n> work; it describes the conceptual model of the system accurately. Even\n> before the user drops the replication slot, autovacuum will at least\n> *try* to get the system back to being able to allocate new XIDs once\n> more.\n>\n> --\n> Peter Geoghegan\n>\n", "msg_date": "Thu, 4 May 2023 15:18:33 -0700", "msg_from": "samay sharma <smilingsamay@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Thu, May 4, 2023 at 3:18 PM samay sharma <smilingsamay@gmail.com> wrote:\n> What do you think about the term \"Exhaustion\"?\n\nI'm really not sure.\n\nAttached is v3, which (as with v1 and v2) comes with a prebuilt html\n\"Routine Vacuuming\", for the convenience of reviewers.\n\nv3 does have some changes based on your feedback (and feedback from\nJohn), but overall v3 can be thought of as v2 with lots and lots of\nadditional copy-editing -- though still not enough, I'm sure.\n\nv3 does add some (still incomplete) introductory remarks about the\nintended audience and goals for \"Routine Vacuuming\".
But most of the\nchanges are to the way the docs describe freezing and aggressive\nVACUUM, which continued to be my focus for v3.\n\n-- \nPeter Geoghegan", "msg_date": "Thu, 4 May 2023 19:06:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Wed, 3 May 2023 at 18:50, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> What about \"XID allocation overload\"? The implication that I'm going\n> for here is that the system was misconfigured, or there was otherwise\n> some kind of imbalance between XID supply and demand.\n\nFwiw while \"wraparound\" has pitfalls I think changing it for a new\nword isn't really helpful. Especially if it's a mostly meaningless\nword like \"overload\" or \"exhaustion\". It suddenly makes every existing\ndoc hard to find and confusing to read.\n\nI say \"exhaustion\" or \"overload\" are meaningless because their meaning\nis entirely dependent on context. It's not like memory exhaustion or\ni/o overload where it's a finite resource and it's just the sheer\namount in use that matters. One way or another the user needs to\nunderstand that it's two numbers marching through a sequence and the\ndistance between them matters.\n\nI feel like \"wraparound\" while imperfect is not any worse than any\nother word. It still requires context to understand but it's context\nthat there are many docs online that already explain and are\ngoogleable.\n\nIf we wanted a new word it would be \"overrun\" but like I say, it would\njust create a new context dependent technical term that users would\nneed to find docs that explain and give context. 
I don't think that\nreally helps users at all\n\n-- \ngreg\n\n\n", "msg_date": "Thu, 11 May 2023 16:04:05 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Thu, May 11, 2023 at 1:04 PM Greg Stark <stark@mit.edu> wrote:\n> Fwiw while \"wraparound\" has pitfalls I think changing it for a new\n> word isn't really helpful. Especially if it's a mostly meaningless\n> word like \"overload\" or \"exhaustion\". It suddenly makes every existing\n> doc hard to find and confusing to read.\n\nJust to be clear, I am not proposing changing the name of\nanti-wraparound autovacuum at all. What I'd like to do is use a term\nlike \"XID exhaustion\" to refer to the state that we internally refer\nto as xidStopLimit. My motivation is simple: we've completely\nterrified users by emphasizing wraparound, which is something that is\nexplicitly and prominently presented as a variety of data corruption.\nThe docs say this:\n\n\"But since transaction IDs have limited size (32 bits) a cluster that\nruns for a long time (more than 4 billion transactions) would suffer\ntransaction ID wraparound: the XID counter wraps around to zero, and\nall of a sudden transactions that were in the past appear to be in the\nfuture — which means their output become invisible. In short,\ncatastrophic data loss.\"\n\n> I say \"exhaustion\" or \"overload\" are meaningless because their meaning\n> is entirely dependent on context. It's not like memory exhaustion or\n> i/o overload where it's a finite resource and it's just the sheer\n> amount in use that matters.\n\nBut transaction IDs are a finite resource, in the sense that you can\nnever have more than about 2.1 billion distinct unfrozen XIDs at any\none time. \"Transaction ID exhaustion\" is therefore a lot more\ndescriptive of the underlying problem. 
It's a lot better than\nwraparound, which, as I've said, is inaccurate in two major ways:\n\n1. Most cases involving xidStopLimit (or even single-user mode data\ncorruption) won't involve any kind of physical integer wraparound.\n\n2. Most physical integer wraparound is harmless and perfectly routine.\n\nBut even this is fairly secondary to me. I don't actually think it's\nthat important that the name describe exactly what's going on here --\nthat's expecting rather a lot from a name. That's not really the goal.\nThe goal is to undo the damage of documentation that heavily implies\nthat data corruption is the eventual result of not doing enough\nvacuuming, in its basic introductory remarks to freezing stuff.\n\nLike Samay, my consistent experience (particularly back in my Heroku\ndays) has been that people imagine that data corruption would happen\nwhen the system reached what we'd call xidStopLimit. Can you blame\nthem for thinking that? Almost any name for xidStopLimit that doesn't\nhave that historical baggage seems likely to be a vast improvement.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 11 May 2023 13:40:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Thu, May 11, 2023 at 1:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Just to be clear, I am not proposing changing the name of\n> anti-wraparound autovacuum at all. What I'd like to do is use a term\n> like \"XID exhaustion\" to refer to the state that we internally refer\n> to as xidStopLimit. 
My motivation is simple: we've completely\n> terrified users by emphasizing wraparound, which is something that is\n> explicitly and prominently presented as a variety of data corruption.\n> The docs say this:\n>\n> \"But since transaction IDs have limited size (32 bits) a cluster that\n> runs for a long time (more than 4 billion transactions) would suffer\n> transaction ID wraparound: the XID counter wraps around to zero, and\n> all of a sudden transactions that were in the past appear to be in the\n> future — which means their output become invisible. In short,\n> catastrophic data loss.\"\n\nNotice that this says that \"catastrophic data loss\" occurs when \"the\nXID counter wraps around to zero\". I think that this was how it worked\nbefore the invention of freezing, over 20 years ago -- the last time\nthe system would allocate about 4 billion XIDs without doing any\nfreezing.\n\nWhile it is still possible to corrupt the database in single user\nmode, it has precisely nothing to do with the point that \"the XID\ncounter wraps around to zero\". I believe that this wording has done\nnot insignificant damage to the project's reputation. But let's assume\nfor a moment that there's only a tiny chance that I'm right about all\nof this -- let's assume I'm probably just being alarmist about how\nthis has been received in the wider world. Even then: why take even a\nsmall chance?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 11 May 2023 13:49:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Thu, May 4, 2023 at 3:18 PM samay sharma <smilingsamay@gmail.com> wrote:\n> What do you think about the term \"Exhaustion\"? Maybe something like \"XID allocation exhaustion\" or \"Exhaustion of allocatable XIDs\"?\n\nI use the term \"transaction ID exhaustion\" in the attached revision,\nv4. 
Overall, v4 builds on the work that went into v2 and v3, by\ncontinuing to polish the overhaul of everything related to freezing,\nrelfrozenxid advancement, and anti-wraparound autovacuum.\n\nIt would be nice if it was possible to add an animation/diagram a\nlittle like this one: https://tuple-freezing-demo.angusd.com (this is\nhow I tend to think about the \"transaction ID space\".)\n\nI feel that the patch that deals with freezing is really coming\ntogether in v4. The main problem now is lack of detailed review --\nthough the freezing related patch is still not committable, it's\ngetting close now. (The changes to the docs covering freezing should\nbe committed separately from any further work on \"25.2.1. Recovering\nDisk Space\". I still haven't done much there in v4, and those parts\nclearly aren't anywhere near being committable. So, for now, they can\nmostly be ignored.)\n\nv4 also limits use of the term \"wraparound\" to places that directly\ndiscuss anti-wraparound autovacuums (plus one place in xact.sgml,\nwhere discussion of \"true unsigned integer wraparound\" and related\nimplementation details has been moved). Otherwise we use the term\n\"transaction ID exhaustion\", which is pretty much the user-facing name\nfor \"xidStopLimit\". I feel that this is a huge improvement, for the\nreason given to Greg earlier. I'm flexible on the details, but I feel\nstrongly that we should minimize use of the term wraparound wherever\nit might have the connotation of \"the past becoming the future\". This\nis not a case of inventing a new terminology for its own sake. If\nanybody is skeptical I ask that they take a look at what I came up\nwith before declaring it a bad idea. 
I have made that as easy as\npossible, by once again attaching a prebuilt routine-vacuuming.html.\n\nI no longer believe that committing this patch series needs to block\non the patch that seeks to put things straight with single user mode\nand xidStopLimit/transaction ID exhaustion (the one that John Naylor\nis currently working on getting in shape), either (I'll explain my\nreasoning if somebody wants to hear it).\n\nOther changes in v4, compared to v3:\n\n* Improved discussion of the differences between non-aggressive and\naggressive VACUUM.\n\nNow mentions the issue of aggressive VACUUMs waiting for a cleanup\nlock, including mention of the BufferPin wait event. This is the\nsecond, minor difference between each kind of VACUUM. It matters much\nless than the first difference, but it does merit a mention.\n\nThe discussion of aggressive VACUUM seems to be best approached by\nstarting with the mechanical differences, and only later going into\nthe consequences of those differences. (Particularly catch-up\nfreezing.)\n\n* Explains \"catch-up freezing\" performed by aggressive VACUUMs directly.\n\n\"Catch-up\" freezing is the really important \"consequence\" -- something\nthat emerges from how each type of VACUUM behaves over time. It is an\nindirect consequence of the behaviors. I would like to counter the\nperception that some users have about freezing only happening during\naggressive VACUUMs (or anti-wraparound autovacuums). But more than\nthat, talking about catch-up freezing seems essential because it is\nthe single most important difference.\n\n* Much improved handling of the discussion of anti-wraparound\nautovacuum, and how it relates to aggressive VACUUMs, following\nfeedback from Samay.\n\nThere is now only fairly minimal overlap in the discussion of\naggressive VACUUM and anti-wraparound autovacuuming. We finish the\ndiscussion of aggressive VACUUM just after we start discussing\nanti-wraparound autovacuum. 
This transition works well, because it\nenforces the idea that anti-wraparound autovacuum isn't really special\ncompared to any other aggressive autovacuum. This was something that\nSamay expressed particular concern about: making anti-wraparound\nautovacuums sound less scary. Though it's also a concern I had from\nthe outset, based on practical experience and interactions with people\nthat have much less knowledge of Postgres than I do.\n\n* Anti-wraparound autovacuum is now mostly discussed as something that\nhappens to static or mostly-static tables.\n\nThis is related to the goal of making anti-wraparound autovacuums\nsound less scary. Larger tables don't necessarily require any\nanti-wraparound autovacuums these days -- we now have the insert-driven\nautovacuum trigger condition, so it's plausible (perhaps\neven likely) that all aggressive VACUUMs against the largest\nappend-only tables can happen when autovacuum triggers VACUUMs to\nprocess recently inserted tuples.\n\nThis moves discussion of anti-wraparound av in the direction of:\n\"Anti-wraparound autovacuum is a special type of autovacuum. 
Its\npurpose is to ensure that relfrozenxid advances when no earlier VACUUM\ncould advance it in passing — often because no VACUUM has run against\nthe table for an extended period.\"\n\n* Added a couple of \"Tips\" about instrumentation that appears in the\nserver log whenever autovacuum reports on a VACUUM operation.\n\n* Much improved \"Truncating Transaction Status Information\" subsection.\n\nMy explanation of the ways in which autovacuum_freeze_max_age can\naffect the storage overhead of commit/abort status in pg_xact is much\nclearer than it was in v3 -- pg_xact truncation is now treated as\nsomething loosely related to the global config of anti-wraparound\nautovacuum, which makes most sense.\n\nIt took a great deal of effort to find a structure that covered\neverything, and that highlighted all of the important relationships\nwithout going too far, while at the same time not being a huge mess.\nThat's what I feel I've arrived at with v4.\n\n--\nPeter Geoghegan", "msg_date": "Thu, 11 May 2023 18:18:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "Thanks for the continued work, Peter. I hate to be the guy that starts this way,\nbut this is my first ever response on pgsql-hackers. (insert awkward\nsmile face).\nHopefully I've followed etiquette well, but please forgive any\nmissteps, and I'm\nhappy for any help in making better contributions in the future.\n\nOn Thu, May 11, 2023 at 9:19 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, May 4, 2023 at 3:18 PM samay sharma <smilingsamay@gmail.com> wrote:\n> > What do you think about the term \"Exhaustion\"? Maybe something like \"XID allocation exhaustion\" or \"Exhaustion of allocatable XIDs\"?\n>\n> I use the term \"transaction ID exhaustion\" in the attached revision,\n> v4. 
Overall, v4 builds on the work that went into v2 and v3, by\n> continuing to polish the overhaul of everything related to freezing,\n> relfrozenxid advancement, and anti-wraparound autovacuum.\n\nJust to say at the outset, as has been said earlier in the thread by others,\nthat this is herculean work. Thank you for putting in the effort you have thus far.\nThere's a lot of good from where I sit in the modification efforts.\nIt's a heavy,\ndense topic, so there's probably never going to be a perfect way to\nget it all in,\nbut some of the context early on, especially, is helpful for framing.\n\n>\n> It would be nice if it was possible to add an animation/diagram a\n> little like this one: https://tuple-freezing-demo.angusd.com (this is\n> how I tend to think about the \"transaction ID space\".)\n\nIndeed. With volunteer docs, illustrations/diagrams are hard for sure. But,\nthis or something akin to the \"clock\" image I've seen elsewhere when\ndescribing the transaction ID space would probably be helpful if it were ever\npossible. In fact, there's just a lot about the MVCC stuff in general that\nwould benefit from diagrams. But alas, I guess that's why we have some\ngood go-to community talks/slide decks. :-)\n\n> v4 also limits use of the term \"wraparound\" to places that directly\n> discuss anti-wraparound autovacuums (plus one place in xact.sgml,\n> where discussion of \"true unsigned integer wraparound\" and related\n> implementation details has been moved). Otherwise we use the term\n> \"transaction ID exhaustion\", which is pretty much the user-facing name\n> for \"xidStopLimit\". I feel that this is a huge improvement, for the\n> reason given to Greg earlier. I'm flexible on the details, but I feel\n> strongly that we should minimize use of the term wraparound wherever\n> it might have the connotation of \"the past becoming the future\". This\n> is not a case of inventing a new terminology for its own sake. 
If\n> anybody is skeptical I ask that they take a look at what I came up\n> with before declaring it a bad idea. I have made that as easy as\n> possible, by once again attaching a prebuilt routine-vacuuming.html.\n\nThanks again for doing this. Really helpful for doc newbies like me that\nwant to help but are still working through the process. Really helpful\nand appreciated.\n\n>\n>\n> Other changes in v4, compared to v3:\n>\n> * Improved discussion of the differences between non-aggressive and\n> aggressive VACUUM.\n\nThis was helpful for me and not something I've previously put much thought\ninto. Helpful context that is missing from the current docs.\n\n> * Explains \"catch-up freezing\" performed by aggressive VACUUMs directly.\n>\n> \"Catch-up\" freezing is the really important \"consequence\" -- something\n> that emerges from how each type of VACUUM behaves over time. It is an\n> indirect consequence of the behaviors. I would like to counter the\n> perception that some users have about freezing only happening during\n> aggressive VACUUMs (or anti-wraparound autovacuums). But more than\n> that, talking about catch-up freezing seems essential because it is\n> the single most important difference.\n>\n\nSimilarly, this was helpful overall context of various things\nhappening with freezing.\n\n> * Much improved handling of the discussion of anti-wraparound\n> autovacuum, and how it relates to aggressive VACUUMs, following\n> feedback from Samay.\n>\n> There is now only fairly minimal overlap in the discussion of\n> aggressive VACUUM and anti-wraparound autovacuuming. We finish the\n> discussion of aggressive VACUUM just after we start discussing\n> anti-wraparound autovacuum. This transition works well, because it\n> enforces the idea that anti-wraparound autovacuum isn't really special\n> compared to any other aggressive autovacuum. This was something that\n> Samay expressed particularly concern about: making anti-wraparound\n> autovacuums sound less scary. 
Though it's also a concern I had from\n> the outset, based on practical experience and interactions with people\n> that have much less knowledge of Postgres than I do.\n\nAgree. This flows fairly well and helps the user understand that each\n\"next step\"\nin the vacuum/freezing process has a distinct job based on previous work.\n\n>\n> * Anti-wraparound autovacuum is now mostly discussed as something that\n> happens to static or mostly-static tables....\n> ...This moves discussion of anti-wraparound av in the direction of:\n> \"Anti-wraparound autovacuum is a special type of autovacuum. Its\n> purpose is to ensure that relfrozenxid advances when no earlier VACUUM\n> could advance it in passing — often because no VACUUM has run against\n> the table for an extended period.\"\n>\n\nAgain, learned something new here, at least in how I think about it and talk\nwith others. In total, I do think these changes make wraparound/exhaustion\nseem less \"the sky is falling\".\n\n> * Added a couple of \"Tips\" about instrumentation that appears in the\n> server log whenever autovacuum reports on a VACUUM operation.\n>\n> * Much improved \"Truncating Transaction Status Information\" subsection.\n>\n> My explanation of the ways in which autovacuum_freeze_max_age can\n> affect the storage overhead of commit/abort status in pg_xact is much\n> clearer than it was in v3 -- pg_xact truncation is now treated as\n> something loosely related to the global config of anti-wraparound\n> autovacuum, which makes most sense.\n>\nThis one isn't totally sinking in with me yet. 
Need another read.\n\n> It took a great deal of effort to find a structure that covered\n> everything, and that highlighted all of the important relationships\n> without going too far, while at the same time not being a huge mess.\n> That's what I feel I've arrived at with v4.\n\nIn most respects I agree with the overall flow of changes w.r.t the current doc.\nFocusing on all of this as something that should normally just be happening\nas part of autovacuum is helpful. Working through it as an order of operations\n(and I'm just assuming this is the general order) feels like it ties\nthings together\na lot more. I honestly come away from this document with more of a \"I understand\nthe process\" feel than I did previously.\n\nFor now, I'd add the following few comments on the intro section,\n2.5.1 and 2.5.2. I\nhaven't gotten to the bottom sections yet for much feedback.\n\nIntro Comments:\n1) \"The autovacuum daemon automatically schedules maintenance tasks based on\nworkload requirements.\" feels at tension with \"Autovacuum scheduling\nis controlled\nvia threshold settings.\"\n\nOwing to the lingering belief that many users have whereby hosting providers\nhave magically enabled Postgres to do all of this for you, there is\nstill a need to\nactively deal with these thresholds based on load. That is, as far as\nI understand,\nPostgres doesn't automatically adjust based on load. 
Someone/thing\nstill has to modify\nthe thresholds as load and data size changes.\n\nIf the \"workload requirements\" is pointing towards aggressive\nfreezing/wraparound\ntasks that happen regardless of thresholds, then for me at least that\nisn't clear\nin that sentence and it feels like there's an implication that\nPostgres/autovacuum\nis going to magically adjust overall vacuum work based on database workload.\n\n2) \"The intended audience is database administrators that wish to\nperform more advanced\n autovacuum tuning, with any of the following goals in mind:\"\n\nI love calling out the audience in some way. That's really helpful, as are the\nstated goals in the bullet list. However, as someone feeling pretty novice\nafter reading all of this, I can't honestly connect how the content on this page\nhelps me to more advanced tuning. I have a much better idea how freezing,\nin particular, works (yay!), but I'm feeling a bit dense how almost anything\nhere helps me tune vacuum, at least as it relates to the bullets.\n\nI'm sure you have a connection in mind for each, and certainly understanding the\ninner workings of what's happening under the covers is tremendously beneficial,\nbut when I search for \"response\" or \"performance\" in this document, it refers\nback to another page (not included in this patch) that talks about the\nthresholds.\n\nIt might be as simple as adding something to the end of each bullet to draw\nthat relationship, but as is, it's hard for me to do it mentally (although I can\nconjecture a few things on my own)\n\nThat said, I definitely appreciate the callout that tuning is an\niterative process\nand the minor switch from \"creates a substantial amount of I/O\ntraffic\" to \"may create...\".\n\n** Section 2.5.1 - Recovering Disk Space **\n\n3). \"The space dead tuples occupy must eventually be reclaimed for reuse\nby new rows, to avoid unbounded growth of disk space requirements. 
Reclaiming\nspace from dead rows is VACUUM's main responsibility.\"\n\nIt feels like one connection you could make to the bullet list above\nis in this area\nand not mentioned. By freeing up space and reducing the number of pages that\nneed to be read for satisfying a query, vacuum and recovering disk space\n(theoretically) improves query performance. Not 100% sure how to add it in context\nof these first two paragraphs.\n\n4) Caution: \"It may be a good idea to add monitoring to alert you about this.\"\nI hate to be pedantic about it, but I think we should spell out\n\"this\". Do we have\na pointer in documentation to what kinds of things to monitor for? Am I monitoring\nlong-running transactions or some metric that shows me that VACUUM is being\n\"held back\"? I know what you mean, but it's not clear to me how to do the right\nthing in my environment here.\n\n5) The plethora of tips/notes/warnings.\nAs you and others have mentioned, as presented these really have no context\nfor me. Individually they are good/helpful information, but it's\nreally hard to make\na connection to what I should \"do\" about it.\n\nIt seems to me that this would be a good place to put a subsection which is\nsomething like, \"A note about reclaiming disk space\" or something. In my\nexperience, most people hear about and end up using VACUUM FULL because\nthings got out of control and they want to get into a better spot (I have been\nin that boat). 
I think with a small section that says, in essence,\n\"hey, now that\nyou understand why/how vacuum reclaims disk resources normally, if you're\nin a position where things aren't in a good state, this is what you need to know\nif you want to reclaim space from a really inefficient table\"\n\nFor me, at least, I think it would be easier to read/grok what you're sharing in\nthese callouts.\n\n6) One last missing piece that very well might be in another page not referenced\n(I obviously need to get the PG16 docs pulled and built locally so\nthat I can have\nbetter overall reference. My apologies).\n\nIn my experience, one of the biggest issues with the thresholds and recovering\nspace is the idea of tuning individual tables, not just the entire\ndatabase. 5/10/20%\nmight be fine for most tables, but it's usually the really active ones\nthat need the\ntuning, specifically lowering the thresholds. That doesn't come across to me in\nthis section at all. Again, maybe I've missed something on another page and\nit's all good, but it felt worth calling out.\n\nPlus, it may provide an opportunity to bring in the threshold formulas again if\nthey aren't referenced elsewhere (although they probably are).\n\nHope that makes sense.\n\n** Section 2.5.2: Freezing to manage... **\nAs stated above, the effort here overall is great IMO. I like the flow\nand reduction\nin alarmist tone for things like wraparound, etc. I understand more\nabout freezing,\naggressive and otherwise, than I did before.\n\n7) That said, totally speaking as a non-contributor, this section is\nobviously very long\nfor good reason. But, by the time I've gotten down to 25.2.2.3, my\nbrain is a bit\nbewildered on where we've gotten to. 
That's more a comment on my capability\nto process it all, but I wonder if a slightly more explicit intro\ncould help set the\nstage at least.\n\n\"One side-effect of vacuum and transaction ID management at the row level is\nthat PostgreSQL would normally need to inspect each row for every query to\nensure it is visible to each requesting transaction. In order to\nreduce the need to\nread and inspect excessive amounts of data at query time or when normal vacuum\nmaintenance kicks in, VACUUM has a second job called freezing, which\naccomplishes three goals: (attempting to tie in the three sections)\n * speeding up queries and vacuum operations by...\n * advancing the transaction ID space on generally static tables...\n * ensure there are always free transaction IDs available for normal\noperation...\n\"\n\nMaybe totally worthless and too much, but something like that might set a reader\nup for just a bit more context. Then you could take most of what comes before\n\"2.5.2.2.1 Aggressive Vacuum\" as a subsection (would require a renumber below)\nwith something like \"2.5.2.2.1 Normal Freezing Activity\"\n\n8) Note \"In PostgreSQL versions before 16...\"\nShowing my naivety, somehow this isn't connecting with me totally. If\nit's important\nto call out, then maybe we need a connecting sentence. Based on the content\nabove, I think you're pointing to \"It's also why VACUUM will freeze all eligible\ntuples from a heap page once the decision to freeze at least one tuple\nis taken:\"\nIf that's it, it's just not clear to me what's totally changed. Sorry,\nmore learning. 
:-)\n\n---\nHope something in there is helpful.\n\nRyan Booz\n\n>\n> --\n> Peter Geoghegan\n\n\n", "msg_date": "Fri, 12 May 2023 13:36:27 -0400", "msg_from": "Ryan Booz <ryan@softwareandbooz.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "And, of course, I forgot that I switch to text-mode after writing most\nof this, so the carriage returns were unnecessary. (facepalm... sigh)\n\n--\nRyan\n\nOn Fri, May 12, 2023 at 1:36 PM Ryan Booz <ryan@softwareandbooz.com> wrote:\n>\n> Thanks for the continued work, Peter. I hate to be the guy that starts this way,\n> but this is my first ever response on pgsql-hackers. (insert awkward\n> smile face).\n> Hopefully I've followed etiquette well, but please forgive any\n> missteps, and I'm\n> happy for any help in making better contributions in the future.\n>\n> On Thu, May 11, 2023 at 9:19 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Thu, May 4, 2023 at 3:18 PM samay sharma <smilingsamay@gmail.com> wrote:\n> > > What do you think about the term \"Exhaustion\"? Maybe something like \"XID allocation exhaustion\" or \"Exhaustion of allocatable XIDs\"?\n> >\n> > I use the term \"transaction ID exhaustion\" in the attached revision,\n> > v4. Overall, v4 builds on the work that went into v2 and v3, by\n> > continuing to polish the overhaul of everything related to freezing,\n> > relfrozenxid advancement, and anti-wraparound autovacuum.\n>\n> Just to say on the outset, as has been said earlier in the tread by others,\n> that this is herculean work. 
Thank you for putting the effort you have thus far.\n> There's a lot of good from where I sit in the modification efforts.\n> It's a heavy,\n> dense topic, so there's probably never going to be a perfect way to\n> get it all in,\n> but some of the context early on, especially, is helpful for framing.\n>\n> >\n> > It would be nice if it was possible to add an animation/diagram a\n> > little like this one: https://tuple-freezing-demo.angusd.com (this is\n> > how I tend to think about the \"transaction ID space\".)\n>\n> Indeed. With volunteer docs, illustrations/diagrams are hard for sure. But,\n> this or something akin to the \"clock\" image I've seen elsewhere when\n> describing the transaction ID space would probably be helpful if it were ever\n> possible. In fact, there's just a lot about the MVCC stuff in general that\n> would benefit from diagrams. But alas, I guess that's why we have some\n> good go-to community talks/slide decks. :-)\n>\n> > v4 also limits use of the term \"wraparound\" to places that directly\n> > discuss anti-wraparound autovacuums (plus one place in xact.sgml,\n> > where discussion of \"true unsigned integer wraparound\" and related\n> > implementation details has been moved). Otherwise we use the term\n> > \"transaction ID exhaustion\", which is pretty much the user-facing name\n> > for \"xidStopLimit\". I feel that this is a huge improvement, for the\n> > reason given to Greg earlier. I'm flexible on the details, but I feel\n> > strongly that we should minimize use of the term wraparound wherever\n> > it might have the connotation of \"the past becoming the future\". This\n> > is not a case of inventing a new terminology for its own sake. If\n> > anybody is skeptical I ask that they take a look at what I came up\n> > with before declaring it a bad idea. I have made that as easy as\n> > possible, by once again attaching a prebuilt routine-vacuuming.html.\n>\n> Thanks again for doing this. 
Really helpful for doc newbies like me that\n> want to help but are still working through the process. Really helpful\n> and appreciated.\n>\n> >\n> >\n> > Other changes in v4, compared to v3:\n> >\n> > * Improved discussion of the differences between non-aggressive and\n> > aggressive VACUUM.\n>\n> This was helpful for me and not something I've previously put much thought\n> into. Helpful context that is missing from the current docs.\n>\n> > * Explains \"catch-up freezing\" performed by aggressive VACUUMs directly.\n> >\n> > \"Catch-up\" freezing is the really important \"consequence\" -- something\n> > that emerges from how each type of VACUUM behaves over time. It is an\n> > indirect consequence of the behaviors. I would like to counter the\n> > perception that some users have about freezing only happening during\n> > aggressive VACUUMs (or anti-wraparound autovacuums). But more than\n> > that, talking about catch-up freezing seems essential because it is\n> > the single most important difference.\n> >\n>\n> Similarly, this was helpful overall context of various things\n> happening with freezing.\n>\n> > * Much improved handling of the discussion of anti-wraparound\n> > autovacuum, and how it relates to aggressive VACUUMs, following\n> > feedback from Samay.\n> >\n> > There is now only fairly minimal overlap in the discussion of\n> > aggressive VACUUM and anti-wraparound autovacuuming. We finish the\n> > discussion of aggressive VACUUM just after we start discussing\n> > anti-wraparound autovacuum. This transition works well, because it\n> > enforces the idea that anti-wraparound autovacuum isn't really special\n> > compared to any other aggressive autovacuum. This was something that\n> > Samay expressed particularly concern about: making anti-wraparound\n> > autovacuums sound less scary. 
Though it's also a concern I had from\n> > the outset, based on practical experience and interactions with people\n> > that have much less knowledge of Postgres than I do.\n>\n> Agree. This flows fairly well and helps the user understand that each\n> \"next step\"\n> in the vacuum/freezing process has a distinct job based on previous work.\n>\n> >\n> > * Anti-wraparound autovacuum is now mostly discussed as something that\n> > happens to static or mostly-static tables....\n> > ...This moves discussion of anti-wraparound av in the direction of:\n> > \"Anti-wraparound autovacuum is a special type of autovacuum. Its\n> > purpose is to ensure that relfrozenxid advances when no earlier VACUUM\n> > could advance it in passing — often because no VACUUM has run against\n> > the table for an extended period.\"\n> >\n>\n> Again, learned something new here, at least in how I think about it and talk\n> with others. In total, I do think these changes make wraparound/exhaustion\n> seem less \"the sky is falling\".\n>\n> > * Added a couple of \"Tips\" about instrumentation that appears in the\n> > server log whenever autovacuum reports on a VACUUM operation.\n> >\n> > * Much improved \"Truncating Transaction Status Information\" subsection.\n> >\n> > My explanation of the ways in which autovacuum_freeze_max_age can\n> > affect the storage overhead of commit/abort status in pg_xact is much\n> > clearer than it was in v3 -- pg_xact truncation is now treated as\n> > something loosely related to the global config of anti-wraparound\n> > autovacuum, which makes most sense.\n> >\n> This one isn't totally sinking in with me yet. 
Need another read.\n>\n> > It took a great deal of effort to find a structure that covered\n> > everything, and that highlighted all of the important relationships\n> > without going too far, while at the same time not being a huge mess.\n> > That's what I feel I've arrived at with v4.\n>\n> In most respects I agree with the overall flow of changes w.r.t the current doc.\n> Focusing on all of this as something that should normally just be happening\n> as part of autovacuum is helpful. Working through it as an order of operations\n> (and I'm just assuming this is the general order) feels like it ties\n> things together\n> a lot more. I honestly come away from this document with more of a \"I understand\n> the process\" feel than I did previously.\n>\n> For now, I'd add the following few comments on the intro section,\n> 2.5.1 and 2.5.2. I\n> haven't gotten to the bottom sections yet for much feedback.\n>\n> Intro Comments:\n> 1) \"The autovacuum daemon automatically schedules maintenance tasks based on\n> workload requirements.\" feels at tension with \"Autovacuum scheduling\n> is controlled\n> via threshold settings.\"\n>\n> Owing to the lingering belief that many users have whereby hosting providers\n> have magically enabled Postgres to do all of this for you, there is\n> still a need to\n> actively deal with these thresholds based on load. That is, as far as\n> I understand,\n> Postgres doesn't automatically adjust based on load. 
Someone/thing\n> still has to modify\n> the thresholds as load and data size changes.\n>\n> If the \"workload requirements\" is pointing towards aggressive\n> freezing/wraparound\n> tasks that happen regardless of thresholds, then for me at least that\n> isn't clear\n> in that sentence and it feels like there's an implication that\n> Postgres/autovacuum\n> is going to magically adjust overall vacuum work based on database workload.\n>\n> 2) \"The intended audience is database administrators that wish to\n> perform more advanced\n> autovacuum tuning, with any of the following goals in mind:\"\n>\n> I love calling out the audience in some way. That's really helpful, as are the\n> stated goals in the bullet list. However, as someone feeling pretty novice\n> after reading all of this, I can't honestly connect how the content on this page\n> helps me to more advanced tuning. I have a much better idea how freezing,\n> in particular, works (yay!), but I'm feeling a bit dense how almost anything\n> here helps me tune vacuum, at least as it relates to the bullets.\n>\n> I'm sure you have a connection in mind for each, and certainly understanding the\n> inner workings of what's happening under the covers is tremendously beneficial,\n> but when I search for \"response\" or \"performance\" in this document, it refers\n> back to another page (not included in this patch) that talks about the\n> thresholds.\n>\n> It might be as simple as adding something to the end of each bullet to draw\n> that relationship, but as is, it's hard for me to do it mentally (although I can\n> conjecture a few things on my own)\n>\n> That said, I definitely appreciate the callout that tuning is an\n> iterative process\n> and the minor switch from \"creates a substantial amount of I/O\n> traffic\" to \"may create...\".\n>\n> ** Section 2.5.1 - Recovering Disk Space **\n>\n> 3). 
\"The space dead tuples occupy must eventually be reclaimed for reuse\n> by new rows, to avoid unbounded growth of disk space requirements. Reclaiming\n> space from dead rows is VACUUM's main responsibility.\"\n>\n> It feels like one connection you could make to the bullet list above\n> is in this area\n> and not mentioned. By freeing up space and reducing the number of pages that\n> need to be read for satisfying a query, vacuum and recovering disk space\n> (theoretically) improves query performance. Not 100% sure how to add it in context\n> of these first two paragraphs.\n>\n> 4) Caution: \"It may be a good idea to add monitoring to alert you about this.\"\n> I hate to be pedantic about it, but I think we should spell out\n> \"this\". Do we have\n> a pointer in documentation to what kinds of things to monitor for? Am I monitoring\n> long-running transactions or some metric that shows me that VACUUM is being\n> \"held back\"? I know what you mean, but it's not clear to me how to do the right\n> thing in my environment here.\n>\n> 5) The plethora of tips/notes/warnings.\n> As you and others have mentioned, as presented these really have no context\n> for me. Individually they are good/helpful information, but it's\n> really hard to make\n> a connection to what I should \"do\" about it.\n>\n> It seems to me that this would be a good place to put a subsection which is\n> something like, \"A note about reclaiming disk space\" or something. In my\n> experience, most people hear about and end up using VACUUM FULL because\n> things got out of control and they want to get into a better spot (I have been\n> in that boat). 
I think with a small section that says, in essence,\n> \"hey, now that\n> you understand why/how vacuum reclaims disk resources normally, if you're\n> in a position where things aren't in a good state, this is what you need to know\n> if you want to reclaim space from a really inefficient table\"\n>\n> For me, at least, I think it would be easier to read/grok what you're sharing in\n> these callouts.\n>\n> 6) One last missing piece that very well might be in another page not referenced\n> (I obviously need to get the PG16 docs pulled and built locally so\n> that I can have\n> better overall reference. My apologies).\n>\n> In my experience, one of the biggest issues with the thresholds and recovering\n> space is the idea of tuning individual tables, not just the entire\n> database. 5/10/20%\n> might be fine for most tables, but it's usually the really active ones\n> that need the\n> tuning, specifically lowering the thresholds. That doesn't come across to me in\n> this section at all. Again, maybe I've missed something on another page and\n> it's all good, but it felt worth calling out.\n>\n> Plus, it may provide an opportunity to bring in the threshold formulas again if\n> they aren't referenced elsewhere (although they probably are).\n>\n> Hope that makes sense.\n>\n> ** Section 2.5.2: Freezing to manage... **\n> As stated above, the effort here overall is great IMO. I like the flow\n> and reduction\n> in alarmist tone for things like wraparound, etc. I understand more\n> about freezing,\n> aggressive and otherwise, than I did before.\n>\n> 7) That said, totally speaking as a non-contributor, this section is\n> obviously very long\n> for good reason. But, by the time I've gotten down to 25.2.2.3, my\n> brain is a bit\n> bewildered on where we've gotten to. 
That's more a comment on my capability\n> to process it all, but I wonder if a slightly more explicit intro\n> could help set the\n> stage at least.\n>\n> \"One side-effect of vacuum and transaction ID management at the row level is\n> that PostgreSQL would normally need to inspect each row for every query to\n> ensure it is visible to each requesting transaction. In order to\n> reduce the need to\n> read and inspect excessive amounts of data at query time or when normal vacuum\n> maintenance kicks in, VACUUM has a second job called freezing, which\n> accomplishes three goals: (attempting to tie in the three sections)\n> * speeding up queries and vacuum operations by...\n> * advancing the transaction ID space on generally static tables...\n> * ensure there are always free transaction IDs available for normal\n> operation...\n> \"\n>\n> Maybe totally worthless and too much, but something like that might set a reader\n> up for just a bit more context. Then you could take most of what comes before\n> \"2.5.2.2.1 Aggressive Vacuum\" as a subsection (would require a renumber below)\n> with something like \"2.5.2.2.1 Normal Freezing Activity\"\n>\n> 8) Note \"In PostgreSQL versions before 16...\"\n> Showing my naivety, somehow this isn't connecting with me totally. If\n> it's important\n> to call out, then maybe we need a connecting sentence. Based on the content\n> above, I think you're pointing to \"It's also why VACUUM will freeze all eligible\n> tuples from a heap page once the decision to freeze at least one tuple\n> is taken:\"\n> If that's it, it's just not clear to me what's totally changed. Sorry,\n> more learning. 
:-)\n>\n> ---\n> Hope something in there is helpful.\n>\n> Ryan Booz\n>\n> >\n> > --\n> > Peter Geoghegan\n\n\n", "msg_date": "Fri, 12 May 2023 13:37:34 -0400", "msg_from": "Ryan Booz <ryan@softwareandbooz.com>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Fri, May 12, 2023 at 10:36 AM Ryan Booz <ryan@softwareandbooz.com> wrote:\n> Just to say on the outset, as has been said earlier in the tread by others,\n> that this is herculean work. Thank you for putting the effort you have thus far.\n\nThanks!\n\n> > It would be nice if it was possible to add an animation/diagram a\n> > little like this one: https://tuple-freezing-demo.angusd.com (this is\n> > how I tend to think about the \"transaction ID space\".)\n>\n> Indeed. With volunteer docs, illustrations/diagrams are hard for sure. But,\n> this or something akin to the \"clock\" image I've seen elsewhere when\n> describing the transaction ID space would probably be helpful if it were ever\n> possible. In fact, there's just a lot about the MVCC stuff in general that\n> would benefit from diagrams. But alas, I guess that's why we have some\n> good go-to community talks/slide decks. :-)\n\nA picture is worth a thousand words. This particular image may be\nworth even more, though.\n\nIt happens to be *exactly* what I'd have done if I was tasked with\ncoming up with an animation that conveys the central ideas. Obviously\nI brought this image up because I think that it would be great if we\ncould find a way to do something like that directly (not impossible,\nthere are a few images already). However, there is a less obvious\nreason why I brought it to your attention: it's a very intuitive way\nof understanding what I actually intend to convey through words -- at\nleast as far as talk about the cluster-wide XID space is concerned. 
It\nmight better equip you to review the patch series.\n\nSure, the animation will make the general idea clearer to just about\nanybody -- that's a big part of what I like about it. But it also\ncaptures the nuance that might matter to experts (e.g., the oldest XID\nmoves forward in jerky discrete jumps, while the next/unallocated XID\nmoves forward in a smooth, continuous fashion). So it works on\nmultiple levels, for multiple audiences/experience levels, without any\nconflicts -- which is no small thing.\n\nDo my words make you think of something a little like the animation?\nIf so, good.\n\n> Thanks again for doing this. Really helpful for doc newbies like me that\n> want to help but are still working through the process. Really helpful\n> and appreciated.\n\nI think that this is the kind of thing that particularly benefits from\ndiversity in perspectives.\n\n> Agree. This flows fairly well and helps the user understand that each\n> \"next step\"\n> in the vacuum/freezing process has a distinct job based on previous work.\n\nI'm trying to make it possible to read in short bursts, and to skim.\nThe easiest wins in this area will come from simply having more\nindividual sections/headings, and a more consistent structure. The\nreally difficult part is coming up with prose that can sort of work\nfor all audiences at the same time -- without alienating anybody.\n\nHere is an example of what I mean:\n\nThe general idea of freezing can reasonably be summarized as \"a\nprocess that VACUUM uses to make pages self-contained (no need to do\npg_xact lookups anymore), that also has a role in avoiding transaction\nID exhaustion\". That is a totally reasonable beginner-level (well,\nrelative-beginner-level) understanding of freezing. It *isn't* dumbed\ndown. You, as a beginner, have a truly useful take-away. At the same\ntime, you have avoided learning anything that you'll need to unlearn\nsome day. 
If I can succeed in doing that, I'll feel a real sense of\naccomplishment.\n\n> > * Much improved \"Truncating Transaction Status Information\" subsection.\n> >\n> > My explanation of the ways in which autovacuum_freeze_max_age can\n> > affect the storage overhead of commit/abort status in pg_xact is much\n> > clearer than it was in v3 -- pg_xact truncation is now treated as\n> > something loosely related to the global config of anti-wraparound\n> > autovacuum, which makes most sense.\n> >\n> This one isn't totally sinking in with me yet. Need another read.\n\n\"Truncating Transaction Status Information\" is explicitly supposed to\nmatter much less than the rest of the stuff on freezing. The main\nbenefit that the DBA can expect from understanding this content is how\nto save a few GB of disk space for pg_xact, which isn't particularly\nlikely to be possible, and is very very unlikely to be of any real\nconsequence, compared to everything else. If you were reading the\nrevised \"Routine Vacuuming\" as the average DBA, what you'd probably\nhave ended up doing is just not reading this part at all. And that\nwould probably be the ideal outcome. It's roughly the opposite of what\nyou'll get right now, by the way (bizarrely, the current docs place a\ngreat deal of emphasis on this).\n\n(Of course I welcome your feedback here too. Just giving you the context.)\n\n> > It took a great deal of effort to find a structure that covered\n> > everything, and that highlighted all of the important relationships\n> > without going too far, while at the same time not being a huge mess.\n> > That's what I feel I've arrived at with v4.\n>\n> In most respects I agree with the overall flow of changes w.r.t the current doc.\n> Focusing on all of this as something that should normally just be happening\n> as part of autovacuum is helpful. Working through it as an order of operations\n> (and I'm just assuming this is the general order) feels like it ties\n> things together\n> a lot more. 
I honestly come away from this document with more of a \"I understand\n> the process\" feel than I did previously.\n\nThat's great news. It might be helpful to give you more context about\nthe particular approach I've taken here, and how it falls short of\nwhat I'd ideally like to do, in my own mind.\n\nThere are some rather strange things that happen to be true about\nVACUUM and freezing today, that definitely influenced the way I\nstructured the docs. I can imagine an improved version of VACUUM that\nis not so different to the real VACUUM that we have today (one that\nstill has freezing as we know it), that still has a much simpler UI --\nsome policy-based process for deciding which pages to freeze that was\nmuch smarter than a simple trigger. If we were living in a world where\nVACUUM actually worked like that, then I'd have been able to come up\nwith a structure that is a lot closer to what you might have been\nhoping for from this patch series. At the very least, I'd have been\nable to add some \"TL;DR\" text at the start of each section, that just\ngave the main practical takeaway.\n\nTake vacuum_freeze_min_age. It's a *really* bad design, even on its\nown terms, even if we assume that nothing can change about how\nfreezing works. Yet it's probably still the most important\nfreezing-related GUC, even in Postgres 16. History matters here. The\nGUC was invented in a world before the visibility map existed. When\nthe visibility map was invented, aggressive VACUUM was also invented\n(before then the name for what we now call \"aggressive VACUUM\" was\nactually just \"VACUUM\"). This development utterly changed the way that\nvacuum_freeze_min_age actually works, but we still talk about it as if\nits idea of \"age\" can be considered in isolation, as a universal\ncatch-all that can be tuned iteratively. The reality is that it is\ninterpreted in a way that is *hopelessly* tied to other things.\n\nThis isn't a minor point. 
There are really bizarre implications, with\nreal practical consequences. For example, suppose you want to make\nautovacuums run more often against a simple append-only table -- so\nyou lower autovacuum_vacuum_insert_scale_factor with that in mind.\nIt's entirely possible that you'll now do *less* useful work, even\nthough you specifically set out to vacuum more aggressively! This is\ndue to the way the GUCs interact with each other, of course: the more\noften VACUUM runs, the less likely it is that it'll find XIDs before\nvacuum_freeze_min_age to trigger freezing during any individual VACUUM\noperation, the less useful work you'll do (you'll just accumulate\nunfrozen all-visible pages until you finally have an aggressive\nVACUUM).\n\nThis is exactly as illogical as it sounds. Postgres 16 will be the\nfirst version that even shows instrumentation around freezing at all\nin the log reports from autovacuum. This will be a real eye-opener, I\nsuspect -- I predict that people will be surprised at how freezing\nworks with their workload, when they finally have the opportunity to\nsee it for themselves.\n\n> Owing to the lingering belief that many users have whereby hosting providers\n> have magically enabled Postgres to do all of this for you, there is\n> still a need to\n> actively deal with these thresholds based on load. That is, as far as\n> I understand,\n> Postgres doesn't automatically adjust based on load. Someone/thing\n> still has to modify\n> the thresholds as load and data size changes.\n\nWell, vacuum_freeze_min_age (anything based on XID age) runs into the\nfollowing problem: what is the relationship between XID age, and\nfreezing work? This is a question whose answer is much too\ncomplicated, suggesting that it's just the wrong question. There is\nabsolutely no reason to expect a linear relationship (or anything like\nit) between XIDs consumed and WAL required to freeze rows from those\nXIDs. 
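Coming back to the append-only example for a moment: the anomaly is easy to see even in a toy model (made-up numbers only -- the real behavior also depends on the visibility map, aggressive VACUUMs, and plenty else):

```python
# Toy model only (not PostgreSQL code, hypothetical numbers): suppose
# tuples arrive at a steady rate of one per XID, and each VACUUM
# freezes only the tuples that have already attained
# vacuum_freeze_min_age by the time it runs. Pages it skips become
# unfrozen all-visible pages that only a later aggressive VACUUM will
# revisit.
def tuples_frozen_per_vacuum(xids_between_vacuums, freeze_min_age):
    return max(0, xids_between_vacuums - freeze_min_age)

# Infrequent VACUUMs each freeze plenty:
tuples_frozen_per_vacuum(100_000_000, 50_000_000)  # 50000000
# Vacuum 50x more often and each VACUUM freezes nothing at all:
tuples_frozen_per_vacuum(2_000_000, 50_000_000)    # 0
```

Made-up numbers, obviously, but it shows how "vacuum more often" can translate into "freeze less per VACUUM".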
It's a totally chaotic thing.\n\nThe reason for this is: of course it is, why wouldn't it be? On Monday\nyou'll do a bulk load, and 1 XID will write 1TB to one table. On\nTuesday, there might be only one row per XID consumed, with millions\nand millions of rows inserted. This is 100% common sense, and yet is\nkinda at odds with the whole idea of basing the decision to freeze on\nage (as if vacuum_freeze_min_age didn't have enough problems\nalready!).\n\nFor now, I think our best bet is to signal the importance of avoiding\ndisaster to intermediate users, and signal the importance of iterative\ntuning to advanced users.\n\n> If the \"workload requirements\" is pointing towards aggressive\n> freezing/wraparound\n> tasks that happen regardless of thresholds, then for me at least that\n> isn't clear\n> in that sentence and it feels like there's an implication that\n> Postgres/autovacuum\n> is going to magically adjust overall vacuum work based on database workload.\n\nThat's a good point.\n\n> 2) \"The intended audience is database administrators that wish to\n> perform more advanced\n> autovacuum tuning, with any of the following goals in mind:\"\n>\n> I love calling out the audience in some way. That's really helpful, as are the\n> stated goals in the bullet list. However, as someone feeling pretty novice\n> after reading all of this, I can't honestly connect how the content on this page\n> helps me to more advanced tuning.\n\nYou're right to point that out; the actual content here was written\nhalf-heartedly, in part because it depends on the dead-tuple-space\npatch, which is not my focus at all right now.\n\nHere is what I'd like the message to be, roughly:\n\n1. This isn't something that you read once. You read it in small\nbites. 
You come back to it from time to time (or you will if you need\nto).\n\nAt one point Samay said: \"I'll give my perspective as someone who has\nnot read the vacuum code but have learnt most of what I know about\nautovacuum / vacuuming by reading the \"Routine Vacuuming\" page 10s of\ntimes\". I fully expect that a minority of users will want to do the\nsame with these revised docs. The content is very much not supposed to\nbe read through in one sitting (not if you expect to get any value out\nof it). It is very calorie dense, and I don't think that that's really\na problem to be solved.\n\nYou have a much better chance of getting value out of it if you as a\nuser refer back to it as problems emerge. Some things may only click\nafter the second or third read, based on the experience of trying to\nput something else into action in production.\n\n2. If you don't like that it's calorie dense, then that's probably\nokay -- just don't read past the parts that seem useful.\n\n3. There are one or two exceptions (e.g., the \"Tip\" about freezing for\nappend-only tables), but overall there isn't going to be a simple\nformula to follow -- the closest thing might be \"don't bother doing\nanything until it proves necessary\".\n\nThis is because too much depends on individual workload requirements.\nIt is also partly due to it just being really hard to tune things like\nvacuum_freeze_min_age very well right now.\n\n4. It's an applied process. The emphasis should be on solving\npractical, observed problems that are directly observed -- this isn't\na cookbook (though there are a couple of straightforward recipes,\ncovering one or two specific things).\n\n> ** Section 2.5.1 - Recovering Disk Space **\n\nIt should be noted that what I've done in this area is quite\nincomplete. I have only really done structural things here, and some\nof these may not be much good.\n\n> It feels like one connection you could make to the bullet list above\n> is in this area\n> and not mentioned. 
By freeing up space and reducing the number of pages that\n> need to be read for satisfying a query, vacuum and recovering disk space\n> (theoretically) improves query performance. Not 100% how to add it in context\n> of these first two paragraphs.\n\nIt's hard, because it's not so much that vacuuming improves query\nperformance. It's more like *not* vacuuming hurts it. The exact point\nthat it starts to hurt is rather hard to predict -- and there might be\nlittle point in trying to predict it with precision.\n\nI tend to think that I'd probably be better off saying nothing about\nquery response times. Or saying something negative (what to definitely\navoid), not something positive (what to do) -- I would expect it to\ngeneralize a lot better that way.\n\n> 4) Caution: \"It may be a good idea to add monitoring to alert you about this.\"\n> I hate to be pedantic about it, but I think we should spell out\n> \"this\". Do we have\n> a pointer in documentation to what kinds of things to monitor for? Am monitoring\n> long-running transactions or some metric that shows me that VACUUM is being\n> \"held back\"? I know what you mean, but it's not clear to me how to do the right\n> thing in my environment here.\n\nWill do.\n\n> 5) The plethora of tips/notes/warnings.\n> As you and others have mentioned, as presented these really have no context\n> for me. Individually they are good/helpful information, but it's\n> really hard to make\n> a connection to what I should \"do\" about it.\n\nYeah, I call that out in the relevant commit message of the patch as\nbad, as temporary.\n\n> It seems to me that this would be a good place to put a subsection which is\n> something like, \"A note about reclaiming disk space\" or something. In my\n> experience, most people hear about and end up using VACUUM FULL because\n> things got out of control and they want to get into a better spot (I have been\n> in that boat). 
I think with a small section that says, in essence,\n> \"hey, now that\n> you understand why/how vacuum reclaims disk resources normally, if you're\n> in a position where things aren't in a good state, this is what you need to know\n> if you want to reclaim space from a really inefficient table\"\n>\n> For me, at least, I think it would be easier to read/grok what you're sharing in\n> these callouts.\n\nThat's the kind of thing that I had planned on with VACUUM FULL,\nactually. You know, once I'm done with freezing. There is passing\nmention of this in the relevant commit message.\n\n> In my experience, one of the biggest issues with the thresholds and recovering\n> space is the idea of tuning individual tables, not just the entire\n> database. 5/10/20%\n> might be fine for most tables, but it's usually the really active ones\n> that need the\n> tuning, specifically lowering the thresholds. That doesn't come across to me in\n> this section at all. Again, maybe I've missed something on another page and\n> it's all good, but it felt worth calling out.\n\nI think that that's true. The rules are kind of different for larger tables.\n\n> read and inspect excessive amounts of data at query time or when normal vacuum\n> maintenance kicks in, VACUUM has a second job called freezing, which\n> accomplishes three goals: (attempting to tie in the three sections)\n> * speeding up queries and vacuum operations by...\n> * advancing the transaction ID space on generally static tables...\n> * ensure there are always free transaction IDs available for normal\n> operation...\n> \"\n>\n> Maybe totally worthless and too much, but something like that might set a reader\n> up for just a bit more context. Then you could take most of what comes before\n> \"2.5.2.2.1 Aggressive Vacuum\" as a subsection (would require a renumber below)\n> with something like \"2.5.2.2.1 Normal Freezing Activity\"\n\nI think I know what you mean. 
What I've tried to do here is start with\nfreezing, and describe it as something that has immediate benefits,\nthat can be understood as useful, independently of its role in\nadvancing relfrozenxid later on. So now you wonder: what specific\nbenefits do I get?\n\nIt's hard to be too concrete about those benefits, because you have\nthings like hint bits. I could say something like that, but I think\nI'd have to hedge too much, because you also have hint bits, that help\nquery response times in roughly the same way (albeit less reliably,\nalbeit without being set on physical replication standbys when they're\nset on the primary).\n\n> 8) Note \"In PostgreSQL versions before 16...\"\n> Showing my naivety, somehow this isn't connecting with me totally. If\n> it's important\n> to call out, then maybe we need a connecting sentence. Based on the content\n> above, I think you're pointing to \"It's also why VACUUM will freeze all eligible\n> tuples from a heap page once the decision to freeze at least one tuple\n> is taken:\"\n> If that's it, it's just not clear to me what's totally changed. Sorry,\n> more learning. :-)\n\nIn Postgres 15, vacuum_freeze_min_age was applied in a way that only\nfroze whatever XIDs could be frozen from the page -- so if you had\nhalf the tuples that were older, and half that were younger, you'd\nonly freeze the older half. Even when it might have cost you\npractically nothing to freeze them all in one go. Now, as the text\nyou've quoted points out, vacuum_freeze_min_age triggers freezing at\nthe level of whole pages, including for new XIDs (though only if\nthey're eligible to be frozen, meaning that everybody agrees that\nthey're all visible now). 
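To make the page-level rule concrete, here's a tiny sketch (toy code, not the real implementation -- it ignores MultiXactIds, the circular XID arithmetic, and the details of all-visible eligibility):

```python
# Page-level freezing, roughly as described above: once any tuple on an
# eligible page crosses the vacuum_freeze_min_age cutoff, VACUUM
# freezes every eligible tuple on that page, not just the old ones.
def xids_to_freeze(page_xids, next_xid, freeze_min_age):
    cutoff = next_xid - freeze_min_age
    if any(xid < cutoff for xid in page_xids):
        return list(page_xids)  # freeze the whole page in one go
    return []                   # leave the page entirely unfrozen for now

# One sufficiently old tuple drags the newer ones along with it:
xids_to_freeze([100, 900, 950], next_xid=1_000, freeze_min_age=500)  # all three
xids_to_freeze([900, 950], next_xid=1_000, freeze_min_age=500)       # none
```

The hypothetical xids_to_freeze() helper is purely illustrative; the real logic is spread across vacuumlazy.c and the heapam freezing code.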
So vacuum_freeze_min_age picks pages to\nfreeze, not individual tuples to freeze (this optimization is so\nobvious that it's a little surprising that it took as long as it did\nto get in).\n\nPage-level freezing justifies the following statement from the patch,\nfor example:\n\n\"It doesn't matter if it was vacuum_freeze_table_age or\nvacuum_multixact_freeze_table_age that made VACUUM use its aggressive\nstrategy. Every aggressive VACUUM will advance relfrozenxid and\nrelminmxid by applying the same generic policy that controls which\npages are frozen.\"\n\nNow, since freezing works at the level of physical heap pages in 16,\nthe thing that triggers aggressive VACUUM matters less (just as the\nthing that triggers freezing of individual pages matters much less --\nfreezing is freezing). There is minimal risk of freezing the same page\n3 times during each of 3 different aggressive VACUUMs. To a much\ngreater extent, 3 aggressive VACUUMs isn't that different to only 1\naggressive VACUUM for those pages that were already \"settled\" from the\nstart. As a result, the addition of page-level freezing made\nvacuum_freeze_min_age somewhat less bad -- in 16, its behavior was a\nlittle less dependent on the phase of the moon (especially during\naggressive VACUUMs).\n\nI really value stuff like that -- cases where you as a user can think\nof something as independent to some other thing that you also need to\ntune. There needs to be a lot more such improvements, but at least we\nhave this one now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 12 May 2023 19:40:26 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Thu, 11 May 2023 at 16:41, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> > I say \"exhaustion\" or \"overload\" are meaningless because their meaning\n> > is entirely dependent on context. 
It's not like memory exhaustion or\n> > i/o overload where it's a finite resource and it's just the sheer\n> > amount in use that matters.\n>\n> But transaction IDs are a finite resource, in the sense that you can\n> never have more than about 2.1 billion distinct unfrozen XIDs at any\n> one time. \"Transaction ID exhaustion\" is therefore a lot more\n> descriptive of the underlying problem. It's a lot better than\n> wraparound, which, as I've said, is inaccurate in two major ways:\n\nI realize it's literally true that xids are a finite resource. But\nthat's not what people think of when you talk about exhausting a\nfinite resource.\n\nIt's not like there are 2 billion XIDs in a big pool being used and\nreturned and as long as you don't use too many XIDs leaving the pool\nempty you're ok. When people talk about resource exhaustion they\nimagine that they just need a faster machine or some other way of just\nputting more XIDs in the pool so they can keep using them at a faster\nrate.\n\nI really don't think focusing on changing one term of art for another,\nneither of which is at all meaningful without an extensive technical\nexplanation, helps anyone. All it does is hide all the existing\nexplanations that are all over the internet from them.\n\n> 1. Most cases involving xidStopLimit (or even single-user mode data\n> corruption) won't involve any kind of physical integer wraparound.\n\nFwiw I've never actually bumped into anyone talking about integer\noverflow (which isn't usually called \"wraparound\" anyways). And in any\ncase it's not a terrible misconception, it at least gives users a\nreasonable model for how XID space consumption works. 
In fact it's not\neven entirely wrong -- it's not the XID itself that's overflowing but\nthe difference between the XID and the frozen XID.\n\n-- \ngreg\n\n\n", "msg_date": "Sat, 13 May 2023 22:46:29 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Sat, May 13, 2023 at 7:47 PM Greg Stark <stark@mit.edu> wrote:\n> It's not like there are 2 billion XIDs in a big pool being used and\n> returned and as long as you don't use too many XIDs leaving the pool\n> empty you're ok. When people talk about resource exhaustion they\n> imagine that they just need a faster machine or some other way of just\n> putting more XIDs in the pool so they can keep using them at a faster\n> rate.\n>\n> I really think focusing on changing one term of art for another,\n> neither of which is at all meaningful without an extensive technical\n> explanation helps anyone. All it does is hide all the existing\n> explanations that are all over the internet from them.\n\nHave you read the documentation in question recently? The first two\nparagraphs, in particular:\n\nhttps://www.postgresql.org/docs/devel/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND\n\nAs I keep pointing out, we literally introduce the whole topic of\nfreezing/wraparound by telling users that VACUUM needs to avoid\nwraparound to stop your database from becoming corrupt. Which is when\n\"the past becomes the future\", or in simple terms data corruption\n(without any qualification about single user mode being required to\ncorrupt the DB). Users think that that's what \"wraparound\" means\nbecause we've taught them to think that. We've already been giving\nusers \"an extensive technical explanation\" for many years -- one that\nhappens to be both harmful and factually incorrect.\n\nI agree that users basically don't care about unsigned vs signed vs\nwhatever. 
But there is a sense that that matters, because the docs\nhave made it matter. That's my starting point. That's the damage that\nI'm trying to undo.\n\n> > 1. Most cases involving xidStopLimit (or even single-user mode data\n> > corruption) won't involve any kind of physical integer wraparound.\n>\n> Fwiw I've never actually bumped into anyone talking about integer\n> overflow (which isn't usually called \"wraparound\" anyways). And in any\n> case it's not a terrible misconception, it at least gives users a\n> reasonable model for how XID space consumption works. In fact it's not\n> even entirely wrong -- it's not the XID itself that's overflowing but\n> the difference between the XID and the frozen XID.\n\nEven your \"not entirely wrong\" version is entirely wrong. What you\ndescribe literally cannot happen (outside of single user mode),\nbecause xidStopLimit stops it from happening. To me your argument\nseems similar to arguing that it's okay to call chemotherapy \"cancer\"\non the grounds that \"cancer\" refers to something that you really ought\nto avoid in the first place in any case, which makes the whole\ndistinction irrelevant to non-oncologists.\n\nThat said, I concede that the term wraparound is too established to\njust get rid of now. The only viable way forward now may be to\nencourage users to think about it in the way that you suppose they\nmust already think about it. That is, to prominently point out that\n\"wraparound\" actually refers to a protective mode of operation where\nXID allocations are temporarily disallowed. And that it has pretty much\nnothing to do with \"wraparound\" of the kind that the user may be\nfamiliar with from other contexts. 
Including (and especially) all\nearlier versions of the Postgres docs.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 14 May 2023 13:59:46 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" }, { "msg_contents": "On Sun, May 14, 2023 at 1:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Have you read the documentation in question recently? The first two\n> paragraphs, in particular:\n>\n> https://www.postgresql.org/docs/devel/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND\n>\n> As I keep pointing out, we literally introduce the whole topic of\n> freezing/wraparound by telling users that VACUUM needs to avoid\n> wraparound to stop your database from becoming corrupt. Which is when\n> \"the past becomes the future\"\n\nI went through the history of maintenance.sgml. \"Routine Vacuuming\"\ndates back to 2001. Sure enough, our current \"25.1.5. Preventing\nTransaction ID Wraparound Failures\" introductory paragraphs (the ones\nthat I find so misleading and alarmist) appear in the original\nversion, too. But in 2001, they weren't alarmist -- they were\nproportionate to the risk that existed at the time. This becomes\ntotally clear once you see the original. In particular, once you see\nthe current introductory paragraphs next to another paragraph in the\noriginal 2001 version:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=doc/src/sgml/maintenance.sgml;h=7629c5bd541e25b88241e030ca86a243e964e4b7;hb=c5b7f67fcc8c4a01c82660eb0996a3c697fac283#l245\n\nThe later paragraph follows up by saying: \"In practice this [somewhat\nregular vacuuming] isn't an onerous requirement, but since the\nconsequences of failing to meet it can be ___complete data loss___\n(not just wasted disk space or slow performance), some special\nprovisions...\". This means that my particular interpretation of the\n25.1.5. 
introductory paragraphs is absolutely consistent with the\noriginal intent from the time they were written. I'm now more\nconfident than ever that all of the stuff about \"catastrophic data\nloss\" should have been removed in 2005 or 2006 at the latest. It\n*almost* was removed around that time, but for whatever reason it\nwasn't removed in full. And for whatever reason it didn't quite\nregister with anybody in a position to do much about it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 14 May 2023 20:10:56 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Overhauling \"Routine Vacuuming\" docs, particularly its handling\n of freezing" } ]
[ { "msg_contents": "Hello.\n\nWhile working on a patch, I noticed that a recent commit (d4e71df6d75)\nadded an apparently unnecessary inclusion of guc.h in smgr.h.\n\nThe only change made by the commit to the file is the added #include\ndirective, which doesn't seem to be functioning, and the build\nactually succeeds without it. Moreover, it brings in some\nserver-related stuff when I include smgr.h in storage_xlog.h, causing\ncompilation issues for pg_rewind.\n\nShould we remove it? Please find the attached patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 25 Apr 2023 11:57:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "seemingly useless #include recently added" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> While working on a patch, I noticed that a recent commit (d4e71df6d75)\n> added an apparently unnecessary inclusion of guc.h in smgr.h.\n\nYes, that seems quite awful, and I also wonder why it changed fd.h.\nAdding #include's to header files is generally not the first choice.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Apr 2023 23:12:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: seemingly useless #include recently added" }, { "msg_contents": "On Tue, Apr 25, 2023 at 3:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > While working on a patch, I noticed that a recent commit (d4e71df6d75)\n> > added an apparently unnecessary inclusion of guc.h in smgr.h.\n>\n> Yes, that seems quite awful, and I also wonder why it changed fd.h.\n> Adding #include's to header files is generally not the first choice.\n\nAgreed for smgr.h. Will push when I'm back at a real computer soon,\nor +1 from me if someone else wants to. 
It must have been left over\nfrom an earlier version that had a different arrangement with multiple\nGUCs in different places and might have needed GUC-related types to\ndeclare the check functions or something like that; sorry. As for\nfd.h, the reason it now includes <fcntl.h> is that fd.h tests whether\nO_DIRECT is defined, so in fact that was an omission from 2dbe8905\nwhich moved the #if defined(O_DIRECT) stuff from xlogdefs.h to fd.h\nbut failed to move the #include with it; I will check if something\nneeds to be back-patched there.\n\n\n", "msg_date": "Tue, 25 Apr 2023 15:41:06 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: seemingly useless #include recently added" } ]
[ { "msg_contents": "Hi,\n\nCurrently pg_archivecleanup doesn't remove backup history files even \nwhen they're older than oldestkeptwalfile.\n\nOf course the size of backup history files is smaller than WAL files \nand they wouldn't consume much disk space, but a lot of backup history \nfiles (e.g. daily backup for years) whose backup data have been already \nremoved are unnecessary and I would appreciate it if pg_archivecleanup has \nan option to remove them.\n\nAttached is a PoC patch, which adds a new option -b to remove files \nincluding backup history files older than oldestkeptwalfile.\n\n $ ls archivedir\n 000000010000000000000001 000000010000000000000003 \n000000010000000000000006\n 000000010000000000000008\n 000000010000000000000002 000000010000000000000004 \n000000010000000000000007\n 000000010000000000000009\n 000000010000000000000002.00000028.backup 000000010000000000000005\n 000000010000000000000007.00000028.backup \n00000001000000000000000A.partial\n\n $ pg_archivecleanup -b archivedir 000000010000000000000009\n\n $ ls archivedir\n 000000010000000000000009 00000001000000000000000A.partial\n\nAny thoughts?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Tue, 25 Apr 2023 16:38:16 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "At Tue, 25 Apr 2023 16:38:16 +0900, torikoshia <torikoshia@oss.nttdata.com> wrote in \n> Hi,\n> \n> Currently pg_archivecleanup doesn't remove backup history files even\n> when they're older than oldestkeptwalfile.\n> \n> Of course the size of backup history files is smaller than WAL files\n> and they wouldn't consume much disk space, but a lot of backup history\n> files (e.g. 
daily backup for years) whose backup data have been\n> already removed are unnecessary and I would appreciate it if\n> pg_archivecleanup has an option to remove them.\n> \n> Attached is a PoC patch, which adds a new option -b to remove files\n> including backup history files older than oldestkeptwalfile.\n> \n> $ ls archivedir\n> 000000010000000000000001 000000010000000000000003\n> 000000010000000000000006\n> 000000010000000000000008\n> 000000010000000000000002 000000010000000000000004\n> 000000010000000000000007\n> 000000010000000000000009\n> 000000010000000000000002.00000028.backup 000000010000000000000005\n> 000000010000000000000007.00000028.backup\n> 00000001000000000000000A.partial\n> \n> $ pg_archivecleanup -b archivedir 000000010000000000000009\n> \n> $ ls archivedir\n> 000000010000000000000009 00000001000000000000000A.partial\n> \n> Any thoughts?\n\nI thought that we had decided not to do that, but I couldn't find any\ndiscussion about it in the ML archive. Anyway, I think it is great\nthat we have that option.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 25 Apr 2023 17:29:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On Tue, Apr 25, 2023 at 05:29:48PM +0900, Kyotaro Horiguchi wrote:\n> I thought that we had decided not to do that, but I couldn't find any\n> discussion about it in the ML archive. Anyway, I think it is great\n> that we have that option.\n\nNo objections from here to make that optional. It's been argued a\ncouple of times that leaving the archive history files is good for\ndebugging, but I can also get why one would want to clean up all the\nfiles older than a WAL retention policy. 
Even if these are just a few\nbytes, it can be noisy for the eye to see a large accumulation of\nhistory files mixed with the WAL segments.\n--\nMichael", "msg_date": "Wed, 26 Apr 2023 06:39:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "Horiguchi-san, Michael-san\n\nThanks for your comments and information!\n\nAttached a patch with documentation and regression tests.\n\n\nOn 2023-04-26 06:39, Michael Paquier wrote:\n> On Tue, Apr 25, 2023 at 05:29:48PM +0900, Kyotaro Horiguchi wrote:\n>> I thought that we had decided not to do that, but I couldn't find any\n>> discussion about it in the ML archive. Anyway, I think it is great\n>> that we have that option.\n> \n> No objections from here to make that optional. It's been argued a\n> couple of times that leaving the archive history files is good for\n> debugging, but I can also get why one would want to clean up all the\n> files older than a WAL retention policy. Even if these are just a few\n> bytes, it can be noisy for the eye to see a large accumulation of\n> history files mixed with the WAL segments.\n> --\n> Michael\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Tue, 09 May 2023 22:32:59 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On Tue, May 9, 2023 at 7:03 PM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>\n> Attached a patch with documentation and regression tests.\n\nThanks. I think pg_archivecleanup cleaning up history files makes it a\ncomplete feature as there's no need to write custom code/scripts over\nand above what pg_archivecleanup provides. 
It will help those who are\nusing pg_archivecleanup for cleaning up older WAL files, say from\ntheir archive location.\n\nJust curious to know the driving point behind this proposal - is\npg_archivecleanup deployed in production that was unable to clean up\nthe history files and there were many such history files left? It will\nhelp us know how pg_archivecleanup is being used.\n\nI'm wondering if making -x generic with '-x .backup' is simpler\nthan adding another option?\n\nComments on the patch:\n1. Why just only the backup history files? Why not remove the timeline\nhistory files too? Is it because there may not be as many tli switches\nhappening as backups?\n2.+sub remove_backuphistoryfile_run_check\n+{\nWhy invent a new function when run_check() can be made generic with\nfew arguments passed? For instance, run_check() can receive\npg_archivecleanup command args, what to use for create_files(), in the\nerror condition if the pg_archivecleanup command args contain 'b',\nthen use a different message \"$test_name: first older WAL file was\ncleaned up\" or \"$test_name: first .backup file was cleaned up\".\nOtherwise, just modify the messages to be:\n\"$test_name: first older file %s was cleaned up\", $files[0]);\n\"$test_name: second older file %s was cleaned up\", $files[1]);\n\"$test_name: restart file %s was not cleaned up\", $files[2]);\n\"$test_name: newer file %s was not cleaned up\", $files[3]);\n\"$test_name: unrelated file %s was not cleaned up\", $files[4]);\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 May 2023 14:22:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On 2023-05-10 17:52, Bharath Rupireddy wrote:\nThanks for your comments!\n\n> Just curious to know the driving point behind 
this proposal - is\n> pg_archivecleanup deployed in production that was unable to clean up\n> the history files and there were many such history files left? It will\n> help us know how pg_archivecleanup is being used.\n\nYes.\n\n> Just curious to know the driving point behind this proposal - is\n> pg_archivecleanup deployed in production that was unable to clean up\n> the history files and there were many such history files left? It will\n> help us know how pg_archivecleanup is being used.\n> \n> I'm wondering if making -x generic with '-x .backup' is simpler\n> than adding another option?\n\nSince according to the current semantics, deleting backup history files \nwith -x demands not '-x .backup' but '-x .007C9330.backup' when the file \nname is 0000000100001234000055CD.007C9330.backup, it needs special \ntreatment for backup history files, right?\n\nI think it would be intuitive and easier to remember than a new option.\n\nI was a little concerned about what to do when deleting both the files \nending in .gz and backup history files.\nIs making it possible to specify both \"-x .backup\" and \"-x .gz\" the way \nto go?\n\nI'm also concerned someone might add \".backup\" to WAL files, but does that \nusually not happen?\n\n> Comments on the patch:\n> 1. Why just only the backup history files? Why not remove the timeline\n> history files too? 
Is it because there may not be as many tli switches\nhappening as backups?\n\nYeah, do you think we should also add logic for '-x .history'?\n\n> 2.+sub remove_backuphistoryfile_run_check\n> +{\n> Why invent a new function when run_check() can be made generic with\n> few arguments passed?\n\nThanks, I'm going to reconsider it.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 12 May 2023 17:53:45 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On Fri, May 12, 2023 at 05:53:45PM +0900, torikoshia wrote:\n> On 2023-05-10 17:52, Bharath Rupireddy wrote:\n> I was a little concerned about what to do when deleting both the files\n> ending in .gz and backup history files.\n> Is making it possible to specify both \"-x .backup\" and \"-x .gz\" the way\n> to go?\n> \n> I'm also concerned someone might add \".backup\" to WAL files, but does that\n> usually not happen?\n\nDepends on the archive command, of course. I have seen code using\nthis suffix for some segment names in the past, FWIW.\n\n>> Comments on the patch:\n>> 1. Why just only the backup history files? Why not remove the timeline\n>> history files too? Is it because there may not be as many tli switches\n>> happening as backups?\n> \n> Yeah, do you think we should also add logic for '-x .history'?\n\nTimeline history files can be critical pieces when it comes to\nassigning a TLI as these could be retrieved by a restore_command\nduring recovery for a TLI jump or just assign a new TLI number at the\nend of recovery, still you could presumably remove the TLI history\nfiles that are older than the TLI the segment defined refers to.\n(pg_archivecleanup has no specific logic to look at the history with\nchild TLIs for example, to keep it simple, and I'd rather keep it this\nway). 
There may be an argument for making that optional, of course,\nbut it does not strike me as really necessary compared to the backup\nhistory files which are just around for debugging purposes.\n\n>> 2.+sub remove_backuphistoryfile_run_check\n>> +{\n>> Why invent a new function when run_check() can be made generic with\n>> few arguments passed?\n> \n> Thanks, I'm going to reconsider it.\n\n+ <para>\n+ Remove files including backup history files, whose suffix is <filename>.backup</filename>.\n+ Note that when <replaceable>oldestkeptwalfile</replaceable> is a backup history file,\n+ specified file is kept and only preceding WAL files and backup history files are removed.\n+ </para>\n\nThis addition to the documentation looks imprecise to me. 
I would imagine\nsomething like:\n /* Check file name */\n if (!IsXLogFileName(walfile) &&\n\t!IsPartialXLogFileName(walfile))\n\tcontinue;\n /* Check cutoff point */\n if (strcmp(walfile + 8, exclusiveCleanupFileName + 8) >= 0)\n continue;\n //rest of the code doing the unlinks.\n--\nMichael", "msg_date": "Mon, 15 May 2023 09:18:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On 2023-05-15 09:18, Michael Paquier wrote:\n> On Fri, May 12, 2023 at 05:53:45PM +0900, torikoshia wrote:\n>> On 2023-05-10 17:52, Bharath Rupireddy wrote:\n>> I was a little concerned about what to do when deleting both the files\n>> ending in .gz and backup history files.\n>> Is making it possible to specify both \"-x .backup\" and \"-x .gz\" the \n>> way to\n>> go?\n>> \n>> I also concerned someone might add \".backup\" to WAL files, but does \n>> that\n>> usually not happen?\n> \n> Depends on the archive command, of course. I have seen code using\n> this suffix for some segment names in the past, FWIW.\n\nThanks for the information.\nI'm going to stop adding special logic for \"-x .backup\" and add a new\noption for removing backup history files.\n\n>>> Comments on the patch:\n>>> 1. Why just only the backup history files? Why not remove the \n>>> timeline\n>>> history files too? 
Is it because there may not be as many tli \n>>> switches\n>>> happening as backups?\n>> \n>> Yeah, do you think we should also add logic for '-x .history'?\n> \n> Timeline history files can be critical pieces when it comes to\n> assigning a TLI as these could be retrieved by a restore_command\n> during recovery for a TLI jump or just assign a new TLI number at the\n> end of recovery, still you could presumably remove the TLI history\n> files that are older than the TLI the segment defined refers too.\n> (pg_archivecleanup has no specific logic to look at the history with\n> child TLIs for example, to keep it simple, and I'd rather keep it this\n> way). There may be an argument for making that optional, of course,\n> but it does not strike me as really necessary compared to the backup\n> history files which are just around for debugging purposes.\n\nAgreed.\n\n>>> 2.+sub remove_backuphistoryfile_run_check\n>>> +{\n>>> Why to invent a new function when run_check() can be made generic \n>>> with\n>>> few arguments passed?\n>> \n>> Thanks, I'm going to reconsider it.\n> \n> + <para>\n> + Remove files including backup history files, whose suffix is\n> <filename>.backup</filename>.\n> + Note that when <replaceable>oldestkeptwalfile</replaceable>\n> is a backup history file,\n> + specified file is kept and only preceding WAL files and\n> backup history files are removed.\n> + </para>\n> \n> This addition to the documentation looks unprecise to me. 
Backup\n> history files have a more complex format than just the .backup\n> suffix, and this is documented in backup.sgml.\n\nI'm going to remove the explanation for the backup history files and\njust add the hyperlink to the original explanation in backup.sgml.\n\n> How about plugging in some long options, and use something more\n> explicit like --clean-backup-history?\n\nAgreed.\n\n> \n> - if ((IsXLogFileName(walfile) || IsPartialXLogFileName(walfile)) &&\n> + if (((IsXLogFileName(walfile) || IsPartialXLogFileName(walfile)) ||\n> + (removeBackupHistoryFile && \n> IsBackupHistoryFileName(walfile))) &&\n> strcmp(walfile + 8, exclusiveCleanupFileName + 8) < 0)\n> \n> Could it be a bit cleaner to split this check in two, saving one level\n> of indentation on the way for its most inner loop? I would imagine\n> something like:\n> /* Check file name */\n> if (!IsXLogFileName(walfile) &&\n> \t!IsPartialXLogFileName(walfile))\n> \tcontinue;\n> /* Check cutoff point */\n> if (strcmp(walfile + 8, exclusiveCleanupFileName + 8) >= 0)\n> continue;\n> //rest of the code doing the unlinks.\n> --\nThanks, that looks better.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 15 May 2023 15:16:46 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On Mon, May 15, 2023 at 03:16:46PM +0900, torikoshia wrote:\n> On 2023-05-15 09:18, Michael Paquier wrote:\n>> How about plugging in some long options, and use something more\n>> explicit like --clean-backup-history?\n> \n> Agreed.\n\nIf you begin to implement that, it seems to me that this should be\nshaped with a first separate patch that refactors the code to use\ngetopt_long(), and a second patch for the proposed feature that builds\non top of it.\n--\nMichael", "msg_date": "Mon, 15 May 2023 15:22:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", 
"msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On 2023-05-15 15:22, Michael Paquier wrote:\n> On Mon, May 15, 2023 at 03:16:46PM +0900, torikoshia wrote:\n>> On 2023-05-15 09:18, Michael Paquier wrote:\n>>> How about plugging in some long options, and use something more\n>>> explicit like --clean-backup-history?\n>> \n>> Agreed.\n> \n> If you begin to implement that, it seems to me that this should be\n> shaped with a first separate patch that refactors the code to use\n> getopt_long(), and a second patch for the proposed feature that builds\n> on top of it.\nThanks for your advice, attached patches.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Mon, 22 May 2023 18:24:49 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On Mon, May 22, 2023 at 06:24:49PM +0900, torikoshia wrote:\n> Thanks for your advice, attached patches.\n\n0001 looks OK, thanks!\n\n+ Remove files including backup history file.\n\nThis could be reworded as \"Remove backup history files.\", I assume.\n\n+ Note that when <replaceable>oldestkeptwalfile</replaceable> is a backup history file,\n+ specified file is kept and only preceding WAL files and backup history files are removed.\n\nThe same thing is described at the bottom of the documentation, so it\ndoes not seem necessary here.\n\n- printf(_(\" -d, --debug generate debug output (verbose mode)\\n\"));\n- printf(_(\" -n, --dry-run dry run, show the names of the files that would be removed\\n\"));\n- printf(_(\" -V, --version output version information, then exit\\n\"));\n- printf(_(\" -x --strip-extension=EXT clean up files if they have this extension\\n\"));\n- printf(_(\" -?, --help show this help, then exit\\n\"));\n+ printf(_(\" -d, --debug generate debug output (verbose mode)\\n\"));\n+ printf(_(\" -n, 
--dry-run dry run, show the names of the files that would be removed\\n\"));\n+ printf(_(\" -b, --clean-backup-history clean up files including backup history files\\n\"));\n+ printf(_(\" -V, --version output version information, then exit\\n\"));\n+ printf(_(\" -x --strip-extension=EXT clean up files if they have this extension\\n\"));\n+ printf(_(\" -?, --help show this help, then exit\\n\"));\n\nPerhaps this indentation had better be adjusted in 0001, no big deal\neither way.\n\n- ok(!-f \"$tempdir/$walfiles[0]\",\n- \"$test_name: first older WAL file was cleaned up\");\n+ if (grep {$_ eq '-x.gz'} @options) {\n+ ok(!-f \"$tempdir/$walfiles[0]\",\n+ \"$test_name: first older WAL file with .gz was cleaned up\");\n+ } else {\n+ ok(-f \"$tempdir/$walfiles[0]\",\n+ \"$test_name: first older WAL file with .gz was not cleaned up\");\n+ }\n[...]\n- ok(-f \"$tempdir/$walfiles[2]\",\n- \"$test_name: restartfile was not cleaned up\");\n+ if (grep {$_ eq '-b'} @options) {\n+ ok(!-f \"$tempdir/$walfiles[2]\",\n+ \"$test_name: Backup history file was cleaned up\");\n+ } else {\n+ ok(-f \"$tempdir/$walfiles[2]\",\n+ \"$test_name: Backup history file was not cleaned up\");\n+ }\n\nThat's over-complicated, caused by the fact that all the tests rely on\nthe same list of WAL files to create, aka @walfiles defined at the\nbeginning of the script. Wouldn't it be cleaner to handle the fake\nfile creations with more than one global list, before each command\nrun? 
The list of files to be created could just be passed down as an\nargument of run_check(), for example, before cleaning up the contents\nfor the next command.\n--\nMichael", "msg_date": "Wed, 24 May 2023 10:26:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On 2023-05-24 10:26, Michael Paquier wrote:\n> On Mon, May 22, 2023 at 06:24:49PM +0900, torikoshia wrote:\n>> Thanks for your advice, attached patches.\n> \n> 0001 looks OK, thanks!\n> \n> + Remove files including backup history file.\n> \n> This could be reworded as \"Remove backup history files.\", I assume.\n> \n> + Note that when <replaceable>oldestkeptwalfile</replaceable>\n> is a backup history file,\n> + specified file is kept and only preceding WAL files and\n> backup history files are removed.\n> \n> The same thing is described at the bottom of the documentation, so it\n> does not seem necessary here.\n> \n> - printf(_(\" -d, --debug generate debug output\n> (verbose mode)\\n\"));\n> - printf(_(\" -n, --dry-run dry run, show the names\n> of the files that would be removed\\n\"));\n> - printf(_(\" -V, --version output version\n> information, then exit\\n\"));\n> - printf(_(\" -x --strip-extension=EXT clean up files if they\n> have this extension\\n\"));\n> - printf(_(\" -?, --help show this help, then \n> exit\\n\"));\n> + printf(_(\" -d, --debug generate debug output\n> (verbose mode)\\n\"));\n> + printf(_(\" -n, --dry-run dry run, show the\n> names of the files that would be removed\\n\"));\n> + printf(_(\" -b, --clean-backup-history clean up files\n> including backup history files\\n\"));\n> + printf(_(\" -V, --version output version\n> information, then exit\\n\"));\n> + printf(_(\" -x --strip-extension=EXT clean up files if they\n> have this extension\\n\"));\n> + printf(_(\" -?, --help show this help, then \n> exit\\n\"));\n> \n> Perhaps this indentation had better 
be adjusted in 0001, no big deal\n> either way.\n> \n> - ok(!-f \"$tempdir/$walfiles[0]\",\n> - \"$test_name: first older WAL file was cleaned up\");\n> + if (grep {$_ eq '-x.gz'} @options) {\n> + ok(!-f \"$tempdir/$walfiles[0]\",\n> + \"$test_name: first older WAL file with .gz was\n> cleaned up\");\n> + } else {\n> + ok(-f \"$tempdir/$walfiles[0]\",\n> + \"$test_name: first older WAL file with .gz was\n> not cleaned up\");\n> + }\n> [...]\n> - ok(-f \"$tempdir/$walfiles[2]\",\n> - \"$test_name: restartfile was not cleaned up\");\n> + if (grep {$_ eq '-b'} @options) {\n> + ok(!-f \"$tempdir/$walfiles[2]\",\n> + \"$test_name: Backup history file was cleaned \n> up\");\n> + } else {\n> + ok(-f \"$tempdir/$walfiles[2]\",\n> + \"$test_name: Backup history file was not \n> cleaned up\");\n> + }\n> \n> That's over-complicated, caused by the fact that all the tests rely on\n> the same list of WAL files to create, aka @walfiles defined at the\n> beginning of the script. Wouldn't it be cleaner to handle the fake\n> file creations with more than one global list, before each command\n> run? The list of files to be created could just be passed down as an\n> argument of run_check(), for example, before cleaning up the contents\n> for the next command.\n> --\n> Michael\n\nThanks for reviewing!\nUpdated patches according to your comment.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Thu, 25 May 2023 23:51:18 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "At Thu, 25 May 2023 23:51:18 +0900, torikoshia <torikoshia@oss.nttdata.com> wrote in \n> Updated patches according to your comment.\n\n+\tprintf(_(\" -x --strip-extension=EXT clean up files if they have this extension\\n\"));\n\nPerhaps a comma is needed after \"-x\". The apparent inconsistency\nbetween the long name and the description is perplexing. 
I think it\nmight be more suitable to reword the description to \"ignore this\nextension while identifying files for cleanup\" or something along\nthose lines..., and then name the option as \"--ignore-extension\".\n\n\nThe patch leaves the following code untouched:\n\n>\t\t\t * Truncation is essentially harmless, because we skip names of\n>\t\t\t * length other than XLOG_FNAME_LEN. (In principle, one could use\n>\t\t\t * a 1000-character additional_ext and get trouble.)\n>\t\t\t */\n>\t\t\tstrlcpy(walfile, xlde->d_name, MAXPGPATH);\n>\t\t\tTrimExtension(walfile, additional_ext);\n\nThe comment is no longer correct about the file name length.\n\n\n\n+\t\t\tif (!IsXLogFileName(walfile) && !IsPartialXLogFileName(walfile) &&\n+\t\t\t\t(!cleanBackupHistory || !IsBackupHistoryFileName(walfile)))\n+\t\t\t\tcontinue;\n\nI'm not certain about the others, but I see this a tad tricky to\nunderstand at first glance. Here's how I would phrase it. (A nearby\ncomment might require a tweak if we go ahead with this change.)\n\n\t\tif (!(IsXLogFileName(walfile) || IsPartialXLogFileName(walfile) ||\n\t\t\t(cleanBackupHistory && IsBackupHistoryFileName(walfile))))\n\nor \n\n\t\tif (!IsXLogFileName(walfile) && !IsPartialXLogFileName(walfile) &&\n\t\t\t!(cleanBackupHistory && IsBackupHistoryFileName(walfile)))\n\n\nBy the way, this patch reduces the nesting level.
If we go that\ndirection, the following structure could be reworked similarly, and I\nbelieve it results in a more standard structure.\n\n\tif ((xldir = opendir(archiveLocation)) != NULL)\n\t{\n ...\n\t}\n\telse\n\t pg_fatal(\"could not open archive location..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 26 May 2023 10:07:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On Fri, May 26, 2023 at 10:07:48AM +0900, Kyotaro Horiguchi wrote:\n> +\t\t\tif (!IsXLogFileName(walfile) && !IsPartialXLogFileName(walfile) &&\n> +\t\t\t\t(!cleanBackupHistory || !IsBackupHistoryFileName(walfile)))\n> +\t\t\t\tcontinue;\n> \n> I'm not certain about the others, but I see this a tad tricky to\n> understand at first glance. Here's how I would phrase it. (A nearby\n> comment might require a tweak if we go ahead with this change.)\n> \n> \t\tif (!(IsXLogFileName(walfile) || IsPartialXLogFileName(walfile) ||\n> \t\t\t(cleanBackupHistory && IsBackupHistoryFileName(walfile))))\n> or \n> \t\tif (!IsXLogFileName(walfile) && !IsPartialXLogFileName(walfile) &&\n> \t\t\t!(cleanBackupHistory && IsBackupHistoryFileName(walfile)))\n\nFWIW, I am OK with what's written in the patch, but it is true that\nyour second suggestion makes things a bit easier to read, perhaps.\n--\nMichael", "msg_date": "Fri, 26 May 2023 14:06:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On Thu, May 25, 2023 at 11:51:18PM +0900, torikoshia wrote:\n> Updated patches according to your comment.\n\n- ok(!-f \"$tempdir/$walfiles[1]\",\n- \"$test_name: second older WAL file was cleaned up\");\n- ok(-f \"$tempdir/$walfiles[2]\",\n+ ok(!-f \"$tempdir/@$walfiles[1]\",\n+ \"$test_name: second older 
WAL/backup history file was cleaned up\");\n+ ok(-f \"$tempdir/@$walfiles[2]\",\n\nThis is still a bit confusing, because as designed the test has a\ndependency on the number of elements present in the list, and the\ndescription of the test may not refer to what's actually used (the\nsecond element of each list is either a WAL segment or a backup\nhistory file). I think that I would just rewrite that so as we have a\nlist of WAL segments in an array with the expected result associated\nto each one of them. Basically, something like that:\nmy @wal_segments = (\n { name => \"SEGMENT1\", present => 0 },\n { name => \"BACKUPFILE1\", present => 1 },\n { name => \"SEGMENT2\", present => 0 });\n\nThen the last part of run_check() would loop through all the elements\nlisted.\n--\nMichael", "msg_date": "Fri, 26 May 2023 14:19:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On 2023-05-26 10:07, Kyotaro Horiguchi wrote:\nThanks for your review!\n\n> +\tprintf(_(\" -x --strip-extension=EXT clean up files if they have\n> this extension\\n\"));\n> \n> Perhaps a comma is needed after \"-x\".\n\nThat's an oversight. Modified it.\n\n> The apparent inconsistency\n> between the long name and the description is perplexing. 
I think it\n> might be more suitable to reword the description to \"ignore this\n> extension while identifying files for cleanup\" or something along\n> those lines..., and then name the option as \"--ignore-extension\".\n\nI used 'strip' since it is used in the manual as below:\n\n| -x extension\n| Provide an extension that will be stripped from all file names before \ndeciding if they should be deleted\n\nI think using the same verb both in the long option name and in the manual \nis natural.\nHow about something like this?\n\n| printf(_(\" -x, --strip-extension=EXT strip this extention before \nidentifying files fo clean up\\n\"));\n\n> The patch leaves the following code untouched:\n> \n>> \t\t\t * Truncation is essentially harmless, because we skip names of\n>> \t\t\t * length other than XLOG_FNAME_LEN. (In principle, one could use\n>> \t\t\t * a 1000-character additional_ext and get trouble.)\n>> \t\t\t */\n>> \t\t\tstrlcpy(walfile, xlde->d_name, MAXPGPATH);\n>> \t\t\tTrimExtension(walfile, additional_ext);\n> \n> The comment is no longer correct about the file name length.\n\nYeah.\nConsidering partial WAL, it would not be correct even before applying \nthe patch.\nI modified the comments as below:\n\n| * Truncation is essentially harmless, because we check the file\n| * format including the length immediately after this.\n\n> +\t\t\tif (!IsXLogFileName(walfile) && !IsPartialXLogFileName(walfile) &&\n> +\t\t\t\t(!cleanBackupHistory || !IsBackupHistoryFileName(walfile)))\n> +\t\t\t\tcontinue;\n> \n> I'm not certain about the others, but I see this a tad tricky to\n> understand at first glance. Here's how I would phrase it.
(A nearby\n> comment might require a tweak if we go ahead with this change.)\n> \n> \t\tif (!(IsXLogFileName(walfile) || IsPartialXLogFileName(walfile) ||\n> \t\t\t(cleanBackupHistory && IsBackupHistoryFileName(walfile))))\n> \n> or\n> \n> \t\tif (!IsXLogFileName(walfile) && !IsPartialXLogFileName(walfile) &&\n> \t\t\t!(cleanBackupHistory && IsBackupHistoryFileName(walfile)))\n\nThanks for the suggestion, I used the second one.\n\n> By the way, this patch reduces the nesting level. If we go that\n> direction, the following structure could be reworked similarly, and I\n> believe it results in a more standard structure.\n> \n> \tif ((xldir = opendir(archiveLocation)) != NULL)\n> \t{\n> ...\n> \t}\n> \telse\n> \t pg_fatal(\"could not open archive location..\n\nAgreed. Attached 0003 patch for this.\n\n\n\nOn 2023-05-26 14:19, Michael Paquier wrote:\n> On Thu, May 25, 2023 at 11:51:18PM +0900, torikoshia wrote:\n>> Updated patches according to your comment.\n> \n> - ok(!-f \"$tempdir/$walfiles[1]\",\n> - \"$test_name: second older WAL file was cleaned up\");\n> - ok(-f \"$tempdir/$walfiles[2]\",\n> + ok(!-f \"$tempdir/@$walfiles[1]\",\n> + \"$test_name: second older WAL/backup history file was cleaned \n> up\");\n> + ok(-f \"$tempdir/@$walfiles[2]\",\n> \n> This is still a bit confusing, because as designed the test has a\n> dependency on the number of elements present in the list, and the\n> description of the test may not refer to what's actually used (the\n> second element of each list is either a WAL segment or a backup\n> history file). I think that I would just rewrite that so as we have a\n> list of WAL segments in an array with the expected result associated\n> to each one of them. 
Basically, something like that:\n> my @wal_segments = (\n> { name => \"SEGMENT1\", present => 0 },\n> { name => \"BACKUPFILE1\", present => 1 },\n> { name => \"SEGMENT2\", present => 0 });\n> \n> Then the last part of run_check() would loop through all the elements\n> listed.\n\nThanks.\nUpdate the patch according to the advice.\nI also changed the parameter of run_check() from specifying extension of \noldestkeptwalfile to oldestkeptwalfile itself.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Wed, 31 May 2023 10:51:19 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "\n\nOn 2023/05/31 10:51, torikoshia wrote:\n> Update the patch according to the advice.\n\nThanks for updating the patches! I have small comments regarding 0002 patch.\n\n+ <para>\n+ Remove backup history files.\n\nIsn't it better to document clearly which backup history files to be removed? For example, \"In addition to removing WAL files, remove backup history files with prefixes logically preceding the oldestkeptwalfile.\".\n\n\n \tprintf(_(\" -n, --dry-run dry run, show the names of the files that would be removed\\n\"));\n+\tprintf(_(\" -b, --clean-backup-history clean up files including backup history files\\n\"));\n\nShouldn't -b option be placed in alphabetical order?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 9 Jun 2023 00:32:15 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On Fri, Jun 09, 2023 at 12:32:15AM +0900, Fujii Masao wrote:\n> + <para>\n> + Remove backup history files.\n> \n> Isn't it better to document clearly which backup history files to be removed? 
For example, \"In addition to removing WAL files, remove backup history files with prefixes logically preceding the oldestkeptwalfile.\".\n\nI've written about this part at the beginning of this one, where this\nsounds like a duplicated description of the Description section:\nhttps://www.postgresql.org/message-id/ZG1nq13v411y4TFL@paquier.xyz\n\n> \tprintf(_(\" -n, --dry-run dry run, show the names of the files that would be removed\\n\"));\n> +\tprintf(_(\" -b, --clean-backup-history clean up files including backup history files\\n\"));\n> \n> Shouldn't -b option be placed in alphabetical order?\n\n+1.\n--\nMichael", "msg_date": "Mon, 12 Jun 2023 16:33:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On 2023-06-12 16:33, Michael Paquier wrote:\n> On Fri, Jun 09, 2023 at 12:32:15AM +0900, Fujii Masao wrote:\nThanks for reviewing!\n\n>> \tprintf(_(\" -n, --dry-run dry run, show the names of \n>> the files that would be removed\\n\"));\n>> +\tprintf(_(\" -b, --clean-backup-history clean up files including \n>> backup history files\\n\"));\n>> \n>> Shouldn't -b option be placed in alphabetical order?\n> \n> +1.\n\nModified the place.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Wed, 14 Jun 2023 00:49:39 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "At Wed, 14 Jun 2023 00:49:39 +0900, torikoshia <torikoshia@oss.nttdata.com> wrote in \n> On 2023-06-12 16:33, Michael Paquier wrote:\n> > On Fri, Jun 09, 2023 at 12:32:15AM +0900, Fujii Masao wrote:\n> Thanks for reviewing!\n> \n> >> \tprintf(_(\" -n, --dry-run dry run, show the names of the files that\n> >> \twould be removed\\n\"));\n> >> + printf(_(\" -b, --clean-backup-history clean up files including\n> >> backup 
history files\\n\"));\n> >> Shouldn't -b option be placed in alphabetical order?\n> > +1.\n> \n> Modified the place.\n\n-\tprintf(_(\" -d generate debug output (verbose mode)\\n\"));\n-\tprintf(_(\" -n dry run, show the names of the files that would be removed\\n\"));\n-\tprintf(_(\" -V, --version output version information, then exit\\n\"));\n-\tprintf(_(\" -x EXT clean up files if they have this extension\\n\"));\n-\tprintf(_(\" -?, --help show this help, then exit\\n\"));\n+\tprintf(_(\" -d, --debug generate debug output (verbose mode)\\n\"));\n+\tprintf(_(\" -n, --dry-run dry run, show the names of the files that would be removed\\n\"));\n+\tprintf(_(\" -V, --version output version information, then exit\\n\"));\n+\tprintf(_(\" -x, --strip-extension=EXT strip this extention before identifying files fo clean up\\n\"));\n+\tprintf(_(\" -?, --help show this help, then exit\\n\"));\n\nAfter this change, some of these lines corss the boundary of the 80\ncolumns width. (is that policy viable noadays? I am usually working\nusing terminal windows with such a width..) It's somewhat unrelated to\nthis patch, but a help line a few lines further down also exceeds the\nwidth. 
We could shorten it by removing the \"/mnt/server\" portion, but\nI'm not sure if it's worth doing.\n\n\n> Or for use as a standalone archive cleaner:\n> e.g.\n> pg_archivecleanup /mnt/server/archiverdir 000000010000000000000010.00000020.backup\n\n\n+\tprintf(_(\" -x, --strip-extension=EXT strip this extention before identifying files fo clean up\\n\"));\n\ns/fo/for/ ?\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 15 Jun 2023 15:20:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On 2023-06-15 15:20, Kyotaro Horiguchi wrote:\nThanks for your review!\n\n> At Wed, 14 Jun 2023 00:49:39 +0900, torikoshia\n> <torikoshia@oss.nttdata.com> wrote in\n>> On 2023-06-12 16:33, Michael Paquier wrote:\n>> > On Fri, Jun 09, 2023 at 12:32:15AM +0900, Fujii Masao wrote:\n>> Thanks for reviewing!\n>> \n>> >> \tprintf(_(\" -n, --dry-run dry run, show the names of the files that\n>> >> \twould be removed\\n\"));\n>> >> + printf(_(\" -b, --clean-backup-history clean up files including\n>> >> backup history files\\n\"));\n>> >> Shouldn't -b option be placed in alphabetical order?\n>> > +1.\n>> \n>> Modified the place.\n> \n> -\tprintf(_(\" -d generate debug output (verbose mode)\\n\"));\n> -\tprintf(_(\" -n dry run, show the names of the files that\n> would be removed\\n\"));\n> -\tprintf(_(\" -V, --version output version information, then \n> exit\\n\"));\n> -\tprintf(_(\" -x EXT clean up files if they have this \n> extension\\n\"));\n> -\tprintf(_(\" -?, --help show this help, then exit\\n\"));\n> +\tprintf(_(\" -d, --debug generate debug output\n> (verbose mode)\\n\"));\n> +\tprintf(_(\" -n, --dry-run dry run, show the names of\n> the files that would be removed\\n\"));\n> +\tprintf(_(\" -V, --version output version information,\n> then exit\\n\"));\n> +\tprintf(_(\" -x, 
--strip-extension=EXT strip this extention before\n> identifying files fo clean up\\n\"));\n> +\tprintf(_(\" -?, --help show this help, then \n> exit\\n\"));\n> \n> After this change, some of these lines corss the boundary of the 80\n> columns width. (is that policy viable noadays? I am usually working\n> using terminal windows with such a width..) It's somewhat unrelated to\n> this patch, but a help line a few lines further down also exceeds the\n> width. We could shorten it by removing the \"/mnt/server\" portion, but\n> I'm not sure if it's worth doing.\n\nI also highlight 80th column according to the wiki[1].\nSince usage() in other files like pg_rewind.c and initdb.c also\nexceeded the 80th column, I thought that was something like a guide.\n\n>> Or for use as a standalone archive cleaner:\n>> e.g.\n>> pg_archivecleanup /mnt/server/archiverdir \n>> 000000010000000000000010.00000020.backup\n> \n> \n> +\tprintf(_(\" -x, --strip-extension=EXT strip this extention before\n> identifying files fo clean up\\n\"));\n> \n> s/fo/for/ ?\n\nYeah, it's a typo. Fixed it.\n\n[1] \nhttps://wiki.postgresql.org/wiki/Configuring_vim_for_postgres_development\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Thu, 15 Jun 2023 21:38:28 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "At Thu, 15 Jun 2023 21:38:28 +0900, torikoshia <torikoshia@oss.nttdata.com> wrote in \n> On 2023-06-15 15:20, Kyotaro Horiguchi wrote:\n> Thanks for your review!\n> > + printf(_(\" -x, --strip-extension=EXT strip this extention before\n> > identifying files fo clean up\\n\"));\n> > + printf(_(\" -?, --help show this help, then exit\\n\"));\n> > After this change, some of these lines corss the boundary of the 80\n> > columns width. (is that policy viable noadays? I am usually working\n> > using terminal windows with such a width..) 
It's somewhat unrelated to\n> > this patch, but a help line a few lines further down also exceeds the\n> > width. We could shorten it by removing the \"/mnt/server\" portion, but\n> > I'm not sure if it's worth doing.\n> \n> I also highlight 80th column according to the wiki[1].\n> Since usage() in other files like pg_rewind.c and initdb.c also\n> exceeded the 80th column, I thought that was something like a guide.\n\nI think the page is suggesting about program code, not the messages\nthat binaries print.\n\nAFAICS the main section of the \"pg_rewind --help\" fits within 80\ncolumns. However, \"initdb --help\" does output a few lines exceeding\nthe 80-column limit. Judging by the surrounding lines, I believe we're\nstill aiming to maintain these within the given width. I think we need to fix\ninitdb in that regard.\n\n> [1]\n> https://wiki.postgresql.org/wiki/Configuring_vim_for_postgres_development\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 16 Jun 2023 11:22:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "At Fri, 16 Jun 2023 11:22:31 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> AFAICS the main section of the \"pg_rewind --help\" fits within 80\n> columns. However, \"initdb --help\" does output a few lines exceeding\n> the 80-column limit. Judging by the surrounding lines, I believe we're\n> still aiming to maintain these within the given width. I think we need to fix\n> initdb in that regard.\n\nMmm, the message was introduced in 2012 by 8a02339e9b.
I haven't\nnoticed this until now...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 16 Jun 2023 11:30:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On 2023-06-16 11:22, Kyotaro Horiguchi wrote:\n> At Thu, 15 Jun 2023 21:38:28 +0900, torikoshia\n> <torikoshia@oss.nttdata.com> wrote in\n>> On 2023-06-15 15:20, Kyotaro Horiguchi wrote:\n>> Thanks for your review!\n>> > + printf(_(\" -x, --strip-extension=EXT strip this extention before\n>> > identifying files fo clean up\\n\"));\n>> > + printf(_(\" -?, --help show this help, then exit\\n\"));\n>> > After this change, some of these lines corss the boundary of the 80\n>> > columns width. (is that policy viable noadays? I am usually working\n>> > using terminal windows with such a width..) It's somewhat unrelated to\n>> > this patch, but a help line a few lines further down also exceeds the\n>> > width. We could shorten it by removing the \"/mnt/server\" portion, but\n>> > I'm not sure if it's worth doing.\n>> \n>> I also highlight 80th column according to the wiki[1].\n>> Since usage() in other files like pg_rewind.c and initdb.c also\n>> exceeded the 80th column, I thought that was something like a guide.\n> \n> I think the page is suggesting about program code, not the messages\n> that binaries print.\n\nThanks, now I understand what you meant.\n\n> ASAICS the main section of the \"pg_rewind --help\" fits within 80\n> columns. However, \"initdb --help\" does output a few lines exceeding\n> the 80-column limit. Judging by the surrounding lines, I believe we're\n> still aiming to maintain these the given width. 
I think we need to fix\n> initdb in that regard.\n\nHmm, it seems some other commands also exceeds 80 columns:\n\n pg_amcheck:\n --skip=OPTION do NOT check \"all-frozen\" or \n\"all-visible\" blocks\n --startblock=BLOCK begin checking table(s) at the given \nblock number\n --endblock=BLOCK check table(s) only up to the given \nblock number\n\n --no-synchronized-snapshots do not use synchronized snapshots in \nparallel jobs\n\n pg_isready:\n -t, --timeout=SECS seconds to wait when attempting connection, 0 \ndisables (default: 3)\n\n pg_receivewal:\n --create-slot create a new replication slot (for the slot's \nname see --slot)\n --drop-slot drop the replication slot (for the slot's name \nsee --slot)\n\nIf you don't mind, I'm going to create another thread about this point.\nI'll also discuss below line since it's unrelated to current thread\nas you pointed out:\n\n| pg_archivecleanup /mnt/server/archiverdir \n000000010000000000000010.00000020.backup\n\n\nAttached patch fixes the number of columns per row exceeding 80 by\nchanging to use getopt_long.\n\n\nOn 2023-06-16 11:30, Kyotaro Horiguchi wrote:\n> At Fri, 16 Jun 2023 11:22:31 +0900 (JST), Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote in\n>> ASAICS the main section of the \"pg_rewind --help\" fits within 80\n>> columns. However, \"initdb --help\" does output a few lines exceeding\n>> the 80-column limit. Judging by the surrounding lines, I believe we're\n>> still aiming to maintain these the given width. I think we need to fix\n>> initdb in that regard.\n> \n> Mmm, the message was introduced in 2012 by 8a02339e9b. 
I haven't\n> noticed this until now...\n> \n> regards.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Mon, 19 Jun 2023 11:24:29 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On Mon, Jun 19, 2023 at 11:24:29AM +0900, torikoshia wrote:\n> Thanks, now I understand what you meant.\n\nIf I may ask, why is the refactoring of 0003 done after the feature in\n0002? Shouldn't the order be reversed? That would make for a cleaner\ngit history.\n--\nMichael", "msg_date": "Mon, 19 Jun 2023 14:37:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On 2023-06-19 14:37, Michael Paquier wrote:\n> On Mon, Jun 19, 2023 at 11:24:29AM +0900, torikoshia wrote:\n>> Thanks, now I understand what you meant.\n> \n> If I may ask, why is the refactoring of 0003 done after the feature in\n> 0002? Shouldn't the order be reversed? That would make for a cleaner\n> git history.\n> --\n> Michael\n\nAgreed.\nReversed the order of patches 0002 and 0003.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Tue, 20 Jun 2023 22:27:36 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "At Tue, 20 Jun 2023 22:27:36 +0900, torikoshia <torikoshia@oss.nttdata.com> wrote in \n> On 2023-06-19 14:37, Michael Paquier wrote:\n> > On Mon, Jun 19, 2023 at 11:24:29AM +0900, torikoshia wrote:\n> >> Thanks, now I understand what you meant.\n> > If I may ask, why is the refactoring of 0003 done after the feature in\n> > 0002? Shouldn't the order be reversed? 
That would make for a cleaner\n> > git history.\n> > --\n> > Michael\n> \n> Agreed.\n> Reversed the order of patches 0002 and 0003.\n\nYeah, that is a possible division. However, I meant that we have room\nto refactor and decrease the nesting level even further, considering\nthat 0003 already does this to some extent, when I suggested it. In\nthat sense, moving the nest-reduction part of 0003 into 0002 makes us\npossible to focus on the point of this patch.\n\nWhat do you think about the attached version? \n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 21 Jun 2023 11:59:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On 2023-06-21 11:59, Kyotaro Horiguchi wrote:\n> At Tue, 20 Jun 2023 22:27:36 +0900, torikoshia\n> <torikoshia@oss.nttdata.com> wrote in\n>> On 2023-06-19 14:37, Michael Paquier wrote:\n>> > On Mon, Jun 19, 2023 at 11:24:29AM +0900, torikoshia wrote:\n>> >> Thanks, now I understand what you meant.\n>> > If I may ask, why is the refactoring of 0003 done after the feature in\n>> > 0002? Shouldn't the order be reversed? That would make for a cleaner\n>> > git history.\n>> > --\n>> > Michael\n>> \n>> Agreed.\n>> Reversed the order of patches 0002 and 0003.\n> \n> Yeah, that is a possible division. However, I meant that we have room\n> to refactor and decrease the nesting level even further, considering\n> that 0003 already does this to some extent, when I suggested it. 
In\n> that sense, moving the nest-reduction part of 0003 into 0002 makes us\n> possible to focus on the point of this patch.\n\nThanks for the comment, it seems better than v9 patch.\n\n> What do you think about the attached version?\n\n--v10-0002-Preliminary-refactoring-for-a-subsequent-patch.patch\n+ * Also we skip backup history files when --clean-backup-history\n+ * is not specified.\n+ */\n+ if (!IsXLogFileName(walfile) && !IsPartialXLogFileName(walfile))\n+ continue;\n\nI think this comment should be located in 0003.\n\nAttached updated patches.\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Wed, 21 Jun 2023 23:41:33 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "At Wed, 21 Jun 2023 23:41:33 +0900, torikoshia <torikoshia@oss.nttdata.com> wrote in \n> --v10-0002-Preliminary-refactoring-for-a-subsequent-patch.patch\n> + * Also we skip backup history files when --clean-backup-history\n> + * is not specified.\n> + */\n> + if (!IsXLogFileName(walfile) && !IsPartialXLogFileName(walfile))\n> + continue;\n> \n> I think this comment should be located in 0003.\n\nAuch! Right.\n\n> Attached updated patches.\n\nThanks!\n\n\n0001:\n\nLooks good to me.\n\n0002:\n\n+\t\t * Check file name.\n+\t\t *\n+\t\t * We skip files which are not WAL file or partial WAL file.\n\nThere's no need to spend this many lines describing this, and it's\nsuggested to avoid restating what the code does. I think a comment\nmight not be necessary. But if we decide to have one, it could look\nlike this:\n\n> /* we're only removing specific types of files */\n\nOther than that, it looks good to me.\n\n\n0003:\n+ <para>\n+ Remove backup history files.\n\nI might be overthinking it, but I'd phrase it as, \"Remove backup\nhistory files *as well*.\". 
(The --help message describes the same\nthing using \"including\".)\n\n\n\n+ For details about backup history file, please refer to the <xref linkend=\"backup-base-backup\"/>.\n\nWe usually write this as simple as \"See <xref...> for details (of the\nbackup history files)\" or \"Refer to <xref..> for more information\n(about the backup history files).\" or such like... (I think)\n\n\n\n+bool\t\tcleanBackupHistory = false;\t/* remove files including\n+\t\t\t\t\t\t\t\t\t\t\t\t * backup history files */\n\nThe indentation appears to be inconsistent.\n\n\n-\t\t * Truncation is essentially harmless, because we skip names of\n-\t\t * length other than XLOG_FNAME_LEN. (In principle, one could use\n-\t\t * a 1000-character additional_ext and get trouble.)\n+\t\t * Truncation is essentially harmless, because we check the file\n+\t\t * format including the length immediately after this.\n+\t\t * (In principle, one could use a 1000-character additional_ext\n+\t\t * and get trouble.)\n \t\t */\n \t\tstrlcpy(walfile, xlde->d_name, MAXPGPATH);\n \t\tTrimExtension(walfile, additional_ext);\n\nThe revised comment seems to drift from the original point. Except for\na potential exception by a too-long addition_ext, the original comment\nasserted that the name truncation was safe since it wouldn't affect\nthe files we're removing. In other words, it asserted that the\nfilenames to be removed won't be truncated and any actual truncation\nwouldn't lead to accidental deletions.\n\nHence, I think we should adjust the comment to maintain its original\npoint, and extend it to include backup history files. A possible\nrevision could be (very simple):\n\n>\t\t * Truncation is essentially harmless, because we skip names of length\n>\t\t * longer than the length of backup history file. (In principle, one\n>\t\t * could use a 1000-character additional_ext and get trouble.)\n\n\nRegarding the TAP test, it checks that the --clean-backup-history does\nindeed remove backup history files. 
However, it doesn't check that\nthis doesn't occur if the option isn't specified. Shouldn't we test\nfor the latter scenario as well?\n\n\n+sub get_walfiles\n+{\n<snip..>\n+\n+create_files(get_walfiles(@walfiles_with_gz));\n\nThe new function get_walfiels() just increases the line count without\ncutting any lines. The following changes are sufficient and easier to\nread (at least for me).\n\n>\t\topen my $file, '>', \"$tempdir/$fn->{name}\";\n\n>\tforeach my $fn (map {$_->{name}} @walfiles_with_gz)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Jun 2023 16:47:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On 2023-06-22 16:47, Kyotaro Horiguchi wrote:\nThanks for your review!\n\n> \n> 0002:\n> \n> +\t\t * Check file name.\n> +\t\t *\n> +\t\t * We skip files which are not WAL file or partial WAL file.\n> \n> There's no need to spend this many lines describing this, and it's\n> suggested to avoid restating what the code does. I think a comment\n> might not be necessary. But if we decide to have one, it could look\n> like this:\n> \n>> /* we're only removing specific types of files */\n\nAs you mentioned, this comment is restatement of the codes.\nRemoved the comment.\n\n> \n> Other than that, it looks good to me.\n> \n> \n> 0003:\n> + <para>\n> + Remove backup history files.\n> \n> I might be overthinking it, but I'd phrase it as, \"Remove backup\n> history files *as well*.\". (The --help message describes the same\n> thing using \"including\".)\n\nAgreed.\n\n> + For details about backup history file, please refer to the\n> <xref linkend=\"backup-base-backup\"/>.\n> \n> We usually write this as simple as \"See <xref...> for details (of the\n> backup history files)\" or \"Refer to <xref..> for more information\n> (about the backup history files).\" or such like... 
(I think)\n\nAgreed. I used the former one.\n\n> +bool\t\tcleanBackupHistory = false;\t/* remove files including\n> +\t\t\t\t\t\t\t\t\t\t\t\t * backup history files */\n> \n> The indentation appears to be inconsistent.\n\nModified.\n\n> \n> \n> -\t\t * Truncation is essentially harmless, because we skip names of\n> -\t\t * length other than XLOG_FNAME_LEN. (In principle, one could use\n> -\t\t * a 1000-character additional_ext and get trouble.)\n> +\t\t * Truncation is essentially harmless, because we check the file\n> +\t\t * format including the length immediately after this.\n> +\t\t * (In principle, one could use a 1000-character additional_ext\n> +\t\t * and get trouble.)\n> \t\t */\n> \t\tstrlcpy(walfile, xlde->d_name, MAXPGPATH);\n> \t\tTrimExtension(walfile, additional_ext);\n> \n> The revised comment seems to drift from the original point. Except for\n> a potential exception by a too-long addition_ext, the original comment\n> asserted that the name truncation was safe since it wouldn't affect\n> the files we're removing. In other words, it asserted that the\n> filenames to be removed won't be truncated and any actual truncation\n> wouldn't lead to accidental deletions.\n> \n> Hence, I think we should adjust the comment to maintain its original\n> point, and extend it to include backup history files. A possible\n> revision could be (very simple):\n> \n>> \t\t * Truncation is essentially harmless, because we skip names of \n>> length\n>> \t\t * longer than the length of backup history file. 
(In principle, one\n>> \t\t * could use a 1000-character additional_ext and get trouble.)\n\nThis is true, but we do stricter check for preventing accidental\ndeletion at the below code than just skipping \"names of length longer\nthan the length of backup history file\".\n\n| if (!IsXLogFileName(walfile) && !IsPartialXLogFileName(walfile) \n&&\n| !(cleanBackupHistory && IsBackupHistoryFileName(walfile)))\n| continue;\n\nHow about something like this?\n\n| Truncation is essentially harmless, because we skip files whose format \nis different from WAL files and backup history files.\n\n> Regarding the TAP test, it checks that the --clean-backup-history does\n> indeed remove backup history files. However, it doesn't check that\n> this doesn't occur if the option isn't specified. Shouldn't we test\n> for the latter scenario as well?\n\nAgreed.\nAdded a backup history file to @walfiles_with_gz.\n\n> +sub get_walfiles\n> +{\n> <snip..>\n> +\n> +create_files(get_walfiles(@walfiles_with_gz));\n> \n> The new function get_walfiels() just increases the line count without\n> cutting any lines. 
The following changes are sufficient and easier to\n> read (at least for me).\n> \n>> \t\topen my $file, '>', \"$tempdir/$fn->{name}\";\n> \n>> \tforeach my $fn (map {$_->{name}} @walfiles_with_gz)\n\nAgreed.\nRemove get_walfiles() and added some changes as above.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Fri, 23 Jun 2023 17:37:09 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On Fri, Jun 23, 2023 at 05:37:09PM +0900, torikoshia wrote:\n> On 2023-06-22 16:47, Kyotaro Horiguchi wrote:\n> Thanks for your review!\n\nI have begun cleaning up my board, and applied 0001 for the moment.\n--\nMichael", "msg_date": "Fri, 30 Jun 2023 15:48:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On Fri, Jun 30, 2023 at 03:48:43PM +0900, Michael Paquier wrote:\n> I have begun cleaning up my board, and applied 0001 for the moment.\n\nAnd a few weeks later.. I have come around this thread and applied\n0002 and 0003.\n\nThe flow of 0002 was straight-forward. 
My main issue was in 0003,\nactually, where the TAP tests were kind of confusing as written:\n- There was no cleanup of the files still present after a single\ncommand check, which could easily mess up the tests.\n- The --dry-run case was using the list of WAL files for the extension\npattern checks, hardcoding names based on the position of its array.\nI have switched that to use a third list of files, instead.\n\nThe result looked OK and that can be extended easily for more\npatterns or more commands, so applied 0003 after doing these\nadjustments, coupled with a pgperltidy run, a pgperlcritic check and\nan indentation.\n--\nMichael", "msg_date": "Wed, 19 Jul 2023 13:58:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" }, { "msg_contents": "On 2023-07-19 13:58, Michael Paquier wrote:\n> On Fri, Jun 30, 2023 at 03:48:43PM +0900, Michael Paquier wrote:\n>> I have begun cleaning up my board, and applied 0001 for the moment.\n> \n> And a few weeks later.. I have come around this thread and applied\n> 0002 and 0003.\n> \n> The flow of 0002 was straight-forward. 
My main issue was in 0003,\n> actually, where the TAP tests were kind of confusing as written:\n> - There was no cleanup of the files still present after a single\n> command check, which could easily mess up the tests.\n> - The --dry-run case was using the list of WAL files for the extension\n> pattern checks, hardcoding names based on the position of its array.\n> I have switched that to use a third list of files, instead.\n> \n> The result looked OK and that can be extended easily for more\n> patterns or more commands, so applied 0003 after doing these\n> adjustments, coupled with a pgperltidy run, a pgperlcritic check and\n> an indentation.\n> --\n> Michael\n\nThanks for the reviewing and applying the patches!\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 19 Jul 2023 21:44:02 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Allow pg_archivecleanup to remove backup history files" } ]
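The filename filter settled on in the thread above (skip anything that is not a WAL segment, a partial WAL file, or, only when backup-history cleanup is requested, a backup history file) can be sketched as standalone C. The helpers below are simplified stand-ins for PostgreSQL's IsXLogFileName(), IsPartialXLogFileName() and IsBackupHistoryFileName() macros, not the real implementations:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define XLOG_FNAME_LEN 24		/* 8 hex TLI + 8 hex log no + 8 hex seg no */

/* true iff the first n characters of s are uppercase hex digits */
static bool
is_hex_run(const char *s, int n)
{
	for (int i = 0; i < n; i++)
	{
		char		c = s[i];

		if (!((c >= '0' && c <= '9') || (c >= 'A' && c <= 'F')))
			return false;
	}
	return true;
}

/* simplified stand-in for IsXLogFileName(): exactly 24 hex digits */
static bool
looks_like_wal_file(const char *name)
{
	return strlen(name) == XLOG_FNAME_LEN && is_hex_run(name, XLOG_FNAME_LEN);
}

/* simplified stand-in for IsPartialXLogFileName(): WAL name + ".partial" */
static bool
looks_like_partial_wal_file(const char *name)
{
	return strlen(name) == XLOG_FNAME_LEN + strlen(".partial") &&
		is_hex_run(name, XLOG_FNAME_LEN) &&
		strcmp(name + XLOG_FNAME_LEN, ".partial") == 0;
}

/* simplified IsBackupHistoryFileName(): WAL name + "." + 8 hex + ".backup" */
static bool
looks_like_backup_history_file(const char *name)
{
	return strlen(name) == XLOG_FNAME_LEN + 1 + 8 + strlen(".backup") &&
		is_hex_run(name, XLOG_FNAME_LEN) &&
		name[XLOG_FNAME_LEN] == '.' &&
		is_hex_run(name + XLOG_FNAME_LEN + 1, 8) &&
		strcmp(name + XLOG_FNAME_LEN + 1 + 8, ".backup") == 0;
}

/* the per-directory-entry filter, applied after TrimExtension() */
static bool
is_removal_candidate(const char *name, bool clean_backup_history)
{
	return looks_like_wal_file(name) ||
		looks_like_partial_wal_file(name) ||
		(clean_backup_history && looks_like_backup_history_file(name));
}
```

Because a candidate must match one of these fixed formats exactly, including its length, truncating an over-long directory entry name before the check cannot make an unrelated file eligible for deletion, which is the point of the comment discussed above.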
[ { "msg_contents": "Hi\n\nWhen I implemented profiler and coverage check to plpgsql_check I had to\nwrite a lot of hard maintaining code related to corect finishing some\noperations (counter incrementing) usually executed by stmt_end and func_end\nhooks. It is based on the fmgr hook and its own statement call stack. Can\nbe nice if I can throw this code and use some nice buildin API.\n\nCan we enhance dbg API with two hooks stmt_end_err func_end_err ?\n\nThese hooks can be called from exception handlers before re raising.\n\nOr we can define new hooks like executor hooks - stmt_exec and func_exec.\nIn custom hooks the exception can be catched.\n\nWhat do you think about this proposal?\n\nregards\n\nPavel", "msg_date": "Tue, 25 Apr 2023 10:27:59 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "enhancing plpgsql debug api - hooks on statements errors and function\n errors" }, { "msg_contents": "Hi\n\n\nút 25. 4. 2023 v 10:27 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> When I implemented profiler and coverage check to plpgsql_check I had to\n> write a lot of hard maintaining code related to corect finishing some\n> operations (counter incrementing) usually executed by stmt_end and func_end\n> hooks. 
It is based on the fmgr hook and its own statement call stack. Can\n> be nice if I can throw this code and use some nice buildin API.\n>\n> Can we enhance dbg API with two hooks stmt_end_err func_end_err ?\n>\n> These hooks can be called from exception handlers before re raising.\n>\n> Or we can define new hooks like executor hooks - stmt_exec and func_exec.\n> In custom hooks the exception can be catched.\n>\n> What do you think about this proposal?\n>\n>\nI did quick and ugly benchmark on worst case\n\nCREATE OR REPLACE FUNCTION public.speedtest(i integer)\n RETURNS void\n LANGUAGE plpgsql\n IMMUTABLE\nAS $function$\ndeclare c int = 0;\nbegin\n while c < i\n loop\n c := c + 1;\n end loop;\n raise notice '%', c;\nend;\n$function$\n\nand is possible to write some code (see ugly patch) without any performance\nimpacts if the hooks are not used. When hooks are active, then there is 7%\nperformance lost. It is not nice - but this is the worst case. The impact\non real code should be significantly lower\n\nRegards\n\nPavel", "msg_date": "Tue, 25 Apr 2023 17:32:47 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: enhancing plpgsql debug api - hooks on statements errors and\n function errors" }, { "msg_contents": "On Tue, Apr 25, 2023 at 11:33 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n> út 25. 4. 2023 v 10:27 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>> Hi\n>>\n>> When I implemented profiler and coverage check to plpgsql_check I had to\n>> write a lot of hard maintaining code related to corect finishing some\n>> operations (counter incrementing) usually executed by stmt_end and func_end\n>> hooks. It is based on the fmgr hook and its own statement call stack. 
Can\n>> be nice if I can throw this code and use some nice buildin API.\n>>\n>> Can we enhance dbg API with two hooks stmt_end_err func_end_err ?\n>>\n>> These hooks can be called from exception handlers before re raising.\n>>\n>> Or we can define new hooks like executor hooks - stmt_exec and func_exec.\n>> In custom hooks the exception can be catched.\n>>\n>> What do you think about this proposal?\n>>\n>> +1. I believe I bumped into a few of these use cases with plpgsql_check\n(special handling for security definer and exception handling).\n My cursory review of the patch file is that despite the movement of the\ncode, it feels pretty straight forward.\n\nThe 7% overhead appears in a \"tight loop\", so it's probably really\noverstated. I will see if I can apply this and do a more realistic test.\n[I have a procedure that takes ~2hrs to process a lot of data, I would be\ncurious to see this impact and report back]\n\n\n> I did quick and ugly benchmark on worst case\n>\n> CREATE OR REPLACE FUNCTION public.speedtest(i integer)\n> RETURNS void\n> LANGUAGE plpgsql\n> IMMUTABLE\n> AS $function$\n> declare c int = 0;\n> begin\n> while c < i\n> loop\n> c := c + 1;\n> end loop;\n> raise notice '%', c;\n> end;\n> $function$\n>\n> and is possible to write some code (see ugly patch) without any\n> performance impacts if the hooks are not used. When hooks are active, then\n> there is 7% performance lost. It is not nice - but this is the worst case.\n> The impact on real code should be significantly lower\n>\n> Regards\n>\n> Pavel\n>\n>\n", "msg_date": "Wed, 10 May 2023 00:11:20 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: enhancing plpgsql debug api - hooks on statements errors and\n function errors" } ]
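The hook pair proposed in this thread can be modeled outside PostgreSQL with plain function pointers: on the error path the executor invokes the error hook before propagating the failure, so an instrumentation plugin no longer needs its own shadow statement stack. The names and signatures below are illustrative only, not the actual plpgsql debug API:

```c
#include <assert.h>
#include <stdbool.h>

/* toy model of the proposed callback set */
typedef struct DebugHooks
{
	void		(*stmt_begin) (int stmt_id);
	void		(*stmt_end) (int stmt_id);	/* statement finished normally */
	void		(*stmt_end_err) (int stmt_id);	/* statement failed; called
												 * before the error is
												 * re-raised */
} DebugHooks;

static int	normal_ends;
static int	error_ends;

static void count_begin(int stmt_id) { (void) stmt_id; }
static void count_end(int stmt_id) { (void) stmt_id; normal_ends++; }
static void count_end_err(int stmt_id) { (void) stmt_id; error_ends++; }

static const DebugHooks counting_hooks = {count_begin, count_end, count_end_err};

/* stand-in for executing one statement; false means "error raised" */
static bool
run_stmt(int stmt_id)
{
	return stmt_id != 3;		/* pretend statement 3 always fails */
}

/*
 * Executor loop: with stmt_end_err available, the plugin's counters are
 * finalized on both the success path and the error path, with no extra
 * bookkeeping inside the plugin.
 */
static bool
exec_stmts(const DebugHooks *hooks, const int *stmt_ids, int nstmts)
{
	for (int i = 0; i < nstmts; i++)
	{
		hooks->stmt_begin(stmt_ids[i]);
		if (run_stmt(stmt_ids[i]))
			hooks->stmt_end(stmt_ids[i]);
		else
		{
			hooks->stmt_end_err(stmt_ids[i]);
			return false;		/* propagate ("re-raise") the error */
		}
	}
	return true;
}
```

With only stmt_end available, a profiler such as plpgsql_check has to detect abandoned statements itself (via the fmgr hook and a private call stack, as described above); with a stmt_end_err-style hook the executor drives both paths.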
[ { "msg_contents": "Hi,\n\nRegarding pg_stat_io for the startup process, I noticed that the counters\nare only incremented after the startup process exits, not during WAL replay\nin standby mode. This is because pgstat_flush_io() is only called when\nthe startup process exits. Shouldn't it be called during WAL replay as well\nto report IO statistics by the startup process even in standby mode?\n\nAlso, the pg_stat_io view includes a row with backend_type=startup and\ncontext=vacuum, but it seems that the startup process doesn't perform\nany I/O operations with BAS_VACUUM. If this understanding is right,\nshouldn't we omit this row from the view? Additionally, I noticed that\nthe view also includes a row with backend_type=startup and\ncontext=bulkread / bulkwrite. Do these operations actually occur\nduring startup process?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 25 Apr 2023 22:51:14 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "pg_stat_io for the startup process" }, { "msg_contents": "On Tue, Apr 25, 2023 at 10:51:14PM +0900, Fujii Masao wrote:\n> Hi,\n> \n> Regarding pg_stat_io for the startup process, I noticed that the counters\n> are only incremented after the startup process exits, not during WAL replay\n> in standby mode. This is because pgstat_flush_io() is only called when\n> the startup process exits. 
Shouldn't it be called during WAL replay as well\n> to report IO statistics by the startup process even in standby mode?\n\nYes, we definitely want stats from the startup process on the standby.\nElsewhere on the internet where you originally raised this, I mentioned\nthat I hacked a pgstat_flush_io() into the redo apply loop in\nPerformWalRecovery() but that I wasn't sure that this was affordable.\nAndres Freund replied saying that it would be too expensive and\nsuggested that the set up a regular timeout which sets a flag that's\nchecked by HandleStartupProcInterrupts().\n\nI'm wondering if this is something we consider a bug and thus would be\nunder consideration for 16.\n \n> Also, the pg_stat_io view includes a row with backend_type=startup and\n> context=vacuum, but it seems that the startup process doesn't perform\n> any I/O operations with BAS_VACUUM. If this understanding is right,\n> shouldn't we omit this row from the view? Additionally, I noticed that\n> the view also includes a row with backend_type=startup and\n> context=bulkread / bulkwrite. Do these operations actually occur\n> during startup process?\n\nHmm. Yes, I remember posing this question on the thread and not getting\nan answer. I read some code and did some testing and can't see a way we\nwould end up with the startup process doing IO in a non-normal context.\n\nCertainly I can't see how startup process would ever use a BAS_VACUUM\ncontext given that it executes heap_xlog_vacuum().\n\nI thought at some point I had encountered an assertion failure when I\nbanned the startup process from tracking io operations in bulkread and\nbulkwrite contexts. 
But, I'm not seeing how that could happen.\n\n- Melanie\n\n\n", "msg_date": "Tue, 25 Apr 2023 13:54:43 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "Hi,\n\nOn 2023-04-25 13:54:43 -0400, Melanie Plageman wrote:\n> On Tue, Apr 25, 2023 at 10:51:14PM +0900, Fujii Masao wrote:\n> > Regarding pg_stat_io for the startup process, I noticed that the counters\n> > are only incremented after the startup process exits, not during WAL replay\n> > in standby mode. This is because pgstat_flush_io() is only called when\n> > the startup process exits. Shouldn't it be called during WAL replay as well\n> > to report IO statistics by the startup process even in standby mode?\n> \n> Yes, we definitely want stats from the startup process on the standby.\n> Elsewhere on the internet where you originally raised this, I mentioned\n> that I hacked a pgstat_flush_io() into the redo apply loop in\n> PerformWalRecovery() but that I wasn't sure that this was affordable.\n> Andres Freund replied saying that it would be too expensive and\n> suggested that the set up a regular timeout which sets a flag that's\n> checked by HandleStartupProcInterrupts().\n\nIt's tempting to try to reuse the STARTUP_PROGRESS_TIMEOUT timer. But it's\ncontrolled by a GUC, so it's not really suitable.\n\n\n> I'm wondering if this is something we consider a bug and thus would be\n> under consideration for 16.\n\nI'm mildly inclined to not consider it a bug, given that this looks to have\nbeen true for other stats for quite a while? But it does still seem worth\nimproving upon - I'd make the consideration when to apply the relevant patches\ndepend on the complexity. I'm worried we'd need to introduce sufficiently new\ninfrastructure that 16 doesn't seem like a good idea. 
Let's come up with a\npatch and judge it after?\n\n\n> > Also, the pg_stat_io view includes a row with backend_type=startup and\n> > context=vacuum, but it seems that the startup process doesn't perform\n> > any I/O operations with BAS_VACUUM. If this understanding is right,\n> > shouldn't we omit this row from the view? Additionally, I noticed that\n> > the view also includes a row with backend_type=startup and\n> > context=bulkread / bulkwrite. Do these operations actually occur\n> > during startup process?\n> \n> Hmm. Yes, I remember posing this question on the thread and not getting\n> an answer. I read some code and did some testing and can't see a way we\n> would end up with the startup process doing IO in a non-normal context.\n> \n> Certainly I can't see how startup process would ever use a BAS_VACUUM\n> context given that it executes heap_xlog_vacuum().\n> \n> I thought at some point I had encountered an assertion failure when I\n> banned the startup process from tracking io operations in bulkread and\n> bulkwrite contexts. But, I'm not seeing how that could happen.\n\nIt's possible that we decided to not apply such restrictions because the\nstartup process can be made to execute more code via the extensible\nrmgrs.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Apr 2023 11:39:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "On Tue, Apr 25, 2023 at 2:39 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm mildly inclined to not consider it a bug, given that this looks to have\n> been true for other stats for quite a while? But it does still seem worth\n> improving upon - I'd make the consideration when to apply the relevant patches\n> depend on the complexity. I'm worried we'd need to introduce sufficiently new\n> infrastructure that 16 doesn't seem like a good idea. 
Let's come up with a\n> patch and judge it after?\n\nISTM that it's pretty desirable to do something about this. If the\nprocess isn't going to report statistics properly, at least remove it\nfrom the view. If it can be made to report properly, that would be\neven better. But shipping a new view with information that will nearly\nalways be zeroes instead of real data seems like a bad call, even if\nthere are existing cases that have the same problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Apr 2023 16:00:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "On Tue, Apr 25, 2023 at 04:00:24PM -0400, Robert Haas wrote:\n> ISTM that it's pretty desirable to do something about this. If the\n> process isn't going to report statistics properly, at least remove it\n> from the view. If it can be made to report properly, that would be\n> even better. But shipping a new view with information that will nearly\n> always be zeroes instead of real data seems like a bad call, even if\n> there are existing cases that have the same problem.\n\nAgreed that reporting no information may be better than reporting\nincorrect information, even if it means an extra qual. As mentioned\nupthread, if this requires an extra design piece, adding the correct\ninformation had better be pushed to v17~.\n\nPerhaps an open item should be added?\n--\nMichael", "msg_date": "Wed, 26 Apr 2023 07:45:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "Hi,\n\nOn 2023-04-25 16:00:24 -0400, Robert Haas wrote:\n> On Tue, Apr 25, 2023 at 2:39 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'm mildly inclined to not consider it a bug, given that this looks to have\n> > been true for other stats for quite a while? 
But it does still seem worth\n> > improving upon - I'd make the consideration when to apply the relevant patches\n> > depend on the complexity. I'm worried we'd need to introduce sufficiently new\n> > infrastructure that 16 doesn't seem like a good idea. Let's come up with a\n> > patch and judge it after?\n> \n> ISTM that it's pretty desirable to do something about this. If the\n> process isn't going to report statistics properly, at least remove it\n> from the view.\n\nIt's populated after crash recovery, when shutting down and at the time of\npromotion, that isn't *completely* crazy.\n\n\n> If it can be made to report properly, that would be even better. But\n> shipping a new view with information that will nearly always be zeroes\n> instead of real data seems like a bad call, even if there are existing cases\n> that have the same problem.\n\nI refreshed my memory: The startup process has indeed behaved that way for\nmuch longer than pg_stat_io existed - but it's harder to spot, because the\nstats are more coarsely aggregated :/. And it's very oddly inconsistent:\n\nThe startup process doesn't report per-relation read/hit (it might when we\ncreate a fake relcache entry, to lazy to see what happens exactly), because we\nkey those stats by oid. However, it *does* report the read/write time. But\nonly at process exit, of course. The weird part is that the startup process\ndoes *NOT* increase pg_stat_database.blks_read/blks_hit, because instead of\nbasing those on pgBufferUsage.shared_blks_read etc, we compute them based on\nthe relation level stats. pgBufferUsage is just used for EXPLAIN. 
This isn't\nrecent, afaict.\n\nTL;DR: Currently the startup process maintains blk_read_time, blk_write_time,\nbut doesn't maintain blks_read, blks_hit - which doesn't make sense.\n\nYikes.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Apr 2023 16:04:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "At Tue, 25 Apr 2023 16:04:23 -0700, Andres Freund <andres@anarazel.de> wrote in \n> I refreshed my memory: The startup process has indeed behaved that way for\n> much longer than pg_stat_io existed - but it's harder to spot, because the\n> stats are more coarsely aggregated :/. And it's very oddly inconsistent:\n> \n> The startup process doesn't report per-relation read/hit (it might when we\n> create a fake relcache entry, to lazy to see what happens exactly), because we\n\nThe key difference lies between relation-level and smgr-level;\nrecovery doesn't call ReadBufferExtended.\n\n> key those stats by oid. However, it *does* report the read/write time. But\n> only at process exit, of course. The weird part is that the startup process\n> does *NOT* increase pg_stat_database.blks_read/blks_hit, because instead of\n> basing those on pgBufferUsage.shared_blks_read etc, we compute them based on\n> the relation level stats. pgBufferUsage is just used for EXPLAIN. This isn't\n> recent, afaict.\n\nI see four issues here.\n\n1. The current database stats omits buffer fetches that don't\n originate from a relation.\n\nIn this case pgstat_relation can't work since recovery isn't conscious\nof relids. We might be able to resolve relfilenode into a relid, but\nit may not be that simple. Fortunately we already count fetches and\nhits process-wide using pgBufferUsage, so we can use this for database\nstats.\n\n2. 
Even if we wanted to report stats for the startup process,\n pgstat_report_stats wouldn't permit it since transaction-end\n timestamp doesn't advance.\n\nI'm not certain if it's the correct approach, but perhaps we could use\nGetCurrentTimestamp() instead of GetCurrentTransactionStopTimestamp()\nspecifically for the startup process.\n\n3. When should we call pgstat_report_stats on the startup process?\n\nDuring recovery, I think we can call pgstat_report_stats() (or a\nsubset of it) right before invoking WaitLatch and at segment\nboundaries.\n\n4. In the existing ReadBuffer_common, there's an inconsistency in\ncounting hits and reads between pgstat_io and pgBufferUsage.\n\nThe difference comes from the case of RBM_ZERO pages. We should simply\nalign them.\n\n\n> TL;DR: Currently the startup process maintains blk_read_time, blk_write_time,\n> but doesn't maintain blks_read, blks_hit - which doesn't make sense.\n\nAs a result, the attached patch, which is meant for discussion, allows\npg_stat_database to show fetches and reads by the startup process as\nthe counts for the database with id 0.\n\nThere's still some difference between pg_stat_io and pg_stat_database,\nbut I haven't examined it in detail.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 26 Apr 2023 18:47:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "On Wed, Apr 26, 2023 at 5:47 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> 3. When should we call pgstat_report_stats on the startup process?\n>\n> During recovery, I think we can call pgstat_report_stats() (or a\n> subset of it) right before invoking WaitLatch and at segment\n> boundaries.\n\nI think this kind of idea is worth exploring. 
Andres mentioned timers,\nbut it seems to me that something where we just do it at certain\n\"convenient\" points might be good enough and simpler. I'd much rather\nhave statistics that were up to date as of the last time we finished a\nsegment than have nothing at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 Apr 2023 08:34:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "Hi all,\n\nRobert Haas <robertmhaas@gmail.com>, 26 Nis 2023 Çar, 15:34 tarihinde şunu\nyazdı:\n\n> On Wed, Apr 26, 2023 at 5:47 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > 3. When should we call pgstat_report_stats on the startup process?\n> >\n> > During recovery, I think we can call pgstat_report_stats() (or a\n> > subset of it) right before invoking WaitLatch and at segment\n> > boundaries.\n>\n> I think this kind of idea is worth exploring. Andres mentioned timers,\n> but it seems to me that something where we just do it at certain\n> \"convenient\" points might be good enough and simpler. I'd much rather\n> have statistics that were up to date as of the last time we finished a\n> segment than have nothing at all.\n>\n\nI created a rough prototype of a timer-based approach for comparison.\nPlease see attached.\n\nRegistered a new timeout named \"STARTUP_STAT_FLUSH_TIMEOUT\", The timeout's\nhandler sets a flag indicating that io stats need to be flushed.\nHandleStartupProcInterrupts checks the flag to flush stats.\nIt's enabled if any WAL record is read (in the main loop of\nPerformWalRecovery). 
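The core of that mechanism, a handler that does nothing but set a flag while the real flush is deferred to the next interrupt-processing point, can be sketched standalone like this (simplified; these names are illustrative, not the symbols in the attached patch):

```c
#include <assert.h>
#include <signal.h>

/* set from the timeout handler; examined at each interrupt check */
static volatile sig_atomic_t stat_flush_requested = 0;

static int	flush_count;		/* stands in for the pgstat_flush_io() work */

/* timeout callback: async-signal-safe, only flips the flag */
static void
stat_flush_timeout_handler(void)
{
	stat_flush_requested = 1;
}

/* stand-in for the relevant part of HandleStartupProcInterrupts() */
static void
handle_startup_interrupts(void)
{
	if (stat_flush_requested)
	{
		stat_flush_requested = 0;
		flush_count++;			/* a pgstat_flush_io() call would go here */
	}
}
```

In the real startup process the flag would be raised by the timeout infrastructure; calling the handler directly only simulates the timer firing.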
And it's disabled before WaitLatch to avoid\nunnecessary wakeups if the startup process decides to sleep.\n\nWith those changes, I see that startup related rows in pg_stat_io are\nupdated without waiting for the startup process to exit.\n\npostgres=# select context, reads, extends, hits from pg_stat_io where\n> backend_type = 'startup';\n> context | reads | extends | hits\n> -----------+--------+---------+----------\n> bulkread | 0 | | 0\n> bulkwrite | 0 | 0 | 0\n> normal | 6 | 1 | 41\n> vacuum | 0 | 0 | 0\n> (4 rows)\n\n\nI'm not sure this is the correct way to implement this approach though. I\nappreciate any comment.\n\nAlso; some questions about this implementation if you think it's worth\ndiscussing:\n1- I set an arbitrary timeout (1 sec) for testing. I don't know what the\ncorrect value should be. Does it make sense to use\nPGSTAT_MIN/MAX/IDLE_INTERVAL instead?\n2- I'm also not sure if this timeout should be registered at the beginning\nof StartupProcessMain, or does it even make any difference. I tried to do\nthis just before the main redo loop in PerformWalRecovery, but that made\nthe CI red.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 26 Apr 2023 21:53:16 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "Hi,\n\nOn 2023-04-26 21:53:16 +0300, Melih Mutlu wrote:\n> Robert Haas <robertmhaas@gmail.com>, 26 Nis 2023 Çar, 15:34 tarihinde şunu\n> yazdı:\n> \n> > On Wed, Apr 26, 2023 at 5:47 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > 3. When should we call pgstat_report_stats on the startup process?\n> > >\n> > > During recovery, I think we can call pgstat_report_stats() (or a\n> > > subset of it) right before invoking WaitLatch and at segment\n> > > boundaries.\n> >\n> > I think this kind of idea is worth exploring. 
Andres mentioned timers,\n> > but it seems to me that something where we just do it at certain\n> > \"convenient\" points might be good enough and simpler. I'd much rather\n> > have statistics that were up to date as of the last time we finished a\n> > segment than have nothing at all.\n> >\n> \n> I created a rough prototype of a timer-based approach for comparison.\n> Please see attached.\n\nThanks!\n\n\n> 2- I'm also not sure if this timeout should be registered at the beginning\n> of StartupProcessMain, or does it even make any difference. I tried to do\n> this just before the main redo loop in PerformWalRecovery, but that made\n> the CI red.\n\nHuh - do you have a link to the failure? That's how I would expect it to be\ndone.\n\n\n> /* Unsupported old recovery command file names (relative to $PGDATA) */\n> #define RECOVERY_COMMAND_FILE\t\"recovery.conf\"\n> @@ -1675,6 +1676,9 @@ PerformWalRecovery(void)\n> \t\t\t\tereport_startup_progress(\"redo in progress, elapsed time: %ld.%02d s, current LSN: %X/%X\",\n> \t\t\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(xlogreader->ReadRecPtr));\n> \n> +\t\t\t/* Is this the right place to enable this? */\n> +\t\t\tenable_startup_stat_flush_timeout();\n> +\n\nI think we should try not have additional code for this inside the loop - we\nshould enable the timer once, when needed, not repeatedly.\n\n\n> #ifdef WAL_DEBUG\n> \t\t\tif (XLOG_DEBUG ||\n> \t\t\t\t(record->xl_rmid == RM_XACT_ID && trace_recovery_messages <= DEBUG2) ||\n> @@ -3617,6 +3621,13 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n> \t\t\t\t\t\t/* Do background tasks that might benefit us later. */\n> \t\t\t\t\t\tKnownAssignedTransactionIdsIdleMaintenance();\n> \n> +\t\t\t\t\t\t/* \n> +\t\t\t\t\t\t * Need to disable flush timeout to avoid unnecessary\n> +\t\t\t\t\t\t * wakeups. 
Enable it again after a WAL record is read\n> +\t\t\t\t\t\t * in PerformWalRecovery.\n> +\t\t\t\t\t\t */\n> +\t\t\t\t\t\tdisable_startup_stat_flush_timeout();\n> +\n> \t\t\t\t\t\t(void) WaitLatch(&XLogRecoveryCtl->recoveryWakeupLatch,\n> \t\t\t\t\t\t\t\t\t\t WL_LATCH_SET | WL_TIMEOUT |\n> \t\t\t\t\t\t\t\t\t\t WL_EXIT_ON_PM_DEATH,\n\nI think always disabling the timer here isn't quite right - we want to wake up\n*once* in WaitForWALToBecomeAvailable(), otherwise we'll not submit pending\nstats before waiting - potentially for a long time - for WAL. One way would be\njust explicitly report before the WaitLatch().\n\nAnother approach, I think better, would be to not use enable_timeout_every(),\nand to explicitly rearm the timer in HandleStartupProcInterrupts(). When\ncalled from WaitForWALToBecomeAvailable(), we'd not rearm, and instead do so\nat the end of WaitForWALToBecomeAvailable(). That way we also wouldn't\nrepeatedly fire between calls to HandleStartupProcInterrupts().\n\n\n> diff --git a/src/backend/postmaster/startup.c b/src/backend/postmaster/startup.c\n> index efc2580536..b250fa95f9 100644\n> --- a/src/backend/postmaster/startup.c\n> +++ b/src/backend/postmaster/startup.c\n> @@ -72,6 +72,11 @@ static TimestampTz startup_progress_phase_start_time;\n> */\n> static volatile sig_atomic_t startup_progress_timer_expired = false;\n> \n> +/* Indicates whether flushing stats is needed. */\n> +static volatile sig_atomic_t startup_stat_need_flush = false;\n> +\n> +int\t\t\tpgstat_stat_flush_timeout = 1000;\t/* 1 sec ?? 
*/\n\nWe probably should move the existing PGSTAT_MIN_INTERVAL constant from\npgstat.c to pgstat.h.\n\n\n> +extern void enable_startup_stat_flush_timeout(void);\n> +extern void disable_startup_stat_flush_timeout(void);\n> +extern void startup_stat_flush_timeout_handler(void);\n> +\n> #endif\t\t\t\t\t\t\t/* _STARTUP_H */\n> diff --git a/src/include/utils/timeout.h b/src/include/utils/timeout.h\n> index e561a1cde9..a8d360e255 100644\n> --- a/src/include/utils/timeout.h\n> +++ b/src/include/utils/timeout.h\n> @@ -35,6 +35,7 @@ typedef enum TimeoutId\n> \tIDLE_STATS_UPDATE_TIMEOUT,\n> \tCLIENT_CONNECTION_CHECK_TIMEOUT,\n> \tSTARTUP_PROGRESS_TIMEOUT,\n> +\tSTARTUP_STAT_FLUSH_TIMEOUT,\n> \t/* First user-definable timeout reason */\n> \tUSER_TIMEOUT,\n> \t/* Maximum number of timeout reasons */\n\nI think we could just reuse IDLE_STATS_UPDATE_TIMEOUT?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 Apr 2023 09:27:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "On Wed, Apr 26, 2023 at 2:53 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi all,\n>\n> Robert Haas <robertmhaas@gmail.com>, 26 Nis 2023 Çar, 15:34 tarihinde şunu yazdı:\n>>\n>> On Wed, Apr 26, 2023 at 5:47 AM Kyotaro Horiguchi\n>> <horikyota.ntt@gmail.com> wrote:\n>> > 3. When should we call pgstat_report_stats on the startup process?\n>> >\n>> > During recovery, I think we can call pgstat_report_stats() (or a\n>> > subset of it) right before invoking WaitLatch and at segment\n>> > boundaries.\n>>\n>> I think this kind of idea is worth exploring. Andres mentioned timers,\n>> but it seems to me that something where we just do it at certain\n>> \"convenient\" points might be good enough and simpler. 
I'd much rather\n>> have statistics that were up to date as of the last time we finished a\n>> segment than have nothing at all.\n>\n>\n> I created a rough prototype of a timer-based approach for comparison. Please see attached.\n>\n> Registered a new timeout named \"STARTUP_STAT_FLUSH_TIMEOUT\", The timeout's\n> handler sets a flag indicating that io stats need to be flushed.\n> HandleStartupProcInterrupts checks the flag to flush stats. It's enabled if\n> any WAL record is read (in the main loop of PerformWalRecovery). And it's\n> disabled before WaitLatch to avoid unnecessary wakeups if the startup process\n> decides to sleep.\n\nI was reviewing this and found that if I remove the calls to\ndisable_startup_stat_flush_timeout() the number of calls to\npgstat_report_wal() on a briefly active and then idle standby are about\n8 in 100 seconds whereas with timer disablement, the calls over the same\nperiod are about 40. I would have thought that disabling the timer would\nhave caused us to try and flush the stats less often.\n\nWith the calls to disable_startup_stat_flush_timeout(), we do have much\nfewer calls to setitimer(), of course.\n\nGiven the below suggestion by Andres, I tried doing some traces of the\nvarious approaches.\n\n> > + disable_startup_stat_flush_timeout();\n> > +\n> > (void) WaitLatch(&XLogRecoveryCtl->recoveryWakeupLatch,\n> > WL_LATCH_SET | WL_TIMEOUT |\n> > WL_EXIT_ON_PM_DEATH,\n>\n> I think always disabling the timer here isn't quite right - we want to wake up\n> *once* in WaitForWALToBecomeAvailable(), otherwise we'll not submit pending\n> stats before waiting - potentially for a long time - for WAL. One way would be\n> just explicitly report before the WaitLatch().\n>\n> Another approach, I think better, would be to not use enable_timeout_every(),\n> and to explicitly rearm the timer in HandleStartupProcInterrupts(). 
When\n> called from WaitForWALToBecomeAvailable(), we'd not rearm, and instead do so\n> at the end of WaitForWALToBecomeAvailable(). That way we also wouldn't\n> repeatedly fire between calls to HandleStartupProcInterrupts().\n\nAfter a quick example implementation of this, I found that it seemed to\ntry and flush the stats less often on an idle standby (good) than using\nenable_timeout_every().\n\n- Melanie\n\n\n", "msg_date": "Thu, 27 Apr 2023 17:30:40 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "At Thu, 27 Apr 2023 17:30:40 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in \r\n> On Wed, Apr 26, 2023 at 2:53 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\r\n> >\r\n> > Hi all,\r\n> >\r\n> > Robert Haas <robertmhaas@gmail.com>, 26 Nis 2023 Çar, 15:34 tarihinde şunu yazdı:\r\n> >>\r\n> >> On Wed, Apr 26, 2023 at 5:47 AM Kyotaro Horiguchi\r\n> >> <horikyota.ntt@gmail.com> wrote:\r\n> >> > 3. When should we call pgstat_report_stats on the startup process?\r\n> >> >\r\n> >> > During recovery, I think we can call pgstat_report_stats() (or a\r\n> >> > subset of it) right before invoking WaitLatch and at segment\r\n> >> > boundaries.\r\n> >>\r\n> >> I think this kind of idea is worth exploring. Andres mentioned timers,\r\n> >> but it seems to me that something where we just do it at certain\r\n> >> \"convenient\" points might be good enough and simpler. I'd much rather\r\n> >> have statistics that were up to date as of the last time we finished a\r\n> >> segment than have nothing at all.\r\n> >\r\n> >\r\n> > I created a rough prototype of a timer-based approach for comparison. Please see attached.\r\n> >\r\n> > Registered a new timeout named \"STARTUP_STAT_FLUSH_TIMEOUT\", The timeout's\r\n> > handler sets a flag indicating that io stats need to be flushed.\r\n> > HandleStartupProcInterrupts checks the flag to flush stats. 
It's enabled if\r\n> > any WAL record is read (in the main loop of PerformWalRecovery). And it's\r\n> > disabled before WaitLatch to avoid unnecessary wakeups if the startup process\r\n> > decides to sleep.\r\n> \r\n> I was reviewing this and found that if I remove the calls to\r\n> disable_startup_stat_flush_timeout() the number of calls to\r\n> pgstat_report_wal() on a briefly active and then idle standby are about\r\n> 8 in 100 seconds whereas with timer disablement, the calls over the same\r\n> period are about 40. I would have thought that disabling the timer would\r\n> have caused us to try and flush the stats less often.\r\n> \r\n> With the calls to disable_startup_stat_flush_timeout(), we do have much\r\n> fewer calls to setitimer(), of course.\r\n> \r\n> Given the below suggestion by Andres, I tried doing some traces of the\r\n> various approaches.\r\n> \r\n> > > + disable_startup_stat_flush_timeout();\r\n> > > +\r\n> > > (void) WaitLatch(&XLogRecoveryCtl->recoveryWakeupLatch,\r\n> > > WL_LATCH_SET | WL_TIMEOUT |\r\n> > > WL_EXIT_ON_PM_DEATH,\r\n> >\r\n> > I think always disabling the timer here isn't quite right - we want to wake up\r\n> > *once* in WaitForWALToBecomeAvailable(), otherwise we'll not submit pending\r\n> > stats before waiting - potentially for a long time - for WAL. One way would be\r\n> > just explicitly report before the WaitLatch().\r\n> >\r\n> > Another approach, I think better, would be to not use enable_timeout_every(),\r\n> > and to explicitly rearm the timer in HandleStartupProcInterrupts(). When\r\n> > called from WaitForWALToBecomeAvailable(), we'd not rearm, and instead do so\r\n> > at the end of WaitForWALToBecomeAvailable(). 
That way we also wouldn't\r\n> > repeatedly fire between calls to HandleStartupProcInterrupts().\r\n> \r\n> After a quick example implementation of this, I found that it seemed to\r\n> try and flush the stats less often on an idle standby (good) than using\r\n> enable_timeout_every().\r\n\r\nJust rearming with the full-interval will work that way. Our existing\r\nstrategy for this is seen in PostgresMain().\r\n\r\n stats_timeout = pgstat_report_stat(false);\r\n if (stats_timeout > 0)\r\n {\r\n if (!get_timeout_active(BLAH_TIMEOUT))\r\n\t enable_timeout_after(BLAH_TIMEOUT, stats_timeout);\r\n }\r\n else\r\n {\r\n if (get_timeout_active(BLAH_TIMEOUT))\r\n\t disable_timeout(BLAH_TIMEOUT, false);\r\n }\r\n WaitLatch();\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Fri, 28 Apr 2023 11:15:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "At Fri, 28 Apr 2023 11:15:51 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 27 Apr 2023 17:30:40 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in \n> > After a quick example implementation of this, I found that it seemed to\n> > try and flush the stats less often on an idle standby (good) than using\n> > enable_timeout_every().\n> \n> Just rearming with the full-interval will work that way. Our existing\n> strategy for this is seen in PostgresMain().\n> \n> stats_timeout = pgstat_report_stat(false);\n> if (stats_timeout > 0)\n> {\n> if (!get_timeout_active(BLAH_TIMEOUT))\n> \t enable_timeout_after(BLAH_TIMEOUT, stats_timeout);\n> }\n> else\n> {\n> if (get_timeout_active(BLAH_TIMEOUT))\n> \t disable_timeout(BLAH_TIMEOUT, false);\n> }\n> WaitLatch();\n\nIm my example, I left out idle-time flushing, but I realized we don't\nneed the timeout mechanism here since we're already managing it. 
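In fact the interval management that pgstat_report_stat() gives us reduces to bookkeeping like the following (a self-contained sketch with stand-in names and a stand-in constant, not the actual pgstat code):

```c
#include <assert.h>
#include <stdint.h>

#define PGSTAT_MIN_INTERVAL_MS 1000 /* stand-in for pgstat's minimum flush interval */

/*
 * If the interval has elapsed, pretend to flush (update the snapshot
 * time) and return 0; otherwise return the milliseconds left until the
 * next flush is due, mirroring how the return value of
 * pgstat_report_stat(false) is meant to be used as a wait timeout.
 */
static int64_t
stats_flush_due(int64_t *last_flush_ms, int64_t now_ms)
{
	int64_t		elapsed = now_ms - *last_flush_ms;

	if (elapsed >= PGSTAT_MIN_INTERVAL_MS)
	{
		*last_flush_ms = now_ms;
		return 0;				/* flushed; nothing pending to wait for */
	}
	return PGSTAT_MIN_INTERVAL_MS - elapsed;	/* sleep at most this long */
}
```

The caller then maps 0 to -1L ("sleep forever") before WaitLatch(), precisely because a completed flush leaves nothing pending.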
So\nthe following should work (assuming the timestamp updates with\nGetCurrentTimestamp() in my last patch).\n\n@@ -3889,13 +3900,23 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n \t\t\t\t\t/* Update pg_stat_recovery_prefetch before sleeping. */\n \t\t\t\t\tXLogPrefetcherComputeStats(xlogprefetcher);\n \n+\t\t\t\t\t/*\n+\t\t\t\t\t * Report stats; if not time yet, set next WaitLatch to\n+\t\t\t\t\t * wake up at the next reporing time.\n+\t\t\t\t\t */\n+\t\t\t\t\twait_time = pgstat_report_stat(false);\n+\n+\t\t\t\t\t/* if no pending stats, sleep forever */\n+\t\t\t\t\tif (wait_time == 0)\n+\t\t\t\t\t\twait_time = -1L;\n+\n \t\t\t\t\t/*\n \t\t\t\t\t * Wait for more WAL to arrive, when we will be woken\n \t\t\t\t\t * immediately by the WAL receiver.\n \t\t\t\t\t */\n \t\t\t\t\t(void) WaitLatch(&XLogRecoveryCtl->recoveryWakeupLatch,\n \t\t\t\t\t\t\t\t\t WL_LATCH_SET | WL_EXIT_ON_PM_DEATH,\n-\t\t\t\t\t\t\t\t\t -1L,\n+\t\t\t\t\t\t\t\t\t wait_time,\n \t\t\t\t\t\t\t\t\t WAIT_EVENT_RECOVERY_WAL_STREAM);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 28 Apr 2023 11:43:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "Hi,\n\nAndres Freund <andres@anarazel.de>, 27 Nis 2023 Per, 19:27 tarihinde şunu\nyazdı:\n\n> Huh - do you have a link to the failure? That's how I would expect it to be\n> done.\n\n\nI think I should have registered it in the beginning of\nPerformWalRecovery() and not just before the main redo loop\nsince HandleStartupProcInterrupts is called before the loop too.\nHere's the error log otherise [1]\n\n> #ifdef WAL_DEBUG\n> > if (XLOG_DEBUG ||\n> > (record->xl_rmid == RM_XACT_ID &&\n> trace_recovery_messages <= DEBUG2) ||\n> > @@ -3617,6 +3621,13 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr,\n> bool randAccess,\n> > /* Do background tasks\n> that might benefit us later. 
*/\n> >\n> KnownAssignedTransactionIdsIdleMaintenance();\n> >\n> > + /*\n> > + * Need to disable flush\n> timeout to avoid unnecessary\n> > + * wakeups. Enable it\n> again after a WAL record is read\n> > + * in PerformWalRecovery.\n> > + */\n> > +\n> disable_startup_stat_flush_timeout();\n> > +\n> > (void)\n> WaitLatch(&XLogRecoveryCtl->recoveryWakeupLatch,\n> >\n> WL_LATCH_SET | WL_TIMEOUT |\n> >\n> WL_EXIT_ON_PM_DEATH,\n>\n> I think always disabling the timer here isn't quite right - we want to\n> wake up\n> *once* in WaitForWALToBecomeAvailable(), otherwise we'll not submit pending\n> stats before waiting - potentially for a long time - for WAL. One way\n> would be\n> just explicitly report before the WaitLatch().\n>\n> Another approach, I think better, would be to not use\n> enable_timeout_every(),\n> and to explicitly rearm the timer in HandleStartupProcInterrupts(). When\n> called from WaitForWALToBecomeAvailable(), we'd not rearm, and instead do\n> so\n> at the end of WaitForWALToBecomeAvailable(). That way we also wouldn't\n> repeatedly fire between calls to HandleStartupProcInterrupts().\n>\n\nAttached patch is probably not doing what you asked. IIUC\nHandleStartupProcInterrupts should arm the timer but also shouldn't arm it\nif it's called from WaitForWALToBecomeAvailable. But the timer should be\narmed again at the very end of WaitForWALToBecomeAvailable. I've been\nthinking about how to do this properly. Should HandleStartupProcInterrupts\ntake a parameter to decide whether the timer needs to be armed? Or need to\nadd an additional global flag to rearm the timer? 
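For illustration, here is a toy model of the first option. Everything below uses hypothetical names and stand-in state, not the real timeout or pgstat calls:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the timeout machinery and pending-stats state. */
static bool timer_armed = false;
static bool flush_pending = false;
static int	flush_count = 0;

/* What the timeout handler would do: note the work, one-shot timer. */
static void
toy_timeout_handler(void)
{
	flush_pending = true;
	timer_armed = false;
}

/* HandleStartupProcInterrupts() with a rearm parameter. */
static void
toy_handle_startup_interrupts(bool rearm)
{
	if (flush_pending)
	{
		flush_count++;			/* stands in for flushing I/O stats */
		flush_pending = false;
	}
	if (rearm && !timer_armed)
		timer_armed = true;		/* stands in for enable_timeout_after() */
}
```

In that model, WaitForWALToBecomeAvailable() would pass false and rearm once itself on exit, so the timer does not keep firing while we wait for WAL.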
Any thoughts?\n\n[1]\nhttps://api.cirrus-ci.com/v1/artifact/task/5282291971260416/testrun/build/testrun/recovery/010_logical_decoding_timelines/log/010_logical_decoding_timelines_replica.log\n\nBest,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 3 May 2023 16:11:33 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "I've reviewed both your patch versions and responded to the ideas in\nboth of them and the associated emails below.\n\nOn Wed, Apr 26, 2023 at 5:47 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 25 Apr 2023 16:04:23 -0700, Andres Freund <andres@anarazel.de> wrote in\n> > key those stats by oid. However, it *does* report the read/write time. But\n> > only at process exit, of course. The weird part is that the startup process\n> > does *NOT* increase pg_stat_database.blks_read/blks_hit, because instead of\n> > basing those on pgBufferUsage.shared_blks_read etc, we compute them based on\n> > the relation level stats. pgBufferUsage is just used for EXPLAIN. This isn't\n> > recent, afaict.\n>\n> I see four issues here.\n>\n> 1. The current database stats omits buffer fetches that don't\n> originate from a relation.\n>\n> In this case pgstat_relation can't work since recovery isn't conscious\n> of relids. We might be able to resolve relfilenode into a relid, but\n> it may not be that simple. Fortunately we already count fetches and\n> hits process-wide using pgBufferUsage, so we can use this for database\n> stats.\n...\n> > TL;DR: Currently the startup process maintains blk_read_time, blk_write_time,\n> > but doesn't maintain blks_read, blks_hit - which doesn't make sense.\n>\n> As a result, the attached patch, which is meant for discussion, allows\n> pg_stat_database to show fetches and reads by the startup process as\n> the counts for the database with id 0.\n\nI would put this in its own patch in a patchset. 
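For reference, the pgBufferUsage-based accounting that approach leans on is just delta tracking against the process-wide totals, sketched below with an illustrative struct (not the real pgBufferUsage):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative subset of process-wide buffer counters. */
typedef struct ToyBufUsage
{
	int64_t		shared_blks_hit;
	int64_t		shared_blks_read;
} ToyBufUsage;

/*
 * Return the growth since the previous flush and remember the new
 * snapshot, so each flush reports only new activity.
 */
static ToyBufUsage
usage_since_last_flush(ToyBufUsage *prev, ToyBufUsage now)
{
	ToyBufUsage delta;

	delta.shared_blks_hit = now.shared_blks_hit - prev->shared_blks_hit;
	delta.shared_blks_read = now.shared_blks_read - prev->shared_blks_read;
	*prev = now;
	return delta;
}
```

Counted this way, reads and hits land in the view directly instead of being derived by subtraction.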
Of course it relies on\nhaving pgstat_report_stat() called at appropriate times by the startup\nprocess, but having pg_stat_database show read/hit counts is a separate\nissue than having pg_stat_io do so. I'm not suggesting we do this, but\nyou could argue that if we fix the startup process stats reporting that\npg_stat_database not showing reads and hits for the startup process is a\nbug that also exists in 15.\n\nNot directly related, but I do not get why the existing stats counters\nfor pg_stat_database count \"fetches\" and \"hits\" and then use subtraction\nto get the number of reads. I find it confusing and seems like it could\nlead to subtle inconsistencies with those counters counting reads closer\nto where they are actually happening.\n\n> 2. Even if we wanted to report stats for the startup process,\n> pgstat_report_stats wouldn't permit it since transaction-end\n> timestamp doesn't advance.\n>\n> I'm not certain if it's the correct approach, but perhaps we could use\n> GetCurrentTimestamp() instead of GetCurrentTransactionStopTimestamp()\n> specifically for the startup process.\n\nIn theory, since all of the approaches proposed in this thread would\nexercise rigid control over how often we flush stats in the startup\nprocess, I think it is okay to use GetCurrentTimestamp() when\npgstat_report_stat() is called by the startup process (i.e. we don't\nhave to worry about overhead of doing it). But looking at it implemented\nin the patch made me feel unsettled for some reason.\n\n> 3. When should we call pgstat_report_stats on the startup process?\n>\n> During recovery, I think we can call pgstat_report_stats() (or a\n> subset of it) right before invoking WaitLatch and at segment\n> boundaries.\n\nI see in the patch you call pgstat_report_stat() in XLogPageRead(). Will\nthis only be called on segment boundaries?\n\n> 4. 
In the existing ReadBuffer_common, there's an inconsistency in\n> counting hits and reads between pgstat_io and pgBufferUsage.\n>\n> The difference comes from the case of RBM_ZERO pages. We should simply\n> align them.\n\nI would definitely make this a separate patch and probably a separate\nthread. It isn't related to the startup process and is worth a separate\ndiscussion.\n\nOn Thu, Apr 27, 2023 at 10:43 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 28 Apr 2023 11:15:51 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > At Thu, 27 Apr 2023 17:30:40 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in\n> > > After a quick example implementation of this, I found that it seemed to\n> > > try and flush the stats less often on an idle standby (good) than using\n> > > enable_timeout_every().\n> >\n> > Just rearming with the full-interval will work that way. Our existing\n> > strategy for this is seen in PostgresMain().\n> >\n> > stats_timeout = pgstat_report_stat(false);\n> > if (stats_timeout > 0)\n> > {\n> > if (!get_timeout_active(BLAH_TIMEOUT))\n> > enable_timeout_after(BLAH_TIMEOUT, stats_timeout);\n> > }\n> > else\n> > {\n> > if (get_timeout_active(BLAH_TIMEOUT))\n> > disable_timeout(BLAH_TIMEOUT, false);\n> > }\n> > WaitLatch();\n>\n> Im my example, I left out idle-time flushing, but I realized we don't\n> need the timeout mechanism here since we're already managing it. So\n> the following should work (assuming the timestamp updates with\n> GetCurrentTimestamp() in my last patch).\n>\n> @@ -3889,13 +3900,23 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n> /* Update pg_stat_recovery_prefetch before sleeping. 
*/\n> XLogPrefetcherComputeStats(xlogprefetcher);\n>\n> + /*\n> + * Report stats; if not time yet, set next WaitLatch to\n> + * wake up at the next reporing time.\n> + */\n> + wait_time = pgstat_report_stat(false);\n> +\n> + /* if no pending stats, sleep forever */\n> + if (wait_time == 0)\n> + wait_time = -1L;\n> +\n> /*\n> * Wait for more WAL to arrive, when we will be woken\n> * immediately by the WAL receiver.\n> */\n> (void) WaitLatch(&XLogRecoveryCtl->recoveryWakeupLatch,\n> WL_LATCH_SET | WL_EXIT_ON_PM_DEATH,\n> - -1L,\n> + wait_time,\n> WAIT_EVENT_RECOVERY_WAL_STREAM);\n\nThe idle-time flushing is a great point I did not think of. I think\nAndres did have some concern with unconditionally calling\npgstat_report_stat() in WaitForWalToBecomeAvailable() before WaitLatch()\n-- I believe because it would be called too often and attempt flushing\nmultiple times between HandleStartupProcInterrupts().\n\n- Melanie\n\n\n", "msg_date": "Mon, 8 May 2023 15:54:32 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "On Wed, May 03, 2023 at 04:11:33PM +0300, Melih Mutlu wrote:\n> Andres Freund <andres@anarazel.de>, 27 Nis 2023 Per, 19:27 tarihinde şunu yazdı:\n> > #ifdef WAL_DEBUG\n> > > if (XLOG_DEBUG ||\n> > > (record->xl_rmid == RM_XACT_ID &&\n> > trace_recovery_messages <= DEBUG2) ||\n> > > @@ -3617,6 +3621,13 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr,\n> > bool randAccess,\n> > > /* Do background tasks\n> > that might benefit us later. */\n> > >\n> > KnownAssignedTransactionIdsIdleMaintenance();\n> > >\n> > > + /*\n> > > + * Need to disable flush\n> > timeout to avoid unnecessary\n> > > + * wakeups. 
Enable it\n> > again after a WAL record is read\n> > > + * in PerformWalRecovery.\n> > > + */\n> > > +\n> > disable_startup_stat_flush_timeout();\n> > > +\n> > > (void)\n> > WaitLatch(&XLogRecoveryCtl->recoveryWakeupLatch,\n> > >\n> > WL_LATCH_SET | WL_TIMEOUT |\n> > >\n> > WL_EXIT_ON_PM_DEATH,\n> >\n> > I think always disabling the timer here isn't quite right - we want to\n> > wake up\n> > *once* in WaitForWALToBecomeAvailable(), otherwise we'll not submit pending\n> > stats before waiting - potentially for a long time - for WAL. One way\n> > would be\n> > just explicitly report before the WaitLatch().\n> >\n> > Another approach, I think better, would be to not use\n> > enable_timeout_every(),\n> > and to explicitly rearm the timer in HandleStartupProcInterrupts(). When\n> > called from WaitForWALToBecomeAvailable(), we'd not rearm, and instead do\n> > so\n> > at the end of WaitForWALToBecomeAvailable(). That way we also wouldn't\n> > repeatedly fire between calls to HandleStartupProcInterrupts().\n> >\n> \n> Attached patch is probably not doing what you asked. IIUC\n> HandleStartupProcInterrupts should arm the timer but also shouldn't arm it\n> if it's called from WaitForWALToBecomeAvailable. But the timer should be\n> armed again at the very end of WaitForWALToBecomeAvailable. I've been\n> thinking about how to do this properly. Should HandleStartupProcInterrupts\n> take a parameter to decide whether the timer needs to be armed? Or need to\n> add an additional global flag to rearm the timer? 
Any thoughts?\n\nI had the same question about how to determine whether or not to rearm.\n\n> From 9be7360e49db424c45c53e85efe8a4f5359b5244 Mon Sep 17 00:00:00 2001\n> From: Melih Mutlu <m.melihmutlu@gmail.com>\n> Date: Wed, 26 Apr 2023 18:21:32 +0300\n> Subject: [PATCH v2] Add timeout to flush stats during startup's main replay\n> loop\n> diff --git a/src/backend/postmaster/startup.c b/src/backend/postmaster/startup.c\n> index efc2580536..842394bc8f 100644\n> --- a/src/backend/postmaster/startup.c\n> +++ b/src/backend/postmaster/startup.c\n> @@ -72,6 +72,9 @@ static TimestampTz startup_progress_phase_start_time;\n> */\n> static volatile sig_atomic_t startup_progress_timer_expired = false;\n> \n> +/* Indicates whether flushing stats is needed. */\n> +static volatile sig_atomic_t idle_stats_update_pending = false;\n> +\n> /*\n> * Time between progress updates for long-running startup operations.\n> */\n> @@ -206,6 +209,18 @@ HandleStartupProcInterrupts(void)\n> \t/* Perform logging of memory contexts of this process */\n> \tif (LogMemoryContextPending)\n> \t\tProcessLogMemoryContextInterrupt();\n> +\n> +\tif (idle_stats_update_pending)\n> +\t{\n> +\t\t/* It's time to report wal stats. */\n\nIf we want dbstats to be updated, we'll probably have to call\npgstat_report_stat() here and fix the timestamp issue Horiguchi-san\npoints out upthread. Perhaps you could review those changes and consider\nadding those as preliminary patches before this in a set.\n\nI think you will then need to handle the issue he mentions with partial\nflushes coming from pgstat_report_stat() and remembering to try and\nflush stats again in case of a partial flush. Though, perhaps we can\njust pass force=true.\n\n> +\t\tpgstat_report_wal(true);\n> +\t\tidle_stats_update_pending = false;\n> +\t}\n\nGood that you thought to check if the timeout was already active.\n\n> +\telse if (!get_timeout_active(IDLE_STATS_UPDATE_TIMEOUT))\n> +\t{\n> +\t\t/* Set the next timeout. 
*/\n> +\t\tenable_idle_stats_update_timeout();\n> +\t}\n> }\n> \n> \n> @@ -385,3 +400,22 @@ has_startup_progress_timeout_expired(long *secs, int *usecs)\n> \n> \treturn true;\n> }\n> +\n> +/* Set a flag indicating that it's time to flush wal stats. */\n> +void\n> +idle_stats_update_timeout_handler(void)\n> +{\n> +\tidle_stats_update_pending = true;\n\nIs WakeupRecovery() needed when the timer goes off and the startup\nprocess is waiting on a latch? Does this happen in\nWaitForWalToBecomeAvailable()?\n\n> +\tWakeupRecovery();\n> +}\n> +\n> +/* Enable the timeout set for wal stat flush. */\n> +void\n> +enable_idle_stats_update_timeout(void)\n> +{\n> +\tTimestampTz fin_time;\n> +\n> +\tfin_time = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),\n> +\t\t\t\t\t\t\t\t\t\t PGSTAT_MIN_INTERVAL);\n\nIt is a shame we have to end up calling GetCurrentTimestamp() since we\nare using enable_timeout_at(). Couldn't we use enable_timeout_after()?\n\n> +\tenable_timeout_at(IDLE_STATS_UPDATE_TIMEOUT, fin_time);\n> +}\n\n- Melanie\n\n\n", "msg_date": "Mon, 8 May 2023 17:06:38 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "Hi,\n\nOn 2023-04-26 18:47:14 +0900, Kyotaro Horiguchi wrote:\n> I see four issues here.\n> \n> 1. The current database stats omits buffer fetches that don't\n> originate from a relation.\n> \n> In this case pgstat_relation can't work since recovery isn't conscious\n> of relids. We might be able to resolve relfilenode into a relid, but\n> it may not be that simple. Fortunately we already count fetches and\n> hits process-wide using pgBufferUsage, so we can use this for database\n> stats.\n\nI don't think we need to do anything about that for 16 - they aren't updated\nat process end either.\n\nI think the fix here is to do the architectural change of maintaining most\nstats keyed by relfilenode as we've discussed in some other threads. 
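Keying by physical identity would mean a key along these lines; the typedefs are illustrative stand-ins, not the real RelFileLocator:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins; PostgreSQL's own types live in its headers. */
typedef uint32_t ToyOid;
typedef uint32_t ToyRelFileNumber;

/*
 * Recovery always knows the physical identity of the buffer it
 * touches, even when it cannot cheaply resolve a relation OID.
 */
typedef struct ToyRelFileKey
{
	ToyOid		dboid;			/* database */
	ToyOid		spcoid;			/* tablespace */
	ToyRelFileNumber relnumber; /* on-disk file number */
} ToyRelFileKey;

static bool
toy_relfilekey_eq(ToyRelFileKey a, ToyRelFileKey b)
{
	return a.dboid == b.dboid &&
		a.spcoid == b.spcoid &&
		a.relnumber == b.relnumber;
}
```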
Then we\nalso can have relation level write stats etc.\n\n\n> 2. Even if we wanted to report stats for the startup process,\n> pgstat_report_stats wouldn't permit it since transaction-end\n> timestamp doesn't advance.\n> \n> I'm not certain if it's the correct approach, but perhaps we could use\n> GetCurrentTimestamp() instead of GetCurrentTransactionStopTimestamp()\n> specifically for the startup process.\n\nWhat about using GetCurrentTimestamp() when force == true? That'd make sense\nfor other users as well, I think?\n\n\n> 3. When should we call pgstat_report_stats on the startup process?\n> \n> During recovery, I think we can call pgstat_report_stats() (or a\n> subset of it) right before invoking WaitLatch and at segment\n> boundaries.\n\nI've pondered that as well. But I don't think it's great - it's not exactly\nintuitive that stats reporting gets far less common if you use a 1GB\nwal_segment_size.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 May 2023 14:46:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "Thanks for clarifying!\n\nAt Mon, 8 May 2023 14:46:43 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2023-04-26 18:47:14 +0900, Kyotaro Horiguchi wrote:\n> > I see four issues here.\n> > \n> > 1. The current database stats omits buffer fetches that don't\n> > originate from a relation.\n> > \n> > In this case pgstat_relation can't work since recovery isn't conscious\n> > of relids. We might be able to resolve relfilenode into a relid, but\n> > it may not be that simple. 
Fortunately we already count fetches and\n> > hits process-wide using pgBufferUsage, so we can use this for database\n> > stats.\n> \n> I don't think we need to do anything about that for 16 - they aren't updated\n> at process end either.\n> \n> I think the fix here is to do the architectural change of maintaining most\n> stats keyed by relfilenode as we've discussed in some other threads. Then we\n> also can have relation level write stats etc.\n\nI think so.\n\n> > 2. Even if we wanted to report stats for the startup process,\n> > pgstat_report_stats wouldn't permit it since transaction-end\n> > timestamp doesn't advance.\n> > \n> > I'm not certain if it's the correct approach, but perhaps we could use\n> > GetCurrentTimestamp() instead of GetCurrentTransactionStopTimestamp()\n> > specifically for the startup process.\n> \n> What about using GetCurrentTimestamp() when force == true? That'd make sense\n> for other users as well, I think?\n\nI'm not sure if I got you right, but when force==true, it allows\npgstat_report_stats to flush without considering whether the interval\nhas elapsed. In that case, there's no need to keep track of the last\nflush time and the caller should handle the interval instead.\n\n> > 3. When should we call pgstat_report_stats on the startup process?\n> > \n> > During recovery, I think we can call pgstat_report_stats() (or a\n> > subset of it) right before invoking WaitLatch and at segment\n> > boundaries.\n> \n> I've pondered that as well. But I don't think it's great - it's not exactly\n> intuitive that stats reporting gets far less common if you use a 1GB\n> wal_segment_size.\n\nIf the segment size gets larger, the archive intervals become\nlonger. So, I have a vague feeling that users wouldn't go for such a\nlarge segment size. 
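To put rough numbers on that concern (the WAL rates here are hypothetical; this is only arithmetic):

```c
#include <assert.h>
#include <stdint.h>

/* How long segment-boundary-only flushing can leave stats stale. */
static int64_t
secs_between_segment_boundaries(int64_t segment_size_bytes,
								int64_t wal_bytes_per_sec)
{
	return segment_size_bytes / wal_bytes_per_sec;
}
```

At 1MB/s of WAL, 16MB segments give a flush every 16 seconds, while 1GB segments stretch the gap past a quarter of an hour.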
I don't have a clear idea about the ideal length\nfor stats reporting intervals in this case, but I think every few\nminutes or so would not be so bad for the startup process to report\nstats when recovery gets busy. Also, I think recovery will often wait\nfor new data once it catches up to the primary.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 12 May 2023 14:32:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "Hi,\n\nI don't really feel confident we're going to get the timeout approach solid\nenough for something going into the tree around beta 1.\n\nHow about this, imo a lot simpler, approach: We flush stats whenever replaying\na XLOG_RUNNING_XACTS record. Unless the primary is idle, it will log those at\na regular interval. If the primary is idle, we don't need to flush stats in\nthe startup process, because we'll not have done any io.\n\nWe only log XLOG_RUNNING_XACTS when wal_level >= replica, so stats wouldn't\nget regularly flushed if wal_level = minimal - but in that case the stats are\nalso not accessible, so that's not a problem.\n\nIt's not the prettiest solution, but I think the simplicity is worth a lot.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 May 2023 15:14:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "At Sun, 21 May 2023 15:14:23 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> I don't really feel confident we're going to get the timeout approach solid\n> enough for something going into the tree around beta 1.\n> \n> How about this, imo a lot simpler, approach: We flush stats whenever replaying\n> a XLOG_RUNNING_XACTS record. Unless the primary is idle, it will log those at\n> a regular interval. 
If the primary is idle, we don't need to flush stats in\n> the startup process, because we'll not have done any io.\n> \n> We only log XLOG_RUNNING_XACTS when wal_level >= replica, so stats wouldn't\n> get regularly flushed if wal_level = minimal - but in that case the stats are\n> also not accessible, so that's not a problem.\n\nI concur with the three aspects, interval regularity, reliability and\nabout the case of the minimal wal_level. While I can't confidently\nclaim it is the best, its simplicity gives us a solid reason to\nproceed with it. If necessary, we can switch to alternative timing\nsources in the future without causing major disruptions for users, I\nbelieve.\n\n> It's not the prettiest solution, but I think the simplicity is worth a lot.\n> \n> Greetings,\n\n+1.\n\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 22 May 2023 13:36:42 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "Hi,\n\nOn 2023-05-22 13:36:42 +0900, Kyotaro Horiguchi wrote:\n> At Sun, 21 May 2023 15:14:23 -0700, Andres Freund <andres@anarazel.de> wrote in \n> > Hi,\n> > \n> > I don't really feel confident we're going to get the timeout approach solid\n> > enough for something going into the tree around beta 1.\n> > \n> > How about this, imo a lot simpler, approach: We flush stats whenever replaying\n> > a XLOG_RUNNING_XACTS record. Unless the primary is idle, it will log those at\n> > a regular interval.
If the primary is idle, we don't need to flush stats in\n> > the startup process, because we'll not have done any io.\n> > \n> > We only log XLOG_RUNNING_XACTS when wal_level >= replica, so stats wouldn't\n> > get regularly flushed if wal_level = minimal - but in that case the stats are\n> > also not accessible, so that's not a problem.\n> \n> I concur with the three aspects, interval regularity, reliability and\n> about the case of the minimal wal_level. While I can't confidently\n> claim it is the best, its simplicilty gives us a solid reason to\n> proceed with it. If necessary, we can switch to alternative timing\n> sources in the future without causing major disruptions for users, I\n> believe.\n> \n> > It's not the prettiest solution, but I think the simplicity is worth a lot.\n> > \n> > Greetings,\n> \n> +1.\n\nAttached a patch implementing this approach.\n\nIt's imo always a separate bug that we were using\nGetCurrentTransactionStopTimestamp() when force was passed in - that timestamp\ncan be quite out of date in some cases. But I don't immediately see a\nnoticeable consequence, so ...\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 9 Jun 2023 21:28:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "At Fri, 9 Jun 2023 21:28:23 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2023-05-22 13:36:42 +0900, Kyotaro Horiguchi wrote:\n> > At Sun, 21 May 2023 15:14:23 -0700, Andres Freund <andres@anarazel.de> wrote in \n> > > Hi,\n> > > \n> > > I don't really feel confident we're going to get the timeout approach solid\n> > > enough for something going into the tree around beta 1.\n> > > \n> > > How about this, imo a lot simpler, approach: We flush stats whenever replaying\n> > > a XLOG_RUNNING_XACTS record. Unless the primary is idle, it will log those at\n> > > a regular interval. 
If the primary is idle, we don't need to flush stats in\n> > > the startup process, because we'll not have done any io.\n> > > \n> > > We only log XLOG_RUNNING_XACTS when wal_level >= replica, so stats wouldn't\n> > > get regularly flushed if wal_level = minimal - but in that case the stats are\n> > > also not accessible, so that's not a problem.\n> > \n> > I concur with the three aspects, interval regularity, reliability and\n> > about the case of the minimal wal_level. While I can't confidently\n> > claim it is the best, its simplicilty gives us a solid reason to\n> > proceed with it. If necessary, we can switch to alternative timing\n> > sources in the future without causing major disruptions for users, I\n> > believe.\n> > \n> > > It's not the prettiest solution, but I think the simplicity is worth a lot.\n> > > \n> > > Greetings,\n> > \n> > +1.\n> \n> Attached a patch implementing this approach.\n> \n> It's imo always a separate bug that we were using\n> GetCurrentTransactionStopTimestamp() when force was passed in - that timestamp\n> can be quite out of date in some cases. But I don't immediately see a\n> noticeable consequence, so ...\n\nThanks!\n\nI think that the reason is that we only pass true only when we're in a\nsort of idle time. Addition to that XLOG_RUNNING_XACTS comes so\ninfrequently to cause noticeable degradation. If it causes\nsufficiently frequent updates, we will be satisfied with it.\n\nThe patch is quite simple and looks good to me. (At least way better\nthan always using GetCurrentTimestamp():)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 13 Jun 2023 12:54:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "Hi,\n\nOn 2023-06-13 12:54:19 +0900, Kyotaro Horiguchi wrote:\n> I think that the reason is that we only pass true only when we're in a\n> sort of idle time. 
Addition to that XLOG_RUNNING_XACTS comes so\n> infrequently to cause noticeable degradation. If it causes\n> sufficiently frequent updates, we will be satisfied with it.\n> \n> The patch is quite simple and looks good to me. (At least way better\n> than always using GetCurrentTimestamp():)\n\nThanks for looking - I already had pushed the commit by the time I read your\nemail, otherwise I'd have mentioned you reviewing it...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 12 Jun 2023 21:13:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" }, { "msg_contents": "At Mon, 12 Jun 2023 21:13:35 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Thanks for looking - I already had pushed the commit by the time I read your\n> email, otherwise I'd have mentioned you reviewing it...\n\nOops! But I anticipated that and that's no problem (I should have\nconfirmed commits.). Thanks for the reply!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 13 Jun 2023 13:28:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_io for the startup process" } ]
[ { "msg_contents": "`local_traverse_files` and `libpq_traverse_files` both have a\ncallback parameter but instead use the global process_source_file\nwhich is no good for function encapsulation.\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Wed, 26 Apr 2023 09:51:13 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "[pg_rewind] use the passing callback instead of global function" }, { "msg_contents": "On Wed, Apr 26, 2023 at 9:51 AM Junwang Zhao <zhjwpku@gmail.com> wrote:\n\n> `local_traverse_files` and `libpq_traverse_files` both have a\n> callback parameter but instead use the global process_source_file\n> which is no good for function encapsulation.\n\n\nNice catch. This should be a typo introduced by 37d2ff38.\n\nWhile this patch is doing it correctly, I'm wondering that since both\nkinds of source server (libpq and local) are using the same function\n(i.e. process_source_file) to process source file list for\ntraverse_files operations, do we really need to provide a callback? Or\nwill there be some kind of source server that may use different source\nfile processing function?\n\nThanks\nRichard\n\n
", "msg_date": "Wed, 26 Apr 2023 16:33:39 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [pg_rewind] use the passing callback instead of global function" }, { "msg_contents": "> On 26 Apr 2023, at 10:33, Richard Guo <guofenglinux@gmail.com> wrote:\n> \n> On Wed, Apr 26, 2023 at 9:51 AM Junwang Zhao <zhjwpku@gmail.com <mailto:zhjwpku@gmail.com>> wrote:\n> `local_traverse_files` and `libpq_traverse_files` both have a\n> callback parameter but instead use the global process_source_file\n> which is no good for function encapsulation.\n> \n> Nice catch. This should be a typo introduced by 37d2ff38.\n\nAgreed, I'll look at applying this after some testing.\n\n> While this patch is doing it correctly, I'm wondering that since both\n> kinds of source server (libpq and local) are using the same function\n> (i.e. process_source_file) to process source file list for\n> traverse_files operations, do we really need to provide a callback? Or\n> will there be some kind of source server that may use different source\n> file processing function?\n\n\nWhile there isn't one right now, removing the callback seems like imposing a\nrestriction that the refactoring in 37d2ff38 aimed to avoid.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 26 Apr 2023 10:48:16 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [pg_rewind] use the passing callback instead of global function" } ]
[ { "msg_contents": "Hello,\n\npg_dumpall.c has a function dumpRoleMembership() which dumps all\nmembership roles. This function includes a piece of code which checks\nif the membership tree has an open end which can't be resolved.\nHowever that code is never used.\n\nThe variable prev_remaining is initially set to 0, and then never changed.\nWhich in turn never executes this code:\n\nif (remaining == prev_remaining)\n\nbecause the loop is only entered while \"remaining > 0\".\n\nThe attached patch fixes this problem, and updates prev_remaining inside\nthe loop.\n\n\nCo-Author: Artur Zakirov <zaartur@gmail.com>\nwho reviewed the patch.\n\n\nRegards\n\n-- \n\t\t\t\tAndreas 'ads' Scherbaum\nGerman PostgreSQL User Group\nEuropean PostgreSQL User Group - Board of Directors\nVolunteer Regional Contact, Germany - PostgreSQL Project", "msg_date": "Wed, 26 Apr 2023 12:18:16 +0200", "msg_from": "Andreas 'ads' Scherbaum <ads@pgug.de>", "msg_from_op": true, "msg_subject": "Find dangling membership roles in pg_dumpall" }, { "msg_contents": "> On 26 Apr 2023, at 12:18, Andreas 'ads' Scherbaum <ads@pgug.de> wrote:\n> \n> \n> Hello,\n> \n> pg_dumpall.c has a function dumpRoleMembership() which dumps all\n> membership roles. This function includes a piece of code which checks\n> if the membership tree has an open end which can't be resolved.\n> However that code is never used.\n> \n> The variable prev_remaining is initially set to 0, and then never changed.\n> Which in turn never executes this code:\n> \n> if (remaining == prev_remaining)\n> \n> because the loop is only entered while \"remaining > 0\".\n> \n> The attached patch fixes this problem, and updates prev_remaining inside\n> the loop.\n\nNice catch, that indeed seems like a proper fix.
This was introduced in\nce6b672e44 and so doesn't need a backpatch.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 26 Apr 2023 13:02:26 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Find dangling membership roles in pg_dumpall" }, { "msg_contents": "> On 26 Apr 2023, at 13:02, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 26 Apr 2023, at 12:18, Andreas 'ads' Scherbaum <ads@pgug.de> wrote:\n\n>> The attached patch fixes this problem, and updates prev_remaining inside\n>> the loop.\n> \n> Nice catch, that indeed seems like a proper fix. This was introduced in\n> ce6b672e44 and so doesn't need a backpatch.\n\nPushed, but without the comment extension as I felt the existing comment aptly\ndiscussed the codepath. Thanks!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 26 Apr 2023 14:29:00 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Find dangling membership roles in pg_dumpall" }, { "msg_contents": "On Wed, Apr 26, 2023 at 8:29 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> Pushed, but without the comment extension as I felt the existing comment aptly\n> discussed the codepath. Thanks!\n\nWoopsie. Thanks to both of you.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 Apr 2023 08:35:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Find dangling membership roles in pg_dumpall" } ]
[ { "msg_contents": "Hi hackers,\r\n\r\nIn 035_standby_logical_decoding.pl, I think that the check in the following test\r\ncase should be performed on the standby node, instead of the primary node, as\r\nthe slot is created on the standby node. The test currently passes because it\r\nonly checks the return value of psql. It might be more appropriate to check the\r\nerror message. Please see the attached patch.\r\n\r\n```\r\n$node_primary->safe_psql('postgres', 'CREATE DATABASE otherdb');\r\n\r\nis( $node_primary->psql(\r\n 'otherdb',\r\n \"SELECT lsn FROM pg_logical_slot_peek_changes('behaves_ok_activeslot', NULL, NULL) ORDER BY lsn DESC LIMIT 1;\"\r\n ),\r\n 3,\r\n 'replaying logical slot from another database fails');\r\n```\r\n\r\nThe regress log:\r\npsql:<stdin>:1: ERROR: replication slot \"behaves_ok_activeslot\" does not exist\r\n[11:23:21.859](0.086s) ok 8 - replaying logical slot from another database fails\r\n\r\nRegards,\r\nShi Yu", "msg_date": "Thu, 27 Apr 2023 08:11:50 +0000", "msg_from": "\"Yu Shi (Fujitsu)\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix a test case in 035_standby_logical_decoding.pl" }, { "msg_contents": "Hi,\n\nOn 4/27/23 10:11 AM, Yu Shi (Fujitsu) wrote:\n> Hi hackers,\n> \n> In 035_standby_logical_decoding.pl, I think that the check in the following test\n> case should be performed on the standby node, instead of the primary node, as\n> the slot is created on the standby node.\n\nOh right, the current test is not done on the right node, thanks!\n\n> The test currently passes because it\n> only checks the return value of psql. It might be more appropriate to check the\n> error message. 
\n\nAgree, and it's consistent with what is being done in 006_logical_decoding.pl.\n\n> Please see the attached patch.\n> \n\n+\n+($result, $stdout, $stderr) = $node_standby->psql(\n 'otherdb',\n \"SELECT lsn FROM pg_logical_slot_peek_changes('behaves_ok_activeslot', NULL, NULL) ORDER BY lsn DESC LIMIT 1;\"\n- ),\n- 3,\n- 'replaying logical slot from another database fails');\n+ );\n+ok( $stderr =~\n+ m/replication slot \"behaves_ok_activeslot\" was not created in this database/,\n+ \"replaying logical slot from another database fails\");\n\n\nThat does look good to me.\n\nNit: I wonder if while at it (as it was already there) we could not remove the \" ORDER BY lsn DESC LIMIT 1\" part of it.\nIt does not change anything regarding the test but looks more appropriate to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 27 Apr 2023 10:46:34 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix a test case in 035_standby_logical_decoding.pl" }, { "msg_contents": "On Thu, Apr 27, 2023 4:47 PM Drouvot, Bertrand <bertranddrouvot.pg@gmail.com> wrote:\r\n> \r\n> Hi,\r\n> \r\n> On 4/27/23 10:11 AM, Yu Shi (Fujitsu) wrote:\r\n> > Hi hackers,\r\n> >\r\n> > In 035_standby_logical_decoding.pl, I think that the check in the following test\r\n> > case should be performed on the standby node, instead of the primary node,\r\n> as\r\n> > the slot is created on the standby node.\r\n> \r\n> Oh right, the current test is not done on the right node, thanks!\r\n> \r\n> > The test currently passes because it\r\n> > only checks the return value of psql. 
It might be more appropriate to check the\r\n> > error message.\r\n> \r\n> Agree, and it's consistent with what is being done in 006_logical_decoding.pl.\r\n> \r\n> > Please see the attached patch.\r\n> >\r\n> \r\n> +\r\n> +($result, $stdout, $stderr) = $node_standby->psql(\r\n> 'otherdb',\r\n> \"SELECT lsn FROM pg_logical_slot_peek_changes('behaves_ok_activeslot',\r\n> NULL, NULL) ORDER BY lsn DESC LIMIT 1;\"\r\n> - ),\r\n> - 3,\r\n> - 'replaying logical slot from another database fails');\r\n> + );\r\n> +ok( $stderr =~\r\n> + m/replication slot \"behaves_ok_activeslot\" was not created in this\r\n> database/,\r\n> + \"replaying logical slot from another database fails\");\r\n> \r\n> \r\n> That does look good to me.\r\n> \r\n> Nit: I wonder if while at it (as it was already there) we could not remove the \"\r\n> ORDER BY lsn DESC LIMIT 1\" part of it.\r\n> It does not change anything regarding the test but looks more appropriate to\r\n> me.\r\n> \r\n\r\nThanks for your reply. I agree with you and I removed it in the attached patch.\r\n\r\nRegards,\r\nShi Yu", "msg_date": "Thu, 27 Apr 2023 09:41:22 +0000", "msg_from": "\"Yu Shi (Fujitsu)\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix a test case in 035_standby_logical_decoding.pl" }, { "msg_contents": "On Thu, Apr 27, 2023 at 2:16 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> On 4/27/23 10:11 AM, Yu Shi (Fujitsu) wrote:\n> > Hi hackers,\n> >\n> > In 035_standby_logical_decoding.pl, I think that the check in the following test\n> > case should be performed on the standby node, instead of the primary node, as\n> > the slot is created on the standby node.\n>\n> Oh right, the current test is not done on the right node, thanks!\n>\n> > The test currently passes because it\n> > only checks the return value of psql. 
It might be more appropriate to check the\n> > error message.\n>\n> Agree, and it's consistent with what is being done in 006_logical_decoding.pl.\n>\n> > Please see the attached patch.\n> >\n>\n> +\n> +($result, $stdout, $stderr) = $node_standby->psql(\n> 'otherdb',\n> \"SELECT lsn FROM pg_logical_slot_peek_changes('behaves_ok_activeslot', NULL, NULL) ORDER BY lsn DESC LIMIT 1;\"\n> - ),\n> - 3,\n> - 'replaying logical slot from another database fails');\n> + );\n> +ok( $stderr =~\n> + m/replication slot \"behaves_ok_activeslot\" was not created in this database/,\n> + \"replaying logical slot from another database fails\");\n>\n>\n> That does look good to me.\n>\n\nI agree that that the check should be done on standby but how does it\nmake a difference to check the error message or return value? Won't it\nbe the same for both primary and standby?\n\n> Nit: I wonder if while at it (as it was already there) we could not remove the \" ORDER BY lsn DESC LIMIT 1\" part of it.\n> It does not change anything regarding the test but looks more appropriate to me.\n>\n\nIt may not make a difference as this is anyway an error case but it\nlooks more logical to LIMIT by 1 as you are fetching a single LSN\nvalue and it will be consistent with other tests in this file and the\ntest case in the file 006_logical_decoding.pl.\n\nBTW, in the same test, I see it uses wait_for_catchup() in one place\nand wait_for_replay_catchup() in another place after Insert. 
Isn't it\nbetter to use wait_for_replay_catchup in both places?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 27 Apr 2023 15:23:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix a test case in 035_standby_logical_decoding.pl" }, { "msg_contents": "Hi,\n\nOn 4/27/23 11:53 AM, Amit Kapila wrote:\n> On Thu, Apr 27, 2023 at 2:16 PM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>>\n>> On 4/27/23 10:11 AM, Yu Shi (Fujitsu) wrote:\n>>> Hi hackers,\n>>>\n>>> In 035_standby_logical_decoding.pl, I think that the check in the following test\n>>> case should be performed on the standby node, instead of the primary node, as\n>>> the slot is created on the standby node.\n>>\n>> Oh right, the current test is not done on the right node, thanks!\n>>\n>>> The test currently passes because it\n>>> only checks the return value of psql. It might be more appropriate to check the\n>>> error message.\n>>\n>> Agree, and it's consistent with what is being done in 006_logical_decoding.pl.\n>>\n>>> Please see the attached patch.\n>>>\n>>\n>> +\n>> +($result, $stdout, $stderr) = $node_standby->psql(\n>> 'otherdb',\n>> \"SELECT lsn FROM pg_logical_slot_peek_changes('behaves_ok_activeslot', NULL, NULL) ORDER BY lsn DESC LIMIT 1;\"\n>> - ),\n>> - 3,\n>> - 'replaying logical slot from another database fails');\n>> + );\n>> +ok( $stderr =~\n>> + m/replication slot \"behaves_ok_activeslot\" was not created in this database/,\n>> + \"replaying logical slot from another database fails\");\n>>\n>>\n>> That does look good to me.\n>>\n> \n> I agree that that the check should be done on standby but how does it\n> make a difference to check the error message or return value? Won't it\n> be the same for both primary and standby?\n> \n\nYes that would be the same. 
I think the original idea from Shi-san (please correct me If I'm wrong)\nwas \"while at it\" let's also make this test on the right node even better.\n\n>> Nit: I wonder if while at it (as it was already there) we could not remove the \" ORDER BY lsn DESC LIMIT 1\" part of it.\n>> It does not change anything regarding the test but looks more appropriate to me.\n>>\n> \n> It may not make a difference as this is anyway an error case but it\n> looks more logical to LIMIT by 1 as you are fetching a single LSN\n> value and it will be consistent with other tests in this file and the\n> test case in the file 006_logical_decoding.pl.\n> \n\nyeah I think it all depends how one see this test (sort of test a failure or try to get\na result and see if it fails). That was a Nit so I don't have a strong opinion on this one.\n\n> BTW, in the same test, I see it uses wait_for_catchup() in one place\n> and wait_for_replay_catchup() in another place after Insert. Isn't it\n> better to use wait_for_replay_catchup in both places?\n> \n\nThey are both using the 'replay' mode so both are perfectly correct but for consistency (and as\nthey don't use the same \"target_lsn\" ('write' vs 'flush')) I think that using wait_for_replay_catchup()\nin both places (which is using the 'flush' target lsn) is a good idea.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 27 Apr 2023 12:37:40 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix a test case in 035_standby_logical_decoding.pl" }, { "msg_contents": "On Thu, Apr 27, 2023 at 4:07 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> On 4/27/23 11:53 AM, Amit Kapila wrote:\n> > On Thu, Apr 27, 2023 at 2:16 PM Drouvot, Bertrand\n> > <bertranddrouvot.pg@gmail.com> wrote:\n> >>\n> >> On 4/27/23 10:11 AM, Yu Shi (Fujitsu) wrote:\n> >>> Hi hackers,\n> 
>>>\n> >>> In 035_standby_logical_decoding.pl, I think that the check in the following test\n> >>> case should be performed on the standby node, instead of the primary node, as\n> >>> the slot is created on the standby node.\n> >>\n> >> Oh right, the current test is not done on the right node, thanks!\n> >>\n> >>> The test currently passes because it\n> >>> only checks the return value of psql. It might be more appropriate to check the\n> >>> error message.\n> >>\n> >> Agree, and it's consistent with what is being done in 006_logical_decoding.pl.\n> >>\n> >>> Please see the attached patch.\n> >>>\n> >>\n> >> +\n> >> +($result, $stdout, $stderr) = $node_standby->psql(\n> >> 'otherdb',\n> >> \"SELECT lsn FROM pg_logical_slot_peek_changes('behaves_ok_activeslot', NULL, NULL) ORDER BY lsn DESC LIMIT 1;\"\n> >> - ),\n> >> - 3,\n> >> - 'replaying logical slot from another database fails');\n> >> + );\n> >> +ok( $stderr =~\n> >> + m/replication slot \"behaves_ok_activeslot\" was not created in this database/,\n> >> + \"replaying logical slot from another database fails\");\n> >>\n> >>\n> >> That does look good to me.\n> >>\n> >\n> > I agree that that the check should be done on standby but how does it\n> > make a difference to check the error message or return value? Won't it\n> > be the same for both primary and standby?\n> >\n>\n> Yes that would be the same. I think the original idea from Shi-san (please correct me If I'm wrong)\n> was \"while at it\" let's also make this test on the right node even better.\n>\n\nFair enough. 
Let's do it that way then.\n\n> >> Nit: I wonder if while at it (as it was already there) we could not remove the \" ORDER BY lsn DESC LIMIT 1\" part of it.\n> >> It does not change anything regarding the test but looks more appropriate to me.\n> >>\n> >\n> > It may not make a difference as this is anyway an error case but it\n> > looks more logical to LIMIT by 1 as you are fetching a single LSN\n> > value and it will be consistent with other tests in this file and the\n> > test case in the file 006_logical_decoding.pl.\n> >\n>\n> yeah I think it all depends how one see this test (sort of test a failure or try to get\n> a result and see if it fails). That was a Nit so I don't have a strong opinion on this one.\n>\n\nI feel let's be consistent here and keep it as it is.\n\n> > BTW, in the same test, I see it uses wait_for_catchup() in one place\n> > and wait_for_replay_catchup() in another place after Insert. Isn't it\n> > better to use wait_for_replay_catchup in both places?\n> >\n>\n> They are both using the 'replay' mode so both are perfectly correct but for consistency (and as\n> they don't use the same \"target_lsn\" ('write' vs 'flush')) I think that using wait_for_replay_catchup()\n> in both places (which is using the 'flush' target lsn) is a good idea.\n>\n\nYeah, let's use wait_for_replay_catchup() at both places.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 27 Apr 2023 19:23:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix a test case in 035_standby_logical_decoding.pl" }, { "msg_contents": "On Thu, Apr 27, 2023 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Apr 27, 2023 at 4:07 PM Drouvot, Bertrand\r\n> <bertranddrouvot.pg@gmail.com> wrote:\r\n> >\r\n> > On 4/27/23 11:53 AM, Amit Kapila wrote:\r\n> > > On Thu, Apr 27, 2023 at 2:16 PM Drouvot, Bertrand\r\n> > > <bertranddrouvot.pg@gmail.com> wrote:\r\n> > >>\r\n> > >> On 4/27/23 10:11 AM, Yu Shi
(Fujitsu) wrote:\r\n> > >>> Hi hackers,\r\n> > >>>\r\n> > >>> In 035_standby_logical_decoding.pl, I think that the check in the following\r\n> test\r\n> > >>> case should be performed on the standby node, instead of the primary\r\n> node, as\r\n> > >>> the slot is created on the standby node.\r\n> > >>\r\n> > >> Oh right, the current test is not done on the right node, thanks!\r\n> > >>\r\n> > >>> The test currently passes because it\r\n> > >>> only checks the return value of psql. It might be more appropriate to check\r\n> the\r\n> > >>> error message.\r\n> > >>\r\n> > >> Agree, and it's consistent with what is being done in\r\n> 006_logical_decoding.pl.\r\n> > >>\r\n> > >>> Please see the attached patch.\r\n> > >>>\r\n> > >>\r\n> > >> +\r\n> > >> +($result, $stdout, $stderr) = $node_standby->psql(\r\n> > >> 'otherdb',\r\n> > >> \"SELECT lsn FROM\r\n> pg_logical_slot_peek_changes('behaves_ok_activeslot', NULL, NULL) ORDER BY\r\n> lsn DESC LIMIT 1;\"\r\n> > >> - ),\r\n> > >> - 3,\r\n> > >> - 'replaying logical slot from another database fails');\r\n> > >> + );\r\n> > >> +ok( $stderr =~\r\n> > >> + m/replication slot \"behaves_ok_activeslot\" was not created in this\r\n> database/,\r\n> > >> + \"replaying logical slot from another database fails\");\r\n> > >>\r\n> > >>\r\n> > >> That does look good to me.\r\n> > >>\r\n> > >\r\n> > > I agree that that the check should be done on standby but how does it\r\n> > > make a difference to check the error message or return value? Won't it\r\n> > > be the same for both primary and standby?\r\n> > >\r\n> >\r\n> > Yes that would be the same. I think the original idea from Shi-san (please\r\n> correct me If I'm wrong)\r\n> > was \"while at it\" let's also make this test on the right node even better.\r\n> >\r\n> \r\n> Fair enough. 
Let''s do it that way then.\r\n> \r\n> > >> Nit: I wonder if while at it (as it was already there) we could not remove the\r\n> \" ORDER BY lsn DESC LIMIT 1\" part of it.\r\n> > >> It does not change anything regarding the test but looks more appropriate\r\n> to me.\r\n> > >>\r\n> > >\r\n> > > It may not make a difference as this is anyway an error case but it\r\n> > > looks more logical to LIMIT by 1 as you are fetching a single LSN\r\n> > > value and it will be consistent with other tests in this file and the\r\n> > > test case in the file 006_logical_decoding.pl.\r\n> > >\r\n> >\r\n> > yeah I think it all depends how one see this test (sort of test a failure or try to\r\n> get\r\n> > a result and see if it fails). That was a Nit so I don't have a strong opinion on\r\n> this one.\r\n> >\r\n> \r\n> I feel let's be consistent here and keep it as it is.\r\n> \r\n> > > BTW, in the same test, I see it uses wait_for_catchup() in one place\r\n> > > and wait_for_replay_catchup() in another place after Insert. Isn't it\r\n> > > better to use wait_for_replay_catchup in both places?\r\n> > >\r\n> >\r\n> > They are both using the 'replay' mode so both are perfectly correct but for\r\n> consistency (and as\r\n> > they don't use the same \"target_lsn\" ('write' vs 'flush')) I think that using\r\n> wait_for_replay_catchup()\r\n> > in both places (which is using the 'flush' target lsn) is a good idea.\r\n> >\r\n> \r\n> Yeah, let's use wait_for_replay_catchup() at both places.\r\n> \r\n\r\nThanks for your reply. Your suggestions make sense to me.\r\nPlease see the attached patch.\r\n\r\nRegards,\r\nShi Yu", "msg_date": "Fri, 28 Apr 2023 02:05:39 +0000", "msg_from": "\"Yu Shi (Fujitsu)\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix a test case in 035_standby_logical_decoding.pl" } ]
[ { "msg_contents": "Hackers,\n\nI have been updating pgAudit for PG16 and ran into the following issue \nin the regression tests:\n\n\\connect - user1\nWARNING: permission denied to set parameter \"pgaudit.log_level\"\n\nThis happens after switching back and forth a few times between the \ncurrent user when the regression script was executed and user1 which is \ncreated in the script. Specifically, it happens at [1].\n\nI have tracked the issue down to context == PGC_USERSET for case \nPGC_SUSET in set_config_option_ext(). This GUC is PGC_SUSET so it seems \nlike once it is set reloading it should not be an issue.\n\nIf the GUC is set again immediately before the \\connect then there is no \nerror, so it looks like the correct context is being lost somewhere \nalong the way.\n\nBefore I get into serious debugging on this issue, I thought it would be \ngood to bring it up in case the answer is obvious to someone else.\n\nThanks,\n-David\n\n[1] \nhttps://github.com/pgaudit/pgaudit/compare/dev-pg16-ci#diff-db4bb73982787fa1d07d4c9d80bc54028b8d2a52b80806f1352a42c42f00eaaaR604\n\n\n", "msg_date": "Thu, 27 Apr 2023 18:22:09 +0300", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Possible regression setting GUCs on \\connect" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> I have been updating pgAudit for PG16 and ran into the following issue \n> in the regression tests:\n> \\connect - user1\n> WARNING: permission denied to set parameter \"pgaudit.log_level\"\n> This happens after switching back and forth a few times between the \n> current user when the regression script was executed and user1 which is \n> created in the script. 
Specifically, it happens at [1].\n\nIf this is new in v16, I'm inclined to blame 096dd80f3, but that's\njust a guess without a test case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Apr 2023 12:13:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On 4/27/23 19:13, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n>> I have been updating pgAudit for PG16 and ran into the following issue\n>> in the regression tests:\n>> \\connect - user1\n>> WARNING: permission denied to set parameter \"pgaudit.log_level\"\n>> This happens after switching back and forth a few times between the\n>> current user when the regression script was executed and user1 which is\n>> created in the script. Specifically, it happens at [1].\n> \n> If this is new in v16, I'm inclined to blame 096dd80f3, but that's\n> just a guess without a test case.\n\nSeems plausible. This can be reproduced by cloning [1] into contrib and \nrunning `make check`. I can work out another test case but it may not \nend up being simpler.\n\nThoughts on this Alexander?\n\n[1] https://github.com/pgaudit/pgaudit/tree/dev-pg16-ci\n\n\n", "msg_date": "Thu, 27 Apr 2023 19:35:25 +0300", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> Seems plausible. This can be reproduced by cloning [1] into contrib and \n> running `make check`. 
I can work out another test case but it may not \n> end up being simpler.\n> [1] https://github.com/pgaudit/pgaudit/tree/dev-pg16-ci\n\nI tried to replicate this per that recipe, but it works for me:\n\n$ git clone https://github.com/pgaudit/pgaudit.git pgaudit\n$ cd pgaudit\n$ git checkout dev-pg16-ci\n$ make -s check\n# +++ regress check in contrib/pgaudit +++\n# using temp instance on port 61696 with PID 1191703\nok 1 - pgaudit 310 ms\n1..1\n# All 1 tests passed.\n\nThis is at commit 6f879bddbdcfbf9995ecee1db9a587e06027bd13 on\nyour dev-pg16-ci branch and df38157d94662a64e2f83aa8a0110fd1ee7c4776\non PG master. Note that I had to add\n\n$ diff -pud Makefile~ Makefile \n--- Makefile~ 2023-04-27 14:02:19.041714415 -0400\n+++ Makefile 2023-04-27 14:07:10.056909016 -0400\n@@ -20,3 +20,5 @@ top_builddir = ../..\n include $(top_builddir)/src/Makefile.global\n include $(top_srcdir)/contrib/contrib-global.mk\n endif\n+\n+EXTRA_INSTALL += contrib/pg_stat_statements\n\nelse I got failures about pg_stat_statements not being installed.\nBut I don't see the failure you're complaining of.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Apr 2023 14:16:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On 4/27/23 21:16, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n>> Seems plausible. This can be reproduced by cloning [1] into contrib and\n>> running `make check`. 
I can work out another test case but it may not\n>> end up being simpler.\n>> [1] https://github.com/pgaudit/pgaudit/tree/dev-pg16-ci\n> \n> I tried to replicate this per that recipe, but it works for me:\n> \n> $ git clone https://github.com/pgaudit/pgaudit.git pgaudit\n> $ cd pgaudit\n> $ git checkout dev-pg16-ci\n> $ make -s check\n> # +++ regress check in contrib/pgaudit +++\n> # using temp instance on port 61696 with PID 1191703\n> ok 1 - pgaudit 310 ms\n> 1..1\n> # All 1 tests passed.\n\nI included the errors in the expect log so I could link to them. So test \nsuccess means the error is happening.\n\n> Note that I had to add\n> \n> $ diff -pud Makefile~ Makefile\n> --- Makefile~ 2023-04-27 14:02:19.041714415 -0400\n> +++ Makefile 2023-04-27 14:07:10.056909016 -0400\n> @@ -20,3 +20,5 @@ top_builddir = ../..\n> include $(top_builddir)/src/Makefile.global\n> include $(top_srcdir)/contrib/contrib-global.mk\n> endif\n> +\n> +EXTRA_INSTALL += contrib/pg_stat_statements\n\nYeah, I rarely run tests in-tree, but I'll add this if it does not break \nour regular CI.\n\n-David\n\n\n", "msg_date": "Thu, 27 Apr 2023 21:28:26 +0300", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "I suspect the problem is that GUCArrayDelete() is using the wrong Datum:\n\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex 9dd624b3ae..ee9f87e7f2 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -6496,7 +6496,7 @@ GUCArrayDelete(ArrayType *array, ArrayType **usersetArray, const char *name)\n {\n newarray = construct_array_builtin(&d, 1, TEXTOID);\n if (usersetArray)\n- newUsersetArray = construct_array_builtin(&d, 1, BOOLOID);\n+ newUsersetArray = construct_array_builtin(&userSetDatum, 1, BOOLOID);\n }\n \n index++;\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 27 Apr 2023 
11:43:08 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On 4/27/23 21:43, Nathan Bossart wrote:\n> I suspect the problem is that GUCArrayDelete() is using the wrong Datum:\n> \n> diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\n> index 9dd624b3ae..ee9f87e7f2 100644\n> --- a/src/backend/utils/misc/guc.c\n> +++ b/src/backend/utils/misc/guc.c\n> @@ -6496,7 +6496,7 @@ GUCArrayDelete(ArrayType *array, ArrayType **usersetArray, const char *name)\n> {\n> newarray = construct_array_builtin(&d, 1, TEXTOID);\n> if (usersetArray)\n> - newUsersetArray = construct_array_builtin(&d, 1, BOOLOID);\n> + newUsersetArray = construct_array_builtin(&userSetDatum, 1, BOOLOID);\n> }\n> \n> index++;\n\nThat seems to work. The errors are now gone.\n\nThanks!\n-David\n\n\n", "msg_date": "Thu, 27 Apr 2023 21:47:33 +0300", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On Thu, Apr 27, 2023 at 09:47:33PM +0300, David Steele wrote:\n> That seems to work. The errors are now gone.\n\nGreat. Barring objections, I'll plan on committing this shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 27 Apr 2023 11:51:28 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "Hi!\n\nOn Thu, Apr 27, 2023 at 9:51 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Thu, Apr 27, 2023 at 09:47:33PM +0300, David Steele wrote:\n> > That seems to work. The errors are now gone.\n>\n> Great. Barring objections, I'll plan on committing this shortly.\n\nThanks to everybody for catching and investigating this.\nNathan, I'd like to push it myself. 
I'm also going to check the code\nfor similar errors.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 27 Apr 2023 21:53:23 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On Thu, Apr 27, 2023 at 09:53:23PM +0300, Alexander Korotkov wrote:\n> Thanks to everybody for catching and investigating this.\n> Nathan, I'd like to push it myself. I'm also going to check the code\n> for similar errors.\n\nSounds good!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 27 Apr 2023 11:55:29 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": ".On Thu, Apr 27, 2023 at 9:55 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> On Thu, Apr 27, 2023 at 09:53:23PM +0300, Alexander Korotkov wrote:\n> > Thanks to everybody for catching and investigating this.\n> > Nathan, I'd like to push it myself. I'm also going to check the code\n> > for similar errors.\n>\n> Sounds good!\n\nI didn't find similar bugs in 096dd80f3c. Pushed!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 27 Apr 2023 22:20:13 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> On 4/27/23 21:16, Tom Lane wrote:\n>> I tried to replicate this per that recipe, but it works for me:\n\n> I included the errors in the expect log so I could link to them. 
So test \n> success means the error is happening.\n\nAh, got it.\n\nI added some debug output to the test, and what I see is that\nat the \"\\connect - user1\" commands that work ok, what we have\nin pg_db_role_setting is along the lines of\n\n\n+select setdatabase, setrole::regrole, setconfig, setuser from pg_db_role_setting;\n+ setdatabase | setrole | setconfig | setuser \n+-------------+----------+-----------------------------------------------------------------------------------------------------------------------------------------------+---------------\n+ 0 | user1 | {\"pgaudit.log=read, WRITE\",pgaudit.log_level=notice,pgaudit.log_client=on,pgaudit.role=auditor,pgaudit.log_relation=on} | {f,f,f,f,f}\n...\n\nwhile where it's not working:\n\n+select setdatabase, setrole::regrole, setconfig, setuser from pg_db_role_setting;\n+ setdatabase | setrole | setconfig | setuser \n+-------------+----------+-----------------------------------------------------------------------------------------------------------------------------------------------+---------------\n+ 0 | user1 | {pgaudit.log_level=notice,pgaudit.log_client=on,pgaudit.role=auditor,pgaudit.log_statement=off} | {t,f,f,f}\n...\n\nSo it is failing because setuser = 't' for that setting; which makes the\nfailure itself unsurprising, but it seems like the flag should not\nhave been set that way. 
Digging further, it looks like the flag array\nis not correctly updated during ALTER USER RESET:\n\nselect setdatabase, setrole::regrole, setconfig, setuser from pg_db_role_setting;\nNOTICE: AUDIT: SESSION,1,1,READ,SELECT,TABLE,pg_catalog.pg_db_role_setting,\"select setdatabase, setrole::regrole, setconfig, setuser from pg_db_role_setting;\",<not logged>\n setdatabase | setrole | setconfig | setuser \n-------------+----------+-----------------------------------------------------------------------------------------------------------------------------------------------+---------------\n 0 | user1 | {\"pgaudit.log=read, WRITE\",pgaudit.log_level=notice,pgaudit.log_client=on,pgaudit.role=auditor,pgaudit.log_relation=on} | {f,f,f,f,f}\n\n... that's fine ...\n\nALTER ROLE user1 RESET pgaudit.log_relation;\nselect setdatabase, setrole::regrole, setconfig, setuser from pg_db_role_setting;\n setdatabase | setrole | setconfig | setuser \n-------------+----------+-----------------------------------------------------------------------------------------------------------------------------------------------+---------------\n 0 | user1 | {\"pgaudit.log=read, WRITE\",pgaudit.log_level=notice,pgaudit.log_client=on,pgaudit.role=auditor} | {t,f,f,f}\n\n... wrong ...\n\nALTER ROLE user1 RESET pgaudit.log;\nselect setdatabase, setrole::regrole, setconfig, setuser from pg_db_role_setting;\n setdatabase | setrole | setconfig | setuser \n-------------+----------+-----------------------------------------------------------------------------------------------------------------------------------------------+---------------\n 0 | user1 | {pgaudit.log_level=notice,pgaudit.log_client=on,pgaudit.role=auditor} | {t,f,f}\n\n... and wronger.\n\n\nI had not paid any attention to 096dd80f3 when it went in, but now\nthat I have looked at it I'm quite distressed, independently of this\nprobably-minor bug. 
It seems to me that this feature is not well\ndesigned and completely ignores the precedents set by commits\na0ffa885e and 13d838815. The right way to do this was not to add some\npoorly-explained option to ALTER ROLE, but to record the role OID that\nissued the ALTER ROLE, and then to check when loading the ALTER ROLE\nsetting whether that role (still) has the right to change the\nspecified setting. As implemented, this can't possibly track changes\nin GRANT/REVOKE SET privileges correctly, and I wonder if it's not\nintroducing outright security holes like the one fixed by 13d838815.\n\nI think we ought to revert 096dd80f3 and try again in v17.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Apr 2023 15:22:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> I suspect the problem is that GUCArrayDelete() is using the wrong Datum:\n\n> - newUsersetArray = construct_array_builtin(&d, 1, BOOLOID);\n> + newUsersetArray = construct_array_builtin(&userSetDatum, 1, BOOLOID);\n\nAh, should have checked my mail earlier.\n\nHowever, my concern about whether we even want this feature\nstill stands.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Apr 2023 15:34:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On Thu, Apr 27, 2023 at 03:22:04PM -0400, Tom Lane wrote:\n> The right way to do this was not to add some\n> poorly-explained option to ALTER ROLE, but to record the role OID that\n> issued the ALTER ROLE, and then to check when loading the ALTER ROLE\n> setting whether that role (still) has the right to change the\n> specified setting. 
As implemented, this can't possibly track changes\n> in GRANT/REVOKE SET privileges correctly, and I wonder if it's not\n> introducing outright security holes like the one fixed by 13d838815.\n\nI generally agree. At least, I think it would be nice to avoid adding a\nnew option if possible. It's not clear to me why we'd need to also check\nprivileges at login time as opposed to only checking them at ALTER ROLE SET\ntime. ISTM that the former approach would introduce some interesting\nproblems around dropping roles or changing roles' privileges.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 27 Apr 2023 15:14:43 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Apr 27, 2023 at 03:22:04PM -0400, Tom Lane wrote:\n>> The right way to do this was not to add some\n>> poorly-explained option to ALTER ROLE, but to record the role OID that\n>> issued the ALTER ROLE, and then to check when loading the ALTER ROLE\n>> setting whether that role (still) has the right to change the\n>> specified setting. As implemented, this can't possibly track changes\n>> in GRANT/REVOKE SET privileges correctly, and I wonder if it's not\n>> introducing outright security holes like the one fixed by 13d838815.\n\n> I generally agree. At least, I think it would be nice to avoid adding a\n> new option if possible. It's not clear to me why we'd need to also check\n> privileges at login time as opposed to only checking them at ALTER ROLE SET\n> time.\n\nPerhaps there's room to argue about that. But ISTM that if someone\ndoes ALTER ROLE SET on the strength of some privilege you granted\nthem, and then you regret that and revoke the privilege, then the\nALTER ROLE setting should not continue to work. So I would regard\nthe session-start-time check as the primary one. 
Checking when\nALTER ROLE is done is just a user-friendliness detail.\n\nAlso, in the case of an extension-defined GUC, we don't necessarily\nknow its privilege level at either ALTER ROLE time or session start,\nsince the extension might not yet be loaded at either point.\nI've forgotten exactly what restrictive hack we use to get around\nthat, but the *only* way to do that fully correctly is to record\nthe role that did the ALTER and then check its privileges at extension\nload time. I think that 096dd80f3 has made it harder not easier\nto get to the point of doing that correctly, because it's added\na feature that we'll have to figure out how to make interoperate\nwith a correct implementation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Apr 2023 18:38:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On Fri, Apr 28, 2023 at 1:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n> > On Thu, Apr 27, 2023 at 03:22:04PM -0400, Tom Lane wrote:\n> >> The right way to do this was not to add some\n> >> poorly-explained option to ALTER ROLE, but to record the role OID that\n> >> issued the ALTER ROLE, and then to check when loading the ALTER ROLE\n> >> setting whether that role (still) has the right to change the\n> >> specified setting. As implemented, this can't possibly track changes\n> >> in GRANT/REVOKE SET privileges correctly, and I wonder if it's not\n> >> introducing outright security holes like the one fixed by 13d838815.\n>\n> > I generally agree. At least, I think it would be nice to avoid adding a\n> > new option if possible. It's not clear to me why we'd need to also check\n> > privileges at login time as opposed to only checking them at ALTER ROLE SET\n> > time.\n>\n> Perhaps there's room to argue about that. 
But ISTM that if someone\n> does ALTER ROLE SET on the strength of some privilege you granted\n> them, and then you regret that and revoke the privilege, then the\n> ALTER ROLE setting should not continue to work. So I would regard\n> the session-start-time check as the primary one. Checking when\n> ALTER ROLE is done is just a user-friendliness detail.\n\nFrom my point of view that is much different from what we're doing\nwith other database objects. If some role gets revoked from\nprivilege, that doesn't affect the actions done with that privilege\nbefore. The law is not retroactive. If one has created some tables,\nthose tables still work if the creator gets revoked privilege or even\ngets deleted. Why should the setting behave differently?\n\nAdditionally, I think if we start recording role OID, then we need a\nfull set of management clauses for each individual option ownership.\nOtherwise, we would leave this new role OID without the necessary\nmanagement facilities. But with them, the whole stuff will look like\nawful overengineering.\n\n> Also, in the case of an extension-defined GUC, we don't necessarily\n> know its privilege level at either ALTER ROLE time or session start,\n> since the extension might not yet be loaded at either point.\n\nYes, that's it.\n\n> I've forgotten exactly what restrictive hack we use to get around\n> that,\n\nAs I understand it, the restrictive hack is to assume that the role is at\nleast SUSET.\n\n> but the *only* way to do that fully correctly is to record\n> the role that did the ALTER and then check its privileges at extension\n> load time.\n\nThis depends on the understanding of correctness. Recording role OID\nwould mean altering that role's privileges or deleting the role would\naffect the settings. 
Even if that is correct, this is very very far\nfrom the behavior we had for decades.\n\n> I think that 096dd80f3 has made it harder not easier\n> to get to the point of doing that correctly, because it's added\n> a feature that we'll have to figure out how to make interoperate\n> with a correct implementation.\n\nWith 096dd80f3, we still may revoke the setting of USERSET options\nfrom the public. Even if the option is not currently loaded at ALTER\ntime, we still may find an explicit revoke recorded in the system\ncatalog. That behavior will be correct if we understand the default\noptions as separate material things (as it is today), but not part of\nthe setter role.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 28 Apr 2023 02:30:57 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On Fri, Apr 28, 2023 at 2:30 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> Additionally, I think if we start recording role OID, then we need a\n> full set of management clauses for each individual option ownership.\n> Otherwise, we would leave this new role OID without necessarily\n> management facilities. But with them, the whole stuff will look like\n> awful overengineering.\n\nI can also predict a lot of ambiguous cases. For instance, an\nexisting setting can be overridden with a different role OID. 
If it\nhas been overridden can the overwriter turn it back?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 28 Apr 2023 03:04:01 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On 4/27/23 8:04 PM, Alexander Korotkov wrote:\r\n> On Fri, Apr 28, 2023 at 2:30 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\r\n>> Additionally, I think if we start recording role OID, then we need a\r\n>> full set of management clauses for each individual option ownership.\r\n>> Otherwise, we would leave this new role OID without necessarily\r\n>> management facilities. But with them, the whole stuff will look like\r\n>> awful overengineering.\r\n> \r\n> I can also predict a lot of ambiguous cases. For instance, we\r\n> existing setting can be overridden with a different role OID. If it\r\n> has been overridden can the overwriter turn it back?\r\n\r\n[RMT hat]\r\n\r\nWhile the initial bug has been fixed, given there is discussion on \r\nreverting 096dd80f3, I've added this as an open item.\r\n\r\nI want to study this a bit more before providing my own opinion on revert.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Fri, 28 Apr 2023 09:42:03 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "Hi!\n\nOn Fri, 28 Apr 2023 at 17:42, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> On 4/27/23 8:04 PM, Alexander Korotkov wrote:\n> > On Fri, Apr 28, 2023 at 2:30 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >> Additionally, I think if we start recording role OID, then we need a\n> >> full set of management clauses for each individual option ownership.\n> >> Otherwise, we would leave this new role OID without necessarily\n> >> management facilities. 
But with them, the whole stuff will look like\n> >> awful overengineering.\n> >\n> > I can also predict a lot of ambiguous cases. For instance, we\n> > existing setting can be overridden with a different role OID. If it\n> > has been overridden can the overwriter turn it back?\n>\n> [RMT hat]\n>\n> While the initial bug has been fixed, given there is discussion on\n> reverting 096dd80f3, I've added this as an open item.\n>\n> I want to study this a bit more before providing my own opinion on revert.\n\nI see that 096dd80f3 is a lot simpler in implementation than\na0ffa885e, so I agree Alexander's opinion that it's good not to\noverengineer what could be done simple. If we patched corner cases of\na0ffa885e before (by 13d838815), why not patch minor things in\n096dd80f3 instead of reverting?\n\nAs I see in [1] there is some demand from users regarding this option.\n\nRegards,\nPavel Borisov,\nSupabase.\n\n\n", "msg_date": "Fri, 28 Apr 2023 20:29:21 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On 4/28/23 12:29 PM, Pavel Borisov wrote:\r\n> Hi!\r\n> \r\n> On Fri, 28 Apr 2023 at 17:42, Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>>\r\n>> On 4/27/23 8:04 PM, Alexander Korotkov wrote:\r\n>>> On Fri, Apr 28, 2023 at 2:30 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\r\n>>>> Additionally, I think if we start recording role OID, then we need a\r\n>>>> full set of management clauses for each individual option ownership.\r\n>>>> Otherwise, we would leave this new role OID without necessarily\r\n>>>> management facilities. But with them, the whole stuff will look like\r\n>>>> awful overengineering.\r\n>>>\r\n>>> I can also predict a lot of ambiguous cases. For instance, we\r\n>>> existing setting can be overridden with a different role OID. 
If it\r\n>>> has been overridden can the overwriter turn it back?\r\n>>\r\n>> [RMT hat]\r\n>>\r\n>> While the initial bug has been fixed, given there is discussion on\r\n>> reverting 096dd80f3, I've added this as an open item.\r\n>>\r\n>> I want to study this a bit more before providing my own opinion on revert.\r\n> \r\n> I see that 096dd80f3 is a lot simpler in implementation than\r\n> a0ffa885e, so I agree Alexander's opinion that it's good not to\r\n> overengineer what could be done simple. If we patched corner cases of\r\n> a0ffa885e before (by 13d838815), why not patch minor things in\r\n> 096dd80f3 instead of reverting?\r\n> \r\n> As I see in [1] there is some demand from users regarding this option.\r\n\r\n[RMT hat]\r\n\r\nI read through the original thread[1] to understand the use case and \r\nalso the concerns, but I need to study [1] and this thread a bit more \r\nbefore I can form an opinion.\r\n\r\nThe argument that there is \"demand from users\" is certainly one I relate \r\nto, but there have been high-demand features in the past (e.g. MERGE, \r\nSQL/JSON) that have been reverted and released later due to various \r\nconcerns around implementation, etc. The main job of the RMT is to \r\nensure a major release is on time and is as stable as possible, which \r\nwill be a major factor into any decisions if there is lack of community \r\nconsensus on an open item.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://postgr.es/m/CAGRrpzawQSbuEedicOLRjQRCmSh6nC3HeMNvnQdBVmPMg7AvQw%40mail.gmail.com", "msg_date": "Sun, 30 Apr 2023 12:25:20 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On Sun, Apr 30, 2023 at 12:25:20PM -0400, Jonathan S. 
Katz wrote:\n> [RMT hat]\n> \n> I read through the original thread[1] to understand the use case and also\n> the concerns, but I need to study [1] and this thread a bit more before I\n> can form an opinion.\n> \n> The argument that there is \"demand from users\" is certainly one I relate to,\n> but there have been high-demand features in the past (e.g. MERGE, SQL/JSON)\n> that have been reverted and released later due to various concerns around\n> implementation, etc. The main job of the RMT is to ensure a major release is\n> on time and is as stable as possible, which will be a major factor into any\n> decisions if there is lack of community consensus on an open item.\n\n(note: Not RMT this year)\n\nThis thread had no replies for the last two weeks, and beta1 is\nplanned for next week. Alexander, what are your plans here?\n--\nMichael", "msg_date": "Wed, 17 May 2023 15:17:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On Fri, Apr 28, 2023 at 5:01 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Fri, Apr 28, 2023 at 1:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Nathan Bossart <nathandbossart@gmail.com> writes:\n> > > On Thu, Apr 27, 2023 at 03:22:04PM -0400, Tom Lane wrote:\n> > >> The right way to do this was not to add some\n> > >> poorly-explained option to ALTER ROLE, but to record the role OID that\n> > >> issued the ALTER ROLE, and then to check when loading the ALTER ROLE\n> > >> setting whether that role (still) has the right to change the\n> > >> specified setting. As implemented, this can't possibly track changes\n> > >> in GRANT/REVOKE SET privileges correctly, and I wonder if it's not\n> > >> introducing outright security holes like the one fixed by 13d838815.\n> >\n> > > I generally agree. At least, I think it would be nice to avoid adding a\n> > > new option if possible. 
It's not clear to me why we'd need to also check\n> > > privileges at login time as opposed to only checking them at ALTER ROLE SET\n> > > time.\n> >\n> > Perhaps there's room to argue about that. But ISTM that if someone\n> > does ALTER ROLE SET on the strength of some privilege you granted\n> > them, and then you regret that and revoke the privilege, then the\n> > ALTER ROLE setting should not continue to work. So I would regard\n> > the session-start-time check as the primary one. Checking when\n> > ALTER ROLE is done is just a user-friendliness detail.\n>\n> From my point of view that is much different from what we're doing\n> with other database objects. If some role gets revoked from\n> privilege, that doesn't affect the actions done with that privilege\n> before. The law is not retroactive. If one has created some tables,\n> those tables still work if the creator gets revoked privilege or even\n> gets deleted. Why should the setting behave differently?\n>\n\nI see there are mainly three concerns: (a) Avoid adding the new option\nUSER SET, (b) The behavior of this feature varies from the precedents\nset by a0ffa885e and 13d838815, (c) As per discussion, not following\n13d838815 could lead to a similar security hole in this feature.\n\nNow, I don't know whether Tom and/or Nathan share your viewpoint and\nfeel that nothing should be done. It would have been better if such a\ndiscussion had happened during development but I can understand that\nthe other senior people are sometimes too busy to pay attention to\nall the work going on. 
I see that when Alexander proposed this new\noption and behavior in the original thread [1], there were no\nobjections, so the commit followed the normal community rules but we\nhave seen various times that post-commit reviews also lead to changing\nor reverting the feature.\n\nI see that you seem to think it would be over-engineering to follow\nthe suggestions shared here but without the patch or some further\ndiscussion, it won't be easy to conclude that.\n\nTom/Nathan, do you have any further suggestions here?\n\n[1] - https://www.postgresql.org/message-id/CAPpHfdsLd6E--epnGqXENqLP6dLwuNZrPMcNYb3wJ87WR7UBOQ@mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 17 May 2023 12:17:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "Hi, Amit.\n\nOn Wed, May 17, 2023 at 9:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I see there are mainly three concerns (a) Avoid adding the new option\n> USER SET, (b) The behavior of this feature varies from the precedents\n> set by a0ffa885e and 13d838815, (c) As per discussion, not following\n> 13d838815 could lead to a similar security hole in this feature.\n>\n> Now, I don't know whether Tom and or Nathan share your viewpoint and\n> feel that nothing should be done. It would have been better if such a\n> discussion happens during development but I can understand that mostly\n> the other senior people are sometimes busy enough to pay attention to\n> all the work going on. 
I see that when Alexander proposed this new\n> option and behavior in the original thread [1], there were no\n> objections, so the commit followed the normal community rules but we\n> have seen various times that post-commit reviews also lead to changing\n> or reverting the feature.\n>\n> I see that you seem to think it would be over-engineering to follow\n> the suggestions shared here but without the patch or some further\n> discussion, it won't be easy to conclude that.\n>\n> Tom/Nathan, do you have any further suggestions here?\n\nI think the main question regarding the USER SET option is its\ncontradiction with Tom's plans to track the setter role OID per\nsetting. If we do track the role OID then USER SET becomes both\nunnecessary for users and an undesired complication for development.\nHowever, I've expressed my doubts about tracking the setter role OID\n[1], [2]. I think these plans look good in the big picture, but\nthe implementation will have so many caveats that it will\nstall for a long time (probably forever). If we accept this view, the\nUSER SET option might seem a good practical solution for real-world\nissues.\n\nI think if we elaborate more on tracking the setter role OID and come\nto at least a sketchy design, then it would be easier to come to an\nagreement on future directions.\n\nLinks.\n1. https://www.postgresql.org/message-id/CAPpHfdsy-jxhgR0bWk1Fv63c6txwMAkzxFMGMf29jqa9uU_CdQ%40mail.gmail.com\n2. https://www.postgresql.org/message-id/CAPpHfdu6roOVEUsV9TWNdQ%3DTZCrNEEwJM62EQiKULUyjpERhtg%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 17 May 2023 14:56:54 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Tom/Nathan, do you have any further suggestions here?\n\nMy recommendation is to revert this feature. 
I do not see any\nway that we won't regret it as a poor design.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 May 2023 08:08:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On Wed, May 17, 2023 at 08:08:36AM -0400, Tom Lane wrote:\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>> Tom/Nathan, do you have any further suggestions here?\n> \n> My recommendation is to revert this feature. I do not see any\n> way that we won't regret it as a poor design.\n\nI agree. The problem seems worth solving, but I think we ought to consider\na different approach. Apologies for not chiming in earlier on the original\nthread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 17 May 2023 09:47:04 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On 5/17/23 12:47 PM, Nathan Bossart wrote:\r\n> On Wed, May 17, 2023 at 08:08:36AM -0400, Tom Lane wrote:\r\n>> Amit Kapila <amit.kapila16@gmail.com> writes:\r\n>>> Tom/Nathan, do you have any further suggestions here?\r\n>>\r\n>> My recommendation is to revert this feature. I do not see any\r\n>> way that we won't regret it as a poor design.\r\n> \r\n> I agree. The problem seems worth solving, but I think we ought to consider\r\n> a different approach. 
Apologies for not chiming in earlier on the original\r\n> thread.\r\n\r\n[RMT hat, personal opinion]\r\n\r\nI do agree that the feature itself is useful, but given there is \r\ndisagreement over the feature design, particularly from people who have \r\nspent time working on features and analyzing the security ramifications \r\nin this area, the safest option is to revert and try again for v17.\r\n\r\nI suggest we revert before Beta 1.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 17 May 2023 12:57:45 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "Tom,\n\nOn Wed, May 17, 2023 at 3:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > Tom/Nathan, do you have any further suggestions here?\n>\n> My recommendation is to revert this feature. I do not see any\n> way that we won't regret it as a poor design.\n\nI have carefully noted your concerns regarding the USER SET patch that\nI've committed. It's clear that you have strong convictions about\nthis, particularly in relation to your plan of storing the setter role\nOID in pg_db_role_setting.\n\nI want to take a moment to acknowledge the significance of your\nperspective and I respect that you have a different view on this\nmatter. Although I have not yet had the opportunity to see the\nfeasibility of your approach, I am open to understanding it further.\n\nAnyway, I don't want to do anything counter-productive. 
So, I've\ntaken the decision to revert the USER SET patch for the time being.\n\nI'm looking forward to continuing working with you on this subject for v17.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 17 May 2023 20:30:41 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On 5/17/23 1:30 PM, Alexander Korotkov wrote:\r\n> Tom,\r\n> \r\n> On Wed, May 17, 2023 at 3:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>> Amit Kapila <amit.kapila16@gmail.com> writes:\r\n>>> Tom/Nathan, do you have any further suggestions here?\r\n>>\r\n>> My recommendation is to revert this feature. I do not see any\r\n>> way that we won't regret it as a poor design.\r\n> \r\n> I have carefully noted your concerns regarding the USER SET patch that\r\n> I've committed. It's clear that you have strong convictions about\r\n> this, particularly in relation to your plan of storing the setter role\r\n> OID in pg_db_role_setting.\r\n> \r\n> I want to take a moment to acknowledge the significance of your\r\n> perspective and I respect that you have a different view on this\r\n> matter. Although I have not yet had the opportunity to see the\r\n> feasibility of your approach, I am open to understanding it further.\r\n> \r\n> Anyway, I don't want to do anything counter-productive. So, I've\r\n> taken the decision to revert the USER SET patch for the time being.\r\n\r\nThanks Alexander. I know reverting a feature is not easy and appreciate \r\nyou taking the time to work through this discussion.\r\n\r\n> I'm looking forward to continuing working with you on this subject for v17.\r\n\r\n+1; I think everyone agrees there is a feature here that will be helpful \r\nto our users.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 17 May 2023 15:23:13 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "On Wed, May 17, 2023 at 1:31 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> I have carefully noted your concerns regarding the USER SET patch that\n> I've committed. It's clear that you have strong convictions about\n> this, particularly in relation to your plan of storing the setter role\n> OID in pg_db_role_setting.\n>\n> I want to take a moment to acknowledge the significance of your\n> perspective and I respect that you have a different view on this\n> matter. Although I have not yet had the opportunity to see the\n> feasibility of your approach, I am open to understanding it further.\n>\n> Anyway, I don't want to do anything counter-productive. So, I've\n> taken the decision to revert the USER SET patch for the time being.\n\nThis discussion made me go back and look at the commit in question. My\nopinion is that the feature as it was committed is quite hard to\nunderstand. The documentation for it said this: \"Specifies that\nvariable should be set on behalf of ordinary role.\" But what does that\neven mean? What's an \"ordinary role\"? What does \"on behalf of\" mean? I\nthink these are not terms we use elsewhere in the documentation, and I\nthink it wouldn't be easy for users to understand how to use the\nfeature properly. I'm not sure whether Tom's idea about what the\ndesign should be is good or bad, but I think that whatever we end up\nwith, we should try to explain more clearly and thoroughly what\nproblem the feature solves and how it does so.\n\nImagine a paragraph someplace that says something like \"You might want\nto do X. But if you try to do it, you'll find that it doesn't work\nbecause Y: [SQL example] We can work around this problem using the Z\nfeature. That lets us tell the system that it should Q, which fixes Y:\n[SQL example].\". 
It sounds like Tom might be proposing that we solve\nthis problem in some way that doesn't actually require any new SQL\nsyntax, and if we do that, then this might not be needed. But if we do\nadd syntax, then I think something like this would be really good to\nhave.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 May 2023 11:47:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> This discussion made me go back and look at the commit in question. My\n> opinion is that the feature as it was committed is quite hard to\n> understand. The documentation for it said this: \"Specifies that\n> variable should be set on behalf of ordinary role.\" But what does that\n> even mean? What's an \"ordinary role\"? What does \"on behalf of\" mean?\n\nYeah. And even more to the point: how would the feature interact with\nper-user grants of SET privilege? It seems like it would have to ignore\nor override that, which is not a conclusion I like at all.\n\nI think that commit a0ffa885e pretty much nailed down the user interface\nwe want, and what remains is to work out how granting SET privilege\ninteracts with the time-delayed nature of ALTER USER/DATABASE SET.\nBut the answer to that does not seem difficult to me: remember who\nissued the ALTER and see if they still have SET privilege at the time\nwe activate a particular entry.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 May 2023 14:33:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible regression setting GUCs on \\connect" } ]
[ { "msg_contents": "Hi,\n\nToday, I found myself wondering whether we could add a non-btree\nindex, and in particular a GIN index, to a system catalog if we so\ndesired. Currently, after initdb, we only create tables and btree\nindexes, but I'm not sure if there is any good reason why we couldn't\ndo something else. One problem with a GIN index specifically is that\nit doesn't implement amgettuple, so routines like systable_getnext()\nand index_getnext() couldn't be used, and we'd have to code up a\nbitmap scan. Currently, bitmap scans aren't used anywhere other than\nthe executor, but I don't know of any reason why it wouldn't be\npractical to use them elsewhere.\n\nI think I may pursue a different approach to the problem that led me\nto think about this, at least for the moment. But I'm still curious\nabout the general question: if somebody showed up with a well-written\npatch that added a GIN index to a system catalog, would that\npotentially be acceptable, or DOA?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 Apr 2023 14:03:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "can system catalogs have GIN indexes?" }, { "msg_contents": "On Thu, Apr 27, 2023 at 11:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think I may pursue a different approach to the problem that led me\n> to think about this, at least for the moment. But I'm still curious\n> about the general question: if somebody showed up with a well-written\n> patch that added a GIN index to a system catalog, would that\n> potentially be acceptable, or DOA?\n\nSurely it would depend on the use case. Is this just an intellectual\nexercise, or do you actually plan on doing something like this, in\nwhatever way? 
For example, does the posting list compression seem like\nit might offer a compelling trade-off for some system catalog indexes\nover and above B-Tree deduplication?\n\nI'm asking this (at least in part) because it affects the answer. Lots\nof stuff that GIN does that seems like it would be particularly tricky\nto integrate with a system catalog is non-essential. It could be (and\nsometimes is) selectively disabled. Whereas B-Tree indexes don't\nreally have any optional features (you can disable deduplication\nselectively, but I believe that approximately nobody ever found it\nuseful to do so).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 27 Apr 2023 11:17:17 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: can system catalogs have GIN indexes?" }, { "msg_contents": "On Thu, Apr 27, 2023 at 11:17 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm asking this (at least in part) because it affects the answer. Lots\n> of stuff that GIN does that seems like it would be particularly tricky\n> to integrate with a system catalog is non-essential. It could be (and\n> sometimes is) selectively disabled. Whereas B-Tree indexes don't\n> really have any optional features (you can disable deduplication\n> selectively, but I believe that approximately nobody ever found it\n> useful to do so).\n\nActually, you *can't* disable deduplication against a system catalog\nindex in practice, due to a limitation with storage parameters. That's\nwhy deduplication was disabled against system catalogs until Postgres\n15.\n\nThat limitation would be much harder to ignore with GIN indexes.\nB-Tree/nbtree deduplication is more or less optimized for being\nenabled by default. It has most of the upside of posting lists, with\nvery few downsides. 
For example, you can sometimes get a lot more\nspace savings with Posting list compression (very low cardinality\nindexes can be as much as 10x smaller), but that's not really\ncompatible with the design of nbtree deduplication.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 27 Apr 2023 11:21:39 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: can system catalogs have GIN indexes?" }, { "msg_contents": "On Thu, Apr 27, 2023 at 2:17 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Apr 27, 2023 at 11:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I think I may pursue a different approach to the problem that led me\n> > to think about this, at least for the moment. But I'm still curious\n> > about the general question: if somebody showed up with a well-written\n> > patch that added a GIN index to a system catalog, would that\n> > potentially be acceptable, or DOA?\n>\n> Surely it would depend on the use case. Is this just an intellectual\n> exercise, or do you actually plan on doing something like this, in\n> whatever way? For example, does the posting list compression seem like\n> it might offer a compelling trade-off for some system catalog indexes\n> over and above B-Tree deduplication?\n>\n> I'm asking this (at least in part) because it affects the answer. Lots\n> of stuff that GIN does that seems like it would be particularly tricky\n> to integrate with a system catalog is non-essential. It could be (and\n> sometimes is) selectively disabled. 
Whereas B-Tree indexes don't\n> really have any optional features (you can disable deduplication\n> selectively, but I believe that approximately nobody ever found it\n> useful to do so).\n\nMy interest was in being able to index operators that GIN can index\nand btree cannot.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 Apr 2023 14:30:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: can system catalogs have GIN indexes?" } ]
[ { "msg_contents": "I am not sure it is really a bug, but nevertheless\n\nIf you do\n\nmkdir [source]\ngit clone git://git.postgresql.org/git/postgresql.git [source]\nmkdir build; cd build\n../\\[source\\]/configure \nmake\n\nyou will get\n\nmake[1]: *** No rule to make target 'generated-headers'. Stop.\n\nIf there are no \"[]\" in the path to the source, everything is OK.\n\nIt would be OK for me, if it still does not work. But I would appreciate at \nleast proper error message there (or at configure step), this would save time, \nfor figuring out, why it suddenly stopped working.\n\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Fri, 28 Apr 2023 14:09:59 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Build problem with square brackets in build path" }, { "msg_contents": "Nikolay Shaplov <dhyan@nataraj.su> writes:\n> If you do\n\n> mkdir [source]\n> git clone git://git.postgresql.org/git/postgresql.git [source]\n> mkdir build; cd build\n> ../\\[source\\]/configure \n> make\n\n> you will get\n\n> make[1]: *** No rule to make target 'generated-headers'. Stop.\n\n> If there are no \"[]\" in the path to the source, everything is OK.\n\nIt's generally quite unwise to use shell meta-characters in\nfile or directory names. I give you one example:\n\n$ ls ../[source]\nCOPYRIGHT README.git contrib/\nGNUmakefile.in aclocal.m4 doc/\nHISTORY config/ meson.build\nMakefile configure* meson_options.txt\nREADME configure.ac src/\n$ ls ../[source]/*.ac\nls: ../[source]/*.ac: No such file or directory\n\nThis is expected behavior (I leave it as an exercise for the\nstudent to figure out why).\n\nWhile it might be possible to make the Postgres build scripts\nproof against funny characters in the build paths, the effort\nrequired would be far out of proportion to the value. 
Not least\nbecause manual operations in such a file tree would misbehave\noften enough to convince you to change, even if the scripts were\nall water-tight.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Apr 2023 14:06:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Build problem with square brackets in build path" } ]
[ { "msg_contents": "Hi,\n\n*Problem: Having multiple versions of Postgres installed in CentOS 7. I\nWant to set the 9.5 version as default. Not able to access Postgres 9.5\nthrough the terminal as well.*\n\n 1. For Command *psql --version* I'm getting 9.5 as the version.\n 2. For Command *sudo -u postgres psql *I'm getting 9.2 as the version.\n\nPlease look at the below screenshot.\n[image: Screenshot from 2023-04-28 21-14-29.png]\n\n*Background: *By default, the server has a 9.2 version I installed the\n*rh-postgresql95* version from the below article.\nUsed the below command to install *rh-postgresql95*\n\n> yum <https://www.server-world.info/en/command/html/yum.html> --enablerepo=centos-sclo-rh\n> -y install rh-postgresql95-postgresql-server\n>\nhttps://www.server-world.info/en/note?os=CentOS_7&p=postgresql95&f=1\n\nTried updating the PATH variable correctly with the latest version. But not\nworking.\n\n[image: Screenshot from 2023-04-28 21-28-44.png]\n\n\nPlease share the steps or any guidance on how to resolve the issue.\n\nThankyou so much. Anything would be helpful.\n\nThanks & Regards,\nGautham", "msg_date": "Fri, 28 Apr 2023 21:33:26 +0530", "msg_from": "Gautham Raj <gauthamrajsunny@gmail.com>", "msg_from_op": true, "msg_subject": "Postgres Version want to update from 9.2 to 9.5 version in CentOS 7.9" }, { "msg_contents": "Gautham Raj <gauthamrajsunny@gmail.com> writes:\n> *Problem: Having multiple versions of Postgres installed in CentOS 7. I\n> Want to set the 9.5 version as default. Not able to access Postgres 9.5\n> through the terminal as well.*\n\n> 1. For Command *psql --version* I'm getting 9.5 as the version.\n> 2. For Command *sudo -u postgres psql *I'm getting 9.2 as the version.\n\nYou'd need to read up on Red Hat's \"SCL\" packaging system to understand\nhow to use that rh-postgresql95 package. 
SCL is since my time there,\nbut I'm pretty sure it's intentional that it's not in the default PATH.\n\nBut TBH, all the versions available from Red Hat for RHEL7/CentOS7 are\nout of support as far as the upstream community is concerned; to us it's\npretty mystifying why you'd be trying to migrate to an already-years-dead\nrelease branch. I'd suggest getting some more modern release from\nhttps://www.postgresql.org/download/\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Apr 2023 12:38:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres Version want to update from 9.2 to 9.5 version in CentOS\n 7.9" }, { "msg_contents": "Thank you for the quick response.\n\nYes, I'm willing to get the latest version. I read some articles CentOS 7\ndoesn't support the latest versions. So was trying the old versions.\n\nI tried the article shared but, got the below error at the step.\n\n[image: image.png]\nSomething is wrong here.\n\nPlease suggest the steps for resolving this issue.\n\nThanks & Regards,\nGautham\n\nOn Fri, Apr 28, 2023 at 10:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Gautham Raj <gauthamrajsunny@gmail.com> writes:\n> > *Problem: Having multiple versions of Postgres installed in CentOS 7. I\n> > Want to set the 9.5 version as default. Not able to access Postgres 9.5\n> > through the terminal as well.*\n>\n> > 1. For Command *psql --version* I'm getting 9.5 as the version.\n> > 2. For Command *sudo -u postgres psql *I'm getting 9.2 as the version.\n>\n> You'd need to read up on Red Hat's \"SCL\" packaging system to understand\n> how to use that rh-postgresql95 package. 
SCL is since my time there,\n> but I'm pretty sure it's intentional that it's not in the default PATH.\n>\n> But TBH, all the versions available from Red Hat for RHEL7/CentOS7 are\n> out of support as far as the upstream community is concerned; to us it's\n> pretty mystifying why you'd be trying to migrate to an already-years-dead\n> release branch. I'd suggest getting some more modern release from\n> https://www.postgresql.org/download/\n>\n> regards, tom lane\n>", "msg_date": "Fri, 28 Apr 2023 23:40:09 +0530", "msg_from": "Gautham Raj <gauthamrajsunny@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres Version want to update from 9.2 to 9.5 version in CentOS\n 7.9" }, { "msg_contents": "On Fri, Apr 28, 2023 at 12:10 PM Gautham Raj <gauthamrajsunny@gmail.com>\nwrote:\n\n> Thank you for the quick response.\n>\n> Yes, I'm willing to get the latest version. I read some articles CentOS 7\n> doesn't support the latest versions. So was trying the old versions.\n>\n> I tried the article shared but, got the below error at the step.\n>\n\nYou have bigger problems to solve with your setup. 
Basic programs are\nfailing to run, so there are other things that need fixing with your\npackage manager that are completely unrelated to PostgreSQL.\n\nAlthough Tom kindly responded to your initial question, this list\n(-hackers) is the wrong place to be asking that type of question, as its\npurpose is to discuss the *development* of PostgreSQL, not installation and\nupgrade issues.\n\nThere are many other places better suited to your question, including the\npgsql-general list [1], IRC channel [2] and Slack channel [3].\n\n-Roberto\n\n[1] https://www.postgresql.org/list/pgsql-general/\n[2] https://www.postgresql.org/community/irc/\n[3]\nhttps://join.slack.com/t/postgresteam/shared_invite/zt-1qj14i9sj-E9WqIFlvcOiHsEk2yFEMjA", "msg_date": "Fri, 28 Apr 2023 12:22:34 -0600", "msg_from": "Roberto Mello <roberto.mello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Version want to update from 9.2 to 9.5 version in CentOS\n 7.9" } ]
[ { "msg_contents": "I had noticed that performance wasn't great when using the @> or <@ \noperators when examining if an element is contained in a range.\nBased on the discussion in [1] I would like to suggest the following \nchanges:\n\nThis patch attempts to improve the row estimation, as well as opening \nthe possibility of using a btree index scan when using the containment \noperators.\n\nThis is done via a new support function handling the following 2 requests:\n\n* SupportRequestIndexCondition\nfind_index_quals will build an operator clause, given at least one \nfinite RangeBound.\n\n* SupportRequestSimplify\nfind_simplified_clause will rewrite the containment operator into a \nclause using inequality operators from the btree family (if available \nfor the element type).\n\nA boolean constant is returned if the range is either empty or has no \nbounds.\n\nPerforming the rewrite here lets the clausesel machinery provide the \nsame estimates as for normal scalar inequalities.\n\nIn both cases build_bound_expr is used to build the operator clauses \nfrom RangeBounds.\n\nThanks to Laurenz Albe for giving the patch a look before submission.\n\n[1] \nhttps://www.postgresql.org/message-id/222c75fd-43b8-db3e-74a6-bb4fe22f76db@kimmet.dk\n\n\tRegards,\n\t\tKim Johan Andersson", "msg_date": "Sat, 29 Apr 2023 17:07:19 +0200", "msg_from": "Kim Johan Andersson <kimjand@kimmet.dk>", "msg_from_op": true, "msg_subject": "[PATCH] Add support function for containment operators" }, { "msg_contents": "Any thoughts on this change?", "msg_date": "Thu, 06 Jul 2023 12:51:29 +0000", "msg_from": "Kim Johan Andersson <kimjand@kimmet.dk>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "On Sat, 2023-04-29 at 17:07 +0200, Kim Johan Andersson wrote:\n> I had noticed that performance wasn't great when using the @> or <@ \n> operators when examining if an element is contained in a range.\n> Based on the discussion in 
[1] I would like to suggest the following \n> changes:\n> \n> This patch attempts to improve the row estimation, as well as opening \n> the possibility of using a btree index scan when using the containment \n> operators.\n> \n> This is done via a new support function handling the following 2 requests:\n> \n> * SupportRequestIndexCondition\n> find_index_quals will build an operator clause, given at least one \n> finite RangeBound.\n> \n> * SupportRequestSimplify\n> find_simplified_clause will rewrite the containment operator into a \n> clause using inequality operators from the btree family (if available \n> for the element type).\n> \n> A boolean constant is returned if the range is either empty or has no \n> bounds.\n> \n> Performing the rewrite here lets the clausesel machinery provide the \n> same estimates as for normal scalar inequalities.\n> \n> In both cases build_bound_expr is used to build the operator clauses \n> from RangeBounds.\n\nI think that this is a small, but useful improvement.\n\nThe patch applies and builds without warning and passes \"make installcheck-world\"\nwith the (ample) new regression tests.\n\nSome comments:\n\n- About the regression tests:\n You are using EXPLAIN (ANALYZE, SUMMARY OFF, TIMING OFF, COSTS OFF).\n While that returns stable results, I don't see the added value.\n I think you should use EXPLAIN (COSTS OFF). You don't need to test the\n actual number of result rows; we can trust that the executor processes\n >= and < correctly.\n Plain EXPLAIN would speed up the regression tests, which is a good thing.\n\n- About the implementation:\n You implement both \"SupportRequestIndexCondition\" and \"SupportRequestSimplify\",\n but when I experimented, the former was never called. That does not\n surprise me, since any expression of the shape \"expr <@ range constant\"\n can be simplified. 
Is the \"SupportRequestIndexCondition\" branch dead code?\n If not, do you have an example that triggers it?\n\n- About the code:\n\n +static Node *\n +find_index_quals(Const *rangeConst, Expr *otherExpr, Oid opfamily)\n +{\n [...]\n +\n + if (!(lower.infinite && upper.infinite))\n + {\n [...]\n + }\n +\n + return NULL;\n\n To avoid deep indentation and to make the code more readable, I think\n it would be better to write\n\n if (!(lower.infinite && upper.infinite))\n return NULL;\n\n and unindent the rest of the code\n\n\n +static Node *\n +match_support_request(Node *rawreq)\n +{\n [...]\n + switch (req->funcid)\n + {\n + case F_ELEM_CONTAINED_BY_RANGE:\n [...]\n + case F_RANGE_CONTAINS_ELEM:\n [...]\n + default:\n + return NULL;\n + }\n\n (This code appears twice.)\n\n The default clause should not be reachable, right?\n I think that there should be an Assert() to verify that.\n Perhaps something like\n\n Assert(req->funcid == F_ELEM_CONTAINED_BY_RANGE ||\n req->funcid == F_RANGE_CONTAINS_ELEM);\n\n if (req->funcid == F_ELEM_CONTAINED_BY_RANGE)\n {\n [...]\n }\n else if (req->funcid == F_RANGE_CONTAINS_ELEM)\n {\n [...]\n }\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 06 Jul 2023 18:15:56 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "> On Sat, 2023-04-29 at 17:07 +0200, Kim Johan Andersson wrote:\n> > I had noticed that performance wasn't great when using the @> or <@ \n> > operators when examining if an element is contained in a range.\n> > Based on the discussion in [1] I would like to suggest the following \n> > changes:\n> > \n> > This patch attempts to improve the row estimation, as well as opening \n> > the possibility of using a btree index scan when using the containment \n> > operators.\n\nI managed to break the patch:\n\nCREATE DATABASE czech ICU_LOCALE \"cs-CZ\" LOCALE \"cs_CZ.utf8\" TEMPLATE template0;\n\n\\c 
czech\n\nCREATE TYPE textrange AS RANGE (SUBTYPE = text, SUBTYPE_OPCLASS = text_pattern_ops);\n\nCREATE TABLE tx (t text);\n\nINSERT INTO tx VALUES ('a'), ('c'), ('d'), ('ch');\n\nEXPLAIN SELECT * FROM tx WHERE t <@ textrange('a', 'd');\n\n QUERY PLAN \n════════════════════════════════════════════════════\n Seq Scan on tx (cost=0.00..30.40 rows=7 width=32)\n Filter: ((t >= 'a'::text) AND (t < 'd'::text))\n(2 rows)\n\nSELECT * FROM tx WHERE t <@ textrange('a', 'd');\nERROR: could not determine which collation to use for string comparison\nHINT: Use the COLLATE clause to set the collation explicitly.\n\n\nThe replacement operators are wrong; it should be ~>=~ and ~<~ .\n\nAlso, there should be no error message.\nThe result should be 'a', 'c' and 'ch'.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 06 Jul 2023 20:19:07 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "I wrote:\n> You implement both \"SupportRequestIndexCondition\" and \"SupportRequestSimplify\",\n> but when I experimented, the former was never called.  That does not\n> surprise me, since any expression of the shape \"expr <@ range constant\"\n> can be simplified.  Is the \"SupportRequestIndexCondition\" branch dead code?\n> If not, do you have an example that triggers it?\n\nI had an idea about this:\nSo far, you only consider constant ranges. 
But if we have a STABLE range\nexpression, you could use an index scan for \"expr <@ range\", for example\nIndex Cond (expr >= lower(range) AND expr < upper(range)).\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 07 Jul 2023 13:20:41 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "On 07-07-2023 13:20, Laurenz Albe wrote:\n> I wrote:\n>> You implement both \"SupportRequestIndexCondition\" and \"SupportRequestSimplify\",\n>> but when I experimented, the former was never called.  That does not\n>> surprise me, since any expression of the shape \"expr <@ range constant\"\n>> can be simplified.  Is the \"SupportRequestIndexCondition\" branch dead code?\n>> If not, do you have an example that triggers it?\n\nI would think it is dead code, I came to the same conclusion. So we can \ndrop SupportRequestIndexCondition, since the simplification happens to \ntake care of everything.\n\n\n> I had an idea about this:\n> So far, you only consider constant ranges. But if we have a STABLE range\n> expression, you could use an index scan for \"expr <@ range\", for example\n> Index Cond (expr >= lower(range) AND expr < upper(range)).\n> \n\nI will try to look into this. 
Originally that was what I was hoping for, \nbut didn't see a way of going about it.\n\nThanks for your comments, I will also look at the locale-related \nbreakage you spotted.\n\n\tRegards,\n\t\tKimjand\n\n\n", "msg_date": "Sat, 8 Jul 2023 08:11:10 +0200", "msg_from": "Kim Johan Andersson <kimjand@kimmet.dk>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "On Sat, 2023-07-08 at 08:11 +0200, Kim Johan Andersson wrote:\n> On 07-07-2023 13:20, Laurenz Albe wrote:\n> > I wrote:\n> > > You implement both \"SupportRequestIndexCondition\" and \"SupportRequestSimplify\",\n> > > but when I experimented, the former was never called.  That does not\n> > > surprise me, since any expression of the shape \"expr <@ range constant\"\n> > > can be simplified.  Is the \"SupportRequestIndexCondition\" branch dead code?\n> > > If not, do you have an example that triggers it?\n> \n> I would think it is dead code, I came to the same conclusion. So we can \n> drop SupportRequestIndexCondition, since the simplification happens to \n> take care of everything.\n> \n> \n> > I had an idea about this:\n> > So far, you only consider constant ranges.  But if we have a STABLE range\n> > expression, you could use an index scan for \"expr <@ range\", for example\n> > Index Cond (expr >= lower(range) AND expr < upper(range)).\n> > \n> \n> I will try to look into this. 
Originally that was what I was hoping for, \n> but didn't see way of going about it.\n> \n> Thanks for your comments, I will also look at the locale-related \n> breakage you spotted.\n\nI have marked the patch as \"returned with feedback\".\n\nI encourage you to submit an improved version in a future commitfest!\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 01 Aug 2023 04:07:24 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "On Tue, Aug 1, 2023 at 10:07 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> >\n> >\n> > > I had an idea about this:\n> > > So far, you only consider constant ranges. But if we have a STABLE range\n> > > expression, you could use an index scan for \"expr <@ range\", for example\n> > > Index Cond (expr >= lower(range) AND expr < upper(range)).\n> > >\n\nThe above part, not sure how to implement it, not sure it is doable.\n\nRefactor:\ndrop SupportRequestIndexCondition and related code, since mentioned in\nupthread, it's dead code.\nrefactor the regression test. (since data types with infinity cover\nmore cases than int4range, so I deleted some tests).\n\nnow there are 3 helper functions (build_bound_expr,\nfind_simplified_clause, match_support_request), 2 entry functions\n(elem_contained_by_range_support, range_contains_elem_support)\n\nCollation problem seems solved. Putting the following test on the\nsrc/test/regress/sql/rangetypes.sql will not work. 
Maybe because of\nthe order of the regression test, in SQL-ASCII encoding, I cannot\ncreate collation=\"cs-CZ-x-icu\".\n\ndrop type if EXISTS textrange1, textrange2;\ndrop table if EXISTS collate_test1, collate_test2;\nCREATE TYPE textrange1 AS RANGE (SUBTYPE = text, collation=\"C\");\ncreate type textrange2 as range(subtype=text, collation=\"cs-CZ-x-icu\");\nCREATE TABLE collate_test1 (b text COLLATE \"en-x-icu\" NOT NULL);\nINSERT INTO collate_test1(b) VALUES ('a'), ('c'), ('d'), ('ch');\nCREATE TABLE collate_test2 (b text NOT NULL);\nINSERT INTO collate_test2(b) VALUES ('a'), ('c'), ('d'), ('ch');\n\n--should include 'ch'\nSELECT * FROM collate_test1 WHERE b <@ textrange1('a', 'd');\n--should not include 'ch'\nSELECT * FROM collate_test1 WHERE b <@ textrange2('a', 'd');\n--should include 'ch'\nSELECT * FROM collate_test2 WHERE b <@ textrange1('a', 'd');\n--should not include 'ch'\nSELECT * FROM collate_test2 WHERE b <@ textrange2('a', 'd');\n-----------------", "msg_date": "Fri, 13 Oct 2023 14:26:01 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "On Fri, 2023-10-13 at 14:26 +0800, jian he wrote:\n> Collation problem seems solved.\n\nI didn't review your patch in detail, there is still a problem\nwith my example:\n\n CREATE TYPE textrange AS RANGE (\n SUBTYPE = text,\n SUBTYPE_OPCLASS = text_pattern_ops\n );\n\n CREATE TABLE tx (t text COLLATE \"cs-CZ-x-icu\");\n\n INSERT INTO tx VALUES ('a'), ('c'), ('d'), ('ch');\n\n SELECT * FROM tx WHERE t <@ textrange('a', 'd');\n\n t \n ════\n a\n c\n ch\n (3 rows)\n\nThat was correct.\n\n EXPLAIN SELECT * FROM tx WHERE t <@ textrange('a', 'd');\n\n QUERY PLAN \n ════════════════════════════════════════════════════\n Seq Scan on tx (cost=0.00..30.40 rows=7 width=32)\n Filter: ((t >= 'a'::text) AND (t < 'd'::text))\n (2 rows)\n\nBut that was weird. The operators seem wrong. 
Look at that\nquery:\n\n SELECT * FROM tx WHERE t >= 'a' AND t < 'd';\n\n t \n ═══\n a\n c\n (2 rows)\n\nBut the execution plan is identical...\n\nI am not sure what is the problem here, but in my opinion the\noperators shown in the execution plan should be like this:\n\n SELECT * FROM tx WHERE t ~>=~ 'a' AND t ~<~ 'd';\n\n t \n ════\n a\n c\n ch\n (3 rows)\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 19 Oct 2023 18:01:40 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "On Fri, Oct 20, 2023 at 12:01 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Fri, 2023-10-13 at 14:26 +0800, jian he wrote:\n> > Collation problem seems solved.\n>\n> I didn't review your patch in detail, there is still a problem\n> with my example:\n>\n> CREATE TYPE textrange AS RANGE (\n> SUBTYPE = text,\n> SUBTYPE_OPCLASS = text_pattern_ops\n> );\n>\n> CREATE TABLE tx (t text COLLATE \"cs-CZ-x-icu\");\n>\n> INSERT INTO tx VALUES ('a'), ('c'), ('d'), ('ch');\n>\n> SELECT * FROM tx WHERE t <@ textrange('a', 'd');\n>\n> t\n> ════\n> a\n> c\n> ch\n> (3 rows)\n>\n> That was correct.\n>\n> EXPLAIN SELECT * FROM tx WHERE t <@ textrange('a', 'd');\n>\n> QUERY PLAN\n> ════════════════════════════════════════════════════\n> Seq Scan on tx (cost=0.00..30.40 rows=7 width=32)\n> Filter: ((t >= 'a'::text) AND (t < 'd'::text))\n> (2 rows)\n>\n> But that was weird. The operators seem wrong. Look at that\n\nThanks for pointing this out!\n\nThe problem is that TypeCacheEntry->rngelemtype typcaheentry don't\nhave the range's SUBTYPE_OPCLASS info.\nSo in find_simplified_clause, we need to get the range's\nSUBTYPE_OPCLASS from the pg_catalog table.\nAlso in pg_range, column rngsubopc is not null. 
so this should be fine.", "msg_date": "Fri, 20 Oct 2023 16:24:24 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "On Fri, 2023-10-20 at 16:24 +0800, jian he wrote:\n> [new patch]\n\nThanks, that patch works as expected and passes regression tests.\n\nSome comments about the code:\n\n> --- a/src/backend/utils/adt/rangetypes.c\n> +++ b/src/backend/utils/adt/rangetypes.c\n> @@ -558,7 +570,6 @@ elem_contained_by_range(PG_FUNCTION_ARGS)\n> \tPG_RETURN_BOOL(range_contains_elem_internal(typcache, r, val));\n> }\n> \n> -\n> /* range, range -> bool functions */\n> \n> /* equality (internal version) */\n\nPlease don't change unrelated whitespace.\n\n> +static Node *\n> +find_simplified_clause(Const *rangeConst, Expr *otherExpr)\n> +{\n> +\tForm_pg_range pg_range;\n> +\tHeapTuple\ttup;\n> +\tOid\t\t\topclassOid;\n> +\tRangeBound\tlower;\n> +\tRangeBound\tupper;\n> +\tbool\t\tempty;\n> +\tOid\t\t\trng_collation;\n> +\tTypeCacheEntry *elemTypcache;\n> +\tOid\t\t\topfamily =\tInvalidOid;\n> +\n> +\tRangeType *range = DatumGetRangeTypeP(rangeConst->constvalue);\n> +\tTypeCacheEntry *rangetypcache = lookup_type_cache(RangeTypeGetOid(range), TYPECACHE_RANGE_INFO);\n> +\t{\n\nThis brace is unnecessary. 
Perhaps a leftover from a removed conditional statement.\n\n> +\t\t/* this part is get the range's SUBTYPE_OPCLASS from pg_range catalog.\n> +\t\t * Refer load_rangetype_info function last line.\n> +\t\t * TypeCacheEntry->rngelemtype typcaheenetry either don't have opclass entry or with default opclass.\n> +\t\t * Range's subtype opclass only in catalog table.\n> +\t\t*/\n\nThe comments in the patch need some more love.\nApart from the language, you should have a look at the style guide:\n\n- single-line comments should start with lower case and have no period:\n\n /* example of a single-line comment */\n\n- Multi-line comments should start with /* on its own line and end with */ on its\n own line. They should use whole sentences:\n\n /*\n * In case a comment spans several lines, it should look like\n * this. Try not to exceed 80 characters.\n */\n\n> +\t\ttup = SearchSysCache1(RANGETYPE, ObjectIdGetDatum(RangeTypeGetOid(range)));\n> +\n> +\t\t/* should not fail, since we already checked typtype ... */\n> +\t\tif (!HeapTupleIsValid(tup))\n> +\t\t\telog(ERROR, \"cache lookup failed for range type %u\", RangeTypeGetOid(range));\n\nIf this is a \"can't happen\" case, it should be an Assert.\n\n> +\n> +\t\tpg_range = (Form_pg_range) GETSTRUCT(tup);\n> +\n> +\t\topclassOid = pg_range->rngsubopc;\n> +\n> +\t\tReleaseSysCache(tup);\n> +\n> +\t\t/* get opclass properties and look up the comparison function */\n> +\t\topfamily = get_opclass_family(opclassOid);\n> +\t}\n> +\n> +\trange_deserialize(rangetypcache, range, &lower, &upper, &empty);\n> +\trng_collation = rangetypcache->rng_collation;\n> +\n> +\tif (empty)\n> +\t{\n> +\t\t/* If the range is empty, then there can be no matches. */\n> +\t\treturn makeBoolConst(false, false);\n> +\t}\n> +\telse if (lower.infinite && upper.infinite)\n> +\t{\n> +\t\t/* The range has no bounds, so matches everything. 
*/\n> +\t\treturn makeBoolConst(true, false);\n> +\t}\n> +\telse\n> +\t{\n\nMany of the variables declared at the beginning of the function are only used in\nthis branch. You should declare them here.\n\n> +static Node *\n> +match_support_request(Node *rawreq)\n> +{\n> +\tif (IsA(rawreq, SupportRequestSimplify))\n> +\t{\n\nTo keep the indentation shallow, the preferred style is:\n\n if (/* case we don't handle */)\n return NULL;\n /* proceed without indentation */\n\n> +\t\tSupportRequestSimplify *req = (SupportRequestSimplify *) rawreq;\n> +\t\tFuncExpr *fexpr = req->fcall;\n> +\t\tNode\t *leftop;\n> +\t\tNode\t *rightop;\n> +\t\tConst\t *rangeConst;\n> +\t\tExpr\t *otherExpr;\n> +\n> +\t\tAssert(list_length(fexpr->args) == 2);\n> +\n> +\t\tleftop = linitial(fexpr->args);\n> +\t\trightop = lsecond(fexpr->args);\n> +\n> +\t\tswitch (fexpr->funcid)\n> +\t\t{\n> +\t\t\tcase F_ELEM_CONTAINED_BY_RANGE:\n> +\t\t\t\tif (!IsA(rightop, Const) || ((Const *) rightop)->constisnull)\n> +\t\t\t\t\treturn NULL;\n> +\n> +\t\t\t\trangeConst = (Const *) rightop;\n> +\t\t\t\totherExpr = (Expr *) leftop;\n> +\t\t\t\tbreak;\n> +\n> +\t\t\tcase F_RANGE_CONTAINS_ELEM:\n> +\t\t\t\tif (!IsA(leftop, Const) || ((Const *) leftop)->constisnull)\n> +\t\t\t\t\treturn NULL;\n> +\n> +\t\t\t\trangeConst = (Const *) leftop;\n> +\t\t\t\totherExpr = (Expr *) rightop;\n> +\t\t\t\tbreak;\n> +\n> +\t\t\tdefault:\n> +\t\t\t\treturn NULL;\n> +\t\t}\n> +\n> +\t\treturn find_simplified_clause(rangeConst, otherExpr);\n> +\t}\n> +\treturn NULL;\n> +}\n> \\ No newline at end of file\n\nYou are calling this funtion from both \"elem_contained_by_range_support\" and\n\"range_contains_elem_support\", only to branch based on the function type.\nI think the code would be simpler if you did away with \"match_support_request\" at all.\n\n\nI adjusted your patch according to my comments; what do you think?\n\nI also went over the regression tests. 
I did away with the comparison function, instead\nI used examples that don't return too many rows. I cut down on the test cases a little\nbit. I added a test that uses the \"text_pattern_ops\" operator class.\n\nYours,\nLaurenz Albe", "msg_date": "Sun, 12 Nov 2023 18:15:35 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "On Sun, 2023-11-12 at 18:15 +0100, Laurenz Albe wrote:\n> I adjusted your patch according to my comments; what do you think?\n\nI have added the patch to the January commitfest, with Jian and Kim as authors.\nI hope that is OK with you.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Sun, 12 Nov 2023 20:20:03 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "On 12-11-2023 20:20, Laurenz Albe wrote:\n> On Sun, 2023-11-12 at 18:15 +0100, Laurenz Albe wrote:\n>> I adjusted your patch according to my comments; what do you think?\n> \n> I have added the patch to the January commitfest, with Jian and Kim as authors.\n> I hope that is OK with you.\n\nSounds great to me. 
Thanks to Jian for picking this up.\n\n\tRegards,\n\t\tKim Johan Andersson\n\n\n\n\n", "msg_date": "Sun, 12 Nov 2023 21:30:33 +0100", "msg_from": "Kim Johan Andersson <kimjand@kimmet.dk>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "fix a typo and also did a minor change.\n\nfrom\n+ if (lowerExpr != NULL && upperExpr != NULL)\n+ return (Node *) makeBoolExpr(AND_EXPR, list_make2(lowerExpr, upperExpr),\n-1);\n+ else if (lowerExpr != NULL)\n+ return (Node *) lowerExpr;\n+ else if (upperExpr != NULL)\n+ return (Node *) upperExpr;\n\nto\n\n+ if (lowerExpr != NULL && upperExpr != NULL)\n+ return (Node *) makeBoolExpr(AND_EXPR, list_make2(lowerExpr, upperExpr),\n-1);\n+ else if (lowerExpr != NULL)\n+ return (Node *) lowerExpr;\n+ else if (upperExpr != NULL)\n+ return (Node *) upperExpr;\n+ else\n+ {\n+ Assert(false);\n+ return NULL;\n+ }\n\nbecause cfbot says:\n\n15:04:38.116] make -s -j${BUILD_JOBS} clean\n[15:04:38.510] time make -s -j${BUILD_JOBS} world-bin\n[15:04:43.272] rangetypes.c:2908:1: error: non-void function does not\nreturn a value in all control paths [-Werror,-Wreturn-type]\n[15:04:43.272] }\n[15:04:43.272] ^\n[15:04:43.272] 1 error generated.\n\nalso add some commit messages, I hope it will be useful.", "msg_date": "Sat, 16 Dec 2023 21:03:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "jian he <jian.universality@gmail.com> writes:\n> [ v5-0001-Simplify-containment-in-range-constants-with-supp.patch ]\n\nI spent some time reviewing and cleaning up this code. The major\nproblem I noted was that it doesn't spend any effort thinking about\ncases where it'd be unsafe or undesirable to apply the transformation.\nIn particular, it's entirely uncool to produce a double-sided\ncomparison if the elemExpr is volatile. 
These two expressions\ndo not have the same result:\n\nselect random() <@ float8range(0.1, 0.2);\nselect random() >= 0.1 and random() < 0.2;\n\n(Yes, I'm aware that BETWEEN is broken in this respect. All the\nmore reason why we mustn't break <@.)\n\nAnother problem is that even if the elemExpr isn't volatile,\nit might be expensive enough that evaluating it twice is bad.\nI am not sure where we ought to put the cutoff. There are some\nexisting places where we set a 10X cpu_operator_cost limit on\ncomparable transformations, so I stole that logic in the attached.\nBut perhaps someone has an argument for a different rule?\n\nAnyway, pending discussion of that point, I think the code is good\nto go. I don't like the test cases much though: they expend many more\ncycles than necessary. You could prove the same points just by\nlooking at the expansion of expressions, eg.\n\nregression=# explain (verbose, costs off) select current_date <@ daterange(null,null);\n QUERY PLAN \n----------------\n Result\n Output: true\n(2 rows)\n\nregression=# explain (verbose, costs off) select current_date <@ daterange('-Infinity', '1997-04-10'::date, '[)');\n QUERY PLAN \n-----------------------------------------------------------------------------------------\n Result\n Output: ((CURRENT_DATE >= '-infinity'::date) AND (CURRENT_DATE < '1997-04-10'::date))\n(2 rows)\n\nI'd suggest losing the temp table and just coding tests like these.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 16 Jan 2024 16:46:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "On Wed, Jan 17, 2024 at 5:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> But perhaps someone has an argument for a different rule?\n>\n> Anyway, pending discussion of that point, I think the code is good\n> to go. I don't like the test cases much though: they expend many more\n> cycles than necessary. 
You could prove the same points just by\n> looking at the expansion of expressions, eg.\n>\n\nyour patch is far better!\n\nIMHO, worried about the support function, the transformed plan\ngenerates the wrong result,\nso we add the tests to make it bullet proof.\nNow I see your point. If the transformed plan is right, the whole\nadded code should be fine.\nbut keeping the textrange_supp related test should be a good idea.\nsince we don't have SUBTYPE_OPCLASS related sql tests.\n\n\n", "msg_date": "Wed, 17 Jan 2024 12:39:36 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "jian he <jian.universality@gmail.com> writes:\n> Now I see your point. If the transformed plan is right, the whole\n> added code should be fine.\n> but keeping the textrange_supp related test should be a good idea.\n> since we don't have SUBTYPE_OPCLASS related sql tests.\n\nYeah, it's a little harder to make a table-less test for that case.\nI thought about using current_user or the like as a stable comparison\nvalue, but that introduces some doubt about what the collation would\nbe. That test seems cheap enough as-is, since it's handling only a\ntiny amount of data.\n\nCommitted.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Jan 2024 14:01:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" }, { "msg_contents": "On Sun, 21 Jan 2024 at 00:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> jian he <jian.universality@gmail.com> writes:\n> > Now I see your point. 
If the transformed plan is right, the whole\n> > added code should be fine.\n> > but keeping the textrange_supp related test should be a good idea.\n> > since we don't have SUBTYPE_OPCLASS related sql tests.\n>\n> Yeah, it's a little harder to make a table-less test for that case.\n> I thought about using current_user or the like as a stable comparison\n> value, but that introduces some doubt about what the collation would\n> be. That test seems cheap enough as-is, since it's handling only a\n> tiny amount of data.\n>\n> Committed.\n\nI have updated the commitfest entry to Committed as the patch is committed.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 26 Jan 2024 18:35:56 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support function for containment operators" } ]
[ { "msg_contents": "Hi!\nWe are developing a GiST index structure. We use M-tree for this work. We have a problem with it. We would like to get some closest neighbors. In my select we order the query by distance, and we use limit. Sometimes it works well, but in some cases I don’t get back the closest points. If we set the recheck flag to true, the query results will be correct.\nI examined the CUBE extension and realized the recheck flag is false. According to the comments, it’s not necessary to set the recheck to true in CUBE. What conditions must be met to work well even with recheck=false? How is it working in CUBE?\nThanks for your replies.", "msg_date": "Mon, 1 May 2023 21:13:54 +0000", "msg_from": "=?windows-1250?Q?Kotrocz=F3_Roland?= <kotroczo.roland@inf.elte.hu>", "msg_from_op": true, "msg_subject": "Order problem in GiST index" } ]
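For context on the recheck question above: in a GiST distance (ORDER BY) scan, rows can only be returned in the right order without rechecking if the distance the index reports is exact. cube can report the exact minimum distance from the query point to each bounding box, which is why it can leave recheck off; an M-tree style distance that is only a lower bound must set the recheck flag so the executor re-sorts fetched rows by their real distance. A tiny standalone C sketch (toy names and numbers, not PostgreSQL code) of why a lower-bound distance misorders results without recheck:

```c
#include <assert.h>

/* Two toy leaf items and a query point.  true_dist[i] is the real
 * distance of row i from the query; index_dist[i] is what the index's
 * distance function reports.  As with an M-tree lower bound,
 * index_dist[i] <= true_dist[i], but they are not equal. */
static const double true_dist[2] = {5.0, 2.0};
static const double index_dist[2] = {1.0, 2.0};

/* recheck = false: trust the index ordering and return the first item. */
int nearest_without_recheck(void)
{
    return (index_dist[0] <= index_dist[1]) ? 0 : 1;
}

/* recheck = true: keep fetching while a pending lower bound could still
 * beat the best exact distance seen so far, then return the row whose
 * real distance is smallest. */
int nearest_with_recheck(void)
{
    int first = nearest_without_recheck();
    int second = 1 - first;
    int best = first;

    if (index_dist[second] <= true_dist[best] &&
        true_dist[second] < true_dist[best])
        best = second;
    return best;
}

/* Exact minimum distance from point x to the interval [lo, hi]; this is
 * the kind of distance cube computes for a box, and because it is
 * exact, ordering by it never needs a recheck. */
double dist_point_to_interval(double x, double lo, double hi)
{
    if (x < lo)
        return lo - x;
    if (x > hi)
        return x - hi;
    return 0.0;
}
```

Here nearest_without_recheck() returns item 0 (its reported bound is smallest) even though item 1 is really closer; nearest_with_recheck() returns the true nearest. In PostgreSQL terms, the second behaviour corresponds to setting the recheck flag in the distance support function, which is what the question above observed fixes the M-tree results.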
[ { "msg_contents": "Big PostgreSQL databases use and regularly open/close huge numbers of\nfile descriptors and directory entries for various anachronistic\nreasons, one of which is the 1GB RELSEG_SIZE thing. The segment\nmanagement code is trickier than you might think and also still\nharbours known bugs.\n\nA nearby analysis of yet another obscure segment life cycle bug\nreminded me of this patch set to switch to simple large files and\neventually drop all that. I originally meant to develop the attached\nsketch-quality code further and try proposing it in the 16 cycle,\nwhile I was down the modernisation rabbit hole[1], but then I got side\ntracked: at some point I believed that the 56 bit relfilenode thing\nmight be necessary for correctness, but then I found a set of rules\nthat seem to hold up without that. I figured I might as well post\nwhat I have early in the 17 cycle as a \"concept\" patch to see which\nway the flames blow.\n\nThere are various boring details due to Windows, and then a load of\nfairly obvious changes, and then a whole can of worms about how we'd\nhandle the transition for the world's fleet of existing databases.\nI'll cut straight to that part. Different choices on aggressiveness\ncould be made, but here are the straw-man answers I came up with so\nfar:\n\n1. All new relations would be in large format only. No 16384.N\nfiles, just 16384 that can grow to MaxBlockNumber * BLCKSZ.\n\n2. The existence of a file 16384.1 means that this smgr relation is\nin legacy segmented format that came from pg_upgrade (note that we\ndon't unlink that file once it exists, even when truncating the fork,\nuntil we eventually drop the relation).\n\n3. Forks that were pg_upgrade'd from earlier releases using hard\nlinks or reflinks would implicitly be in large format if they only had\none segment, and otherwise they could stay in the traditional format\nfor a grace period of N major releases, after which we'd plan to drop\nsegment support. 
pg_upgrade's [ref]link mode would therefore be the\nonly way to get a segmented relation, other than a developer-only\ntrick for testing/debugging.\n\n4. Every opportunity to convert a multi-segment fork to large format\nwould be taken: pg_upgrade in copy mode, basebackup, COPY DATABASE,\nVACUUM FULL, TRUNCATE, etc. You can see approximately working sketch\nversions of all the cases I thought of so far in the attached.\n\n5. The main places that do file-level copying of relations would use\ncopy_file_range() to do the splicing, so that on file systems that are\nsmart enough (XFS, ZFS, BTRFS, ...) with qualifying source and\ndestination, the operation can be very fast, and other degrees of\noptimisation are available to the kernel too even for file systems\nwithout block sharing magic (pushing down block range copies to\nhardware/network storage, etc). The copy_file_range() stuff could\nalso be proposed independently (I vaguely recall it was discussed a\nfew times before), it's just that it really comes into its own when\nyou start splicing files together, as needed here, and it's also been\nadopted by FreeBSD with the same interface as Linux and has an\nefficient implementation in bleeding edge ZFS there.\n\nStepping back, the main ideas are: (1) for some users of large\ndatabases, it would be painlessly done at upgrade time without even\nreally noticing, using modern file system facilities where possible\nfor speed; (2) for anyone who wants to defer that because of lack of\nfast copy_file_range() and a desire to avoid prolonged downtime by\nusing links or reflinks, concatenation can be put off for the next N\nreleases, giving a total of 5 + N years of option to defer the work,\nand in that case there are also many ways to proactively change to\nlarge format before the time comes with varying degrees of granularity\nand disruption. 
For example, set up a new replica and fail over, or\nVACUUM FULL tables one at a time, etc.\n\nThere are plenty of things left to do in this patch set: pg_rewind\ndoesn't understand optional segmentation yet, there are probably more\nthings like that, and I expect there are some ssize_t vs pgoff_t\nconfusions I missed that could bite a 32 bit system. But you can see\nthe basics working on a typical system.\n\nI am not aware of any modern/non-historic filesystem[2] that can't do\nlarge files with ease. Anyone know of anything to worry about on that\nfront? I think the main collateral damage would be weird old external\ntools like some weird old version of Windows tar I occasionally see\nmentioned, that sort of thing, but that'd just be another case of\n\"well don't use that then\", I guess? What else might we need to think\nabout, outside PostgreSQL?\n\nWhat other problems might occur inside PostgreSQL? Clearly we'd need\nto figure out a decent strategy to automate testing of all of the\nrelevant transitions. We could test the splicing code paths with an\noptional test suite that you might enable along with a small segment\nsize (as we're already testing on CI and probably BF after the last\nround of segmentation bugs). To test the messy Windows off_t API\nstuff convincingly, we'd need actual > 4GB files, I think? 
Maybe\ndoable cheaply with file system hole punching tricks.\n\nSpeaking of file system holes, this patch set doesn't touch buffile.c.\nThat code wants to use segments for two extra purposes: (1) parallel\ncreate index merges workers' output using segmentation tricks as if\nthere were holes in the file; this could perhaps be replaced with\nlarge files that make use of actual OS-level holes but I didn't feel\nlike additionally claiming that all computers have sparse files --\nperhaps another approach is needed anyway; (2) buffile.c deliberately\nspreads large buffiles around across multiple temporary tablespaces\nusing segments supposedly for space management reasons. So although\nit initially looks like a nice safe little place to start using large\nfiles, we'd need an answer to those design choices first.\n\n/me dons flameproof suit and goes back to working on LLVM problems for a while\n\n[1] https://wiki.postgresql.org/wiki/AllComputers\n[2] https://en.wikipedia.org/wiki/Comparison_of_file_systems", "msg_date": "Tue, 2 May 2023 13:28:58 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Large files for relations" }, { "msg_contents": "Hi\n\nI like this patch - it can save some system resources - I am not sure how\nmuch, because bigger tables usually use partitioning.\n\nImportant note - this feature breaks sharing files on the backup side - so\nbefore disabling 1GB sized files, this issue should be solved.\n\nRegards\n\nPavel", "msg_date": "Tue, 2 May 2023 05:27:38 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Tue, May 2, 
2023 at 3:28 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I like this patch - it can save some system sources - I am not sure how much, because bigger tables usually use partitioning usually.\n\nYeah, if you only use partitions of < 1GB it won't make a difference.\nLarger partitions are not uncommon, though.\n\n> Important note - this feature breaks sharing files on the backup side - so before disabling 1GB sized files, this issue should be solved.\n\nHmm, right, so there is a backup granularity continuum with \"whole\ndatabase cluster\" at one end, \"only files whose size, mtime [or\noptionally also checksum] changed since last backup\" in the middle,\nand \"only blocks that changed since LSN of last backup\" at the other\nend. Getting closer to the right end of that continuum can make\nbackups require less reading, less network transfer, less writing\nand/or less storage space depending on details. But this proposal\nmoves the middle thing further to the left by changing the granularity\nfrom 1GB to whole relation, which can be gargantuan with this patch.\nUltimately we need to be all the way at the right on that continuum,\nand there are clearly several people working on that goal.\n\nI'm not involved in any of those projects, but it's fun to think about\nan alien technology that produces complete standalone backups like\nrsync --link-dest (as opposed to \"full\" backups followed by a chain of\n\"incremental\" backups that depend on it so you need to retain them\ncarefully) while still sharing disk blocks with older backups, and\ndoing so with block granularity. 
TL;DW something something WAL\nsomething something copy_file_range().\n\n\n", "msg_date": "Wed, 3 May 2023 17:21:14 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Wed, May 3, 2023 at 5:21 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> rsync --link-dest\n\nI wonder if rsync will grow a mode that can use copy_file_range() to\nshare blocks with a reference file (= previous backup). Something\nlike --copy-range-dest. That'd work for large-file relations\n(assuming a file system that has block sharing, like XFS and ZFS).\nYou wouldn't get the \"mtime is enough, I don't even need to read the\nbytes\" optimisation, which I assume makes all database hackers feel a\nbit queasy anyway, but you'd get the space savings via the usual\nrolling checksum or a cheaper version that only looks for strong\nchecksum matches at the same offset, or whatever other tricks rsync\nmight have up its sleeve.\n\n\n", "msg_date": "Wed, 3 May 2023 17:36:42 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Wed, May 3, 2023 at 1:37 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Wed, May 3, 2023 at 5:21 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > rsync --link-dest\n>\n> I wonder if rsync will grow a mode that can use copy_file_range() to\n> share blocks with a reference file (= previous backup). Something\n> like --copy-range-dest. 
That'd work for large-file relations\n(assuming a file system that has block sharing, like XFS and ZFS).\nYou wouldn't get the \"mtime is enough, I don't even need to read the\nbytes\" optimisation, which I assume makes all database hackers feel a\nbit queasy anyway, but you'd get the space savings via the usual\nrolling checksum or a cheaper version that only looks for strong\nchecksum matches at the same offset, or whatever other tricks rsync\nmight have up its sleeve.\n\n\n", "msg_date": "Wed, 3 May 2023 17:36:42 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Wed, May 3, 2023 at 1:37 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Wed, May 3, 2023 at 5:21 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > rsync --link-dest\n>\n> I wonder if rsync will grow a mode that can use copy_file_range() to\n> share blocks with a reference file (= previous backup).  Something\n> like --copy-range-dest.  That'd work for large-file relations\n> (assuming a file system that has block sharing, like XFS and ZFS).\n> You wouldn't get the \"mtime is enough, I don't even need to read the\n> bytes\" optimisation, which I assume makes all database hackers feel a\n> bit queasy anyway, but you'd get the space savings via the usual\n> rolling checksum or a cheaper version that only looks for strong\n> checksum matches at the same offset, or whatever other tricks rsync\n> might have up its sleeve.\n>\n\nI understand the need to reduce open file handles, despite the\npossibilities enabled by using large numbers of small files.\nSnowflake, for instance, sees everything in 1MB chunks, which makes\nmassively parallel sequential scans (Snowflake's _only_ query plan)\npossible, though I don't know if they accomplish that via separate files,\nor via segments within a large file.\n\nI am curious whether a move like this to create a generational change in\nfile format shouldn't be more ambitious, perhaps altering the block\nformat to insert a block format version number, whether that be at every\nblock, or every megabyte, or some other interval, and whether we store it\nin-file or in a separate file to accompany the first non-segmented. Having\nsuch versioning information would allow blocks of different formats to\nco-exist in the same table, which could be critical to future changes such\nas 64 bit XIDs, etc.", "msg_date": "Tue, 9 May 2023 16:52:49 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "Greetings,\n\n* Corey Huinker (corey.huinker@gmail.com) wrote:\n> On Wed, May 3, 2023 at 1:37 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Wed, May 3, 2023 at 5:21 PM Thomas Munro <thomas.munro@gmail.com>\n> > wrote:\n> > > rsync --link-dest\n\n... 
rsync isn't really a safe tool to use for PG backups by itself\nunless you're using it with archiving and with start/stop backup and\nwith checksums enabled.\n\n> > I wonder if rsync will grow a mode that can use copy_file_range() to\n> > share blocks with a reference file (= previous backup). Something\n> > like --copy-range-dest. That'd work for large-file relations\n> > (assuming a file system that has block sharing, like XFS and ZFS).\n> > You wouldn't get the \"mtime is enough, I don't even need to read the\n> > bytes\" optimisation, which I assume makes all database hackers feel a\n> > bit queasy anyway, but you'd get the space savings via the usual\n> > rolling checksum or a cheaper version that only looks for strong\n> > checksum matches at the same offset, or whatever other tricks rsync\n> > might have up its sleeve.\n\nThere are also really good reasons to have multiple full backups and not\njust a single full backup and then lots and lots of incrementals, which\nbasically boils down to \"are you really sure that one copy of that one\nreally important file won't ever disappear from your backup\nrepository..?\"\n\nThat said, pgbackrest does now have block-level incremental backups\n(where we define our own block size ...) and there are reasons we decided\nagainst going down the LSN-based approach (not the least of which is\nthat the LSN isn't always updated...), but long story short, moving to\nlarger than 1G files should be something that pgbackrest will be able\nto handle without as much impact as there would have been previously in\nterms of incremental backups. 
There is a loss in the ability to use\nmtime to scan just the parts of the relation that changed and that's\nunfortunate but I wouldn't see it as really a game changer (and yes,\nthere's certainly an argument for not trusting mtime, though I don't\nthink we've yet had a report where there was an mtime issue that our\nmtime-validity checking didn't catch and force pgbackrest into\nchecksum-based revalidation automatically which resulted in an invalid\nbackup... of course, not enough people test their backups...).\n\n> I understand the need to reduce open file handles, despite the\n> possibilities enabled by using large numbers of small file sizes.\n\nI'm also generally in favor of reducing the number of open file handles\nthat we have to deal with. Addressing the concerns raised nearby about\nweird corner-cases of non-1G length ABCDEF.1 files existing while\nABCDEF.2, and more, files exist is certainly another good argument in\nfavor of getting rid of segments.\n\n> I am curious whether a move like this to create a generational change in\n> file file format shouldn't be more ambitious, perhaps altering the block\n> format to insert a block format version number, whether that be at every\n> block, or every megabyte, or some other interval, and whether we store it\n> in-file or in a separate file to accompany the first non-segmented. 
Having\n> such versioning information would allow blocks of different formats to\n> co-exist in the same table, which could be critical to future changes such\n> as 64 bit XIDs, etc.\n\nTo the extent you're interested in this, there are patches posted which\nare already trying to move us in a direction that would allow for\ndifferent page formats that add in space for other features such as\n64bit XIDs, better checksums, and TDE tags to be supported.\n\nhttps://commitfest.postgresql.org/43/3986/\n\nCurrently those patches are expecting it to be declared at initdb time,\nbut the way they're currently written that's more of a soft requirement\nas you can tell on a per-page basis what features are enabled for that\npage.  Might make sense to support it in that form first anyway though,\nbefore going down the more ambitious route of allowing different pages\nto have different sets of features enabled for them concurrently.\n\nWhen it comes to 'a separate file', we do have forks already and those\nserve a very valuable but distinct use-case where you can get\ninformation from the much smaller fork (be it the FSM or the VM or some\nfuture thing) while something like 64bit XIDs or a stronger checksum is\nsomething you'd really need on every page. I have serious doubts about\na proposal where we'd store information needed on every page read in\nsome far away block that's still in the same file such as using\nsomething every 1MB as that would turn every block access into two..\n\nThanks,\n\nStephen", "msg_date": "Tue, 9 May 2023 17:53:15 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Mon, May 1, 2023 at 9:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n>\n> I am not aware of any modern/non-historic filesystem[2] that can't do\n> large files with ease. 
Anyone know of anything to worry about on that\n> front?\n\n\nThere is some trouble in the ambiguity of what we mean by \"modern\" and\n\"large files\". There are still a large number of users of ext4 where the\nmax file size is 16TB. Switching to a single large file per relation would\neffectively cut the max table size in half for those users. How would a\nuser with say a 20TB table running on ext4 be impacted by this change?", "msg_date": "Thu, 11 May 2023 16:16:37 -0400", "msg_from": "Jim Mlodgenski <jimmy76@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Fri, May 12, 2023 at 8:16 AM Jim Mlodgenski <jimmy76@gmail.com> wrote:\n> On Mon, May 1, 2023 at 9:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> I am not aware of any modern/non-historic filesystem[2] that can't do\n>> large files with ease. Anyone know of anything to worry about on that\n>> front?\n>\n> There is some trouble in the ambiguity of what we mean by \"modern\" and \"large files\". There are still a large number of users of ext4 where the max file size is 16TB. Switching to a single large file per relation would effectively cut the max table size in half for those users. How would a user with say a 20TB table running on ext4 be impacted by this change?\n\nHrmph. Yeah, that might be a bit of a problem. 
I see it discussed in\nvarious places that MySQL/InnoDB can't have tables bigger than 16TB on\next4 because of this, when it's in its default one-file-per-object\nmode (as opposed to its big-tablespace-files-to-hold-all-the-objects\nmode like DB2, Oracle etc, in which case I think you can have multiple\n16TB segment files and get past that ext4 limit). It's frustrating\nbecause 16TB is still really, really big and you probably should be\nusing partitions, or more partitions, to avoid all kinds of other\nscalability problems at that size. But however hypothetical the\nscenario might be, it should work, and this is certainly a plausible\nargument against the \"aggressive\" plan described above with the hard\ncut-off where we get to drop the segmented mode.\n\nConcretely, a 20TB pg_upgrade in copy mode would fail while trying to\nconcatenate with the above patches, so you'd have to use link or\nreflink mode (you'd probably want to use that anyway unless due to\nsheer volume of data to copy otherwise, since ext4 is also not capable\nof block-range sharing), but then you'd be out of luck after N future\nmajor releases, according to that plan where we start deleting the\ncode, so you'd need to organise some smaller partitions before that\ntime comes. Or pg_upgrade to a target on xfs etc. I wonder if a\nfuture version of extN will increase its max file size.\n\nA less aggressive version of the plan would be that we just keep the\nsegment code for the foreseeable future with no planned cut off, and\nwe make all of those \"piggy back\" transformations that I showed in the\npatch set optional. For example, I had it so that CLUSTER would\nquietly convert your relation to large format, if it was still in\nsegmented format (might as well if you're writing all the data out\nanyway, right?), but perhaps that could depend on a GUC. Likewise for\nbase backup. Etc. Then someone concerned about hitting the 16TB\nlimit on ext4 could opt out. Or something like that. 
It seems funny\nthough, that's exactly the user who should want this feature (they\nhave 16,000 relation segment files).\n\n\n", "msg_date": "Fri, 12 May 2023 11:37:33 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n\n> On Fri, May 12, 2023 at 8:16 AM Jim Mlodgenski <jimmy76@gmail.com> wrote:\n>> On Mon, May 1, 2023 at 9:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>>> I am not aware of any modern/non-historic filesystem[2] that can't do\n>>> large files with ease. Anyone know of anything to worry about on that\n>>> front?\n>>\n>> There is some trouble in the ambiguity of what we mean by \"modern\" and\n>> \"large files\". There are still a large number of users of ext4 where\n>> the max file size is 16TB. Switching to a single large file per\n>> relation would effectively cut the max table size in half for those\n>> users. How would a user with say a 20TB table running on ext4 be\n>> impacted by this change?\n[…]\n> A less aggressive version of the plan would be that we just keep the\n> segment code for the foreseeable future with no planned cut off, and\n> we make all of those \"piggy back\" transformations that I showed in the\n> patch set optional. For example, I had it so that CLUSTER would\n> quietly convert your relation to large format, if it was still in\n> segmented format (might as well if you're writing all the data out\n> anyway, right?), but perhaps that could depend on a GUC. Likewise for\n> base backup. Etc. Then someone concerned about hitting the 16TB\n> limit on ext4 could opt out. Or something like that. 
It seems funny\n> though, that's exactly the user who should want this feature (they\n> have 16,000 relation segment files).\n\nIf we're going to have to keep the segment code for the foreseeable\nfuture anyway, could we not get most of the benefit by increasing the\nsegment size to something like 1TB? The vast majority of tables would\nfit in one file, and there would be less risk of hitting filesystem\nlimits.\n\n- ilmari\n\n\n", "msg_date": "Fri, 12 May 2023 12:38:28 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Thu, May 11, 2023 at 7:38 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Fri, May 12, 2023 at 8:16 AM Jim Mlodgenski <jimmy76@gmail.com> wrote:\n> > On Mon, May 1, 2023 at 9:29 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> >> I am not aware of any modern/non-historic filesystem[2] that can't do\n> >> large files with ease. Anyone know of anything to worry about on that\n> >> front?\n> >\n> > There is some trouble in the ambiguity of what we mean by \"modern\" and\n> \"large files\". There are still a large number of users of ext4 where the\n> max file size is 16TB. Switching to a single large file per relation would\n> effectively cut the max table size in half for those users. How would a\n> user with say a 20TB table running on ext4 be impacted by this change?\n>\n> Hrmph. Yeah, that might be a bit of a problem. I see it discussed in\n> various places that MySQL/InnoDB can't have tables bigger than 16TB on\n> ext4 because of this, when it's in its default one-file-per-object\n> mode (as opposed to its big-tablespace-files-to-hold-all-the-objects\n> mode like DB2, Oracle etc, in which case I think you can have multiple\n> 16TB segment files and get past that ext4 limit). 
It's frustrating\n> because 16TB is still really, really big and you probably should be\n> using partitions, or more partitions, to avoid all kinds of other\n> scalability problems at that size. But however hypothetical the\n> scenario might be, it should work,\n>\n\nAgreed, it is frustrating, but it is not hypothetical. I have seen a number\nof\nusers having single tables larger than 16TB and don't use partitioning\nbecause\nof the limitations we have today. The most common reason is needing multiple\nunique constraints on the table that don't include the partition key.\nSomething\nlike a user_id and email. There are workarounds for those cases, but usually\nit's easier to deal with a single large table than to deal with the sharp\nedges\nthose workarounds introduce.", "msg_date": "Fri, 12 May 2023 09:30:57 -0400", "msg_from": "Jim Mlodgenski <jimmy76@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "Greetings,\n\n* Dagfinn Ilmari Mannsåker (ilmari@ilmari.org) wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Fri, May 12, 2023 at 8:16 AM Jim Mlodgenski <jimmy76@gmail.com> wrote:\n> >> On Mon, May 1, 2023 at 9:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >>> I am not aware of any modern/non-historic filesystem[2] that can't do\n> >>> large files with ease. Anyone know of anything to worry about on that\n> >>> front?\n> >>\n> >> There is some trouble in the ambiguity of what we mean by \"modern\" and\n> >> \"large files\". There are still a large number of users of ext4 where\n> >> the max file size is 16TB. Switching to a single large file per\n> >> relation would effectively cut the max table size in half for those\n> >> users. 
How would a user with say a 20TB table running on ext4 be\n> >> impacted by this change?\n> […]\n> > A less aggressive version of the plan would be that we just keep the\n> > segment code for the foreseeable future with no planned cut off, and\n> > we make all of those \"piggy back\" transformations that I showed in the\n> > patch set optional. For example, I had it so that CLUSTER would\n> > quietly convert your relation to large format, if it was still in\n> > segmented format (might as well if you're writing all the data out\n> > anyway, right?), but perhaps that could depend on a GUC. Likewise for\n> > base backup. Etc. Then someone concerned about hitting the 16TB\n> > limit on ext4 could opt out. Or something like that. It seems funny\n> > though, that's exactly the user who should want this feature (they\n> > have 16,000 relation segment files).\n> \n> If we're going to have to keep the segment code for the foreseeable\n> future anyway, could we not get most of the benefit by increasing the\n> segment size to something like 1TB? The vast majority of tables would\n> fit in one file, and there would be less risk of hitting filesystem\n> limits.\n\nWhile I tend to agree that 1GB is too small, 1TB seems like it's\npossibly going to end up on the too big side of things, or at least,\nif we aren't getting rid of the segment code then it's possibly throwing\naway the benefits we have from the smaller segments without really\ngiving us all that much. Going from 1G to 10G would reduce the number\nof open file descriptors by quite a lot without having much of a net\nchange on other things. 
50G or 100G would reduce the FD handles further\nbut starts to make us lose out a bit more on some of the nice parts of\nhaving multiple segments.\n\nJust some thoughts.\n\nThanks,\n\nStephen", "msg_date": "Fri, 12 May 2023 09:53:32 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "Repeating what was mentioned on Twitter, because I had some experience with\nthe topic. With fewer files per table there will be more contention on the\nper-inode mutex (which might now be the per-inode rwsem). I haven't read\nfilesystem source in a long time. Back in the day, and perhaps today, it\nwas locked for the duration of a write to storage (locked within the\nkernel) and was briefly locked while setting up a read.\n\nThe workaround for writes was one of:\n1) enable disk write cache or use battery-backed HW RAID to make writes\nfaster (yes disks, I encountered this prior to 2010)\n2) use XFS and O_DIRECT in which case the per-inode mutex (rwsem) wasn't\nlocked for the duration of a write\n\nI have a vague memory that filesystems have improved in this regard.\n\n\nOn Thu, May 11, 2023 at 4:38 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Fri, May 12, 2023 at 8:16 AM Jim Mlodgenski <jimmy76@gmail.com> wrote:\n> > On Mon, May 1, 2023 at 9:29 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> >> I am not aware of any modern/non-historic filesystem[2] that can't do\n> >> large files with ease. Anyone know of anything to worry about on that\n> >> front?\n> >\n> > There is some trouble in the ambiguity of what we mean by \"modern\" and\n> \"large files\". There are still a large number of users of ext4 where the\n> max file size is 16TB. Switching to a single large file per relation would\n> effectively cut the max table size in half for those users. How would a\n> user with say a 20TB table running on ext4 be impacted by this change?\n>\n> Hrmph. 
Yeah, that might be a bit of a problem. I see it discussed in\n> various places that MySQL/InnoDB can't have tables bigger than 16TB on\n> ext4 because of this, when it's in its default one-file-per-object\n> mode (as opposed to its big-tablespace-files-to-hold-all-the-objects\n> mode like DB2, Oracle etc, in which case I think you can have multiple\n> 16TB segment files and get past that ext4 limit). It's frustrating\n> because 16TB is still really, really big and you probably should be\n> using partitions, or more partitions, to avoid all kinds of other\n> scalability problems at that size. But however hypothetical the\n> scenario might be, it should work, and this is certainly a plausible\n> argument against the \"aggressive\" plan described above with the hard\n> cut-off where we get to drop the segmented mode.\n>\n> Concretely, a 20TB pg_upgrade in copy mode would fail while trying to\n> concatenate with the above patches, so you'd have to use link or\n> reflink mode (you'd probably want to use that anyway unless due to\n> sheer volume of data to copy otherwise, since ext4 is also not capable\n> of block-range sharing), but then you'd be out of luck after N future\n> major releases, according to that plan where we start deleting the\n> code, so you'd need to organise some smaller partitions before that\n> time comes. Or pg_upgrade to a target on xfs etc. I wonder if a\n> future version of extN will increase its max file size.\n>\n> A less aggressive version of the plan would be that we just keep the\n> segment code for the foreseeable future with no planned cut off, and\n> we make all of those \"piggy back\" transformations that I showed in the\n> patch set optional. For example, I had it so that CLUSTER would\n> quietly convert your relation to large format, if it was still in\n> segmented format (might as well if you're writing all the data out\n> anyway, right?), but perhaps that could depend on a GUC. Likewise for\n> base backup. Etc. 
Then someone concerned about hitting the 16TB\n> limit on ext4 could opt out. Or something like that. It seems funny\n> though, that's exactly the user who should want this feature (they\n> have 16,000 relation segment files).\n>\n>\n>\n\n-- \nMark Callaghan\nmdcallag@gmail.com", "msg_date": "Fri, 12 May 2023 09:41:33 -0700", "msg_from": "MARK CALLAGHAN <mdcallag@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Sat, May 13, 2023 at 4:41 AM MARK CALLAGHAN <mdcallag@gmail.com> wrote:\n> Repeating what was mentioned on Twitter, because I had some experience with the topic. With fewer files per table there will be more contention on the per-inode mutex (which might now be the per-inode rwsem). I haven't read filesystem source in a long time. Back in the day, and perhaps today, it was locked for the duration of a write to storage (locked within the kernel) and was briefly locked while setting up a read.\n>\n> The workaround for writes was one of:\n> 1) enable disk write cache or use battery-backed HW RAID to make writes faster (yes disks, I encountered this prior to 2010)\n> 2) use XFS and O_DIRECT in which case the per-inode mutex (rwsem) wasn't locked for the duration of a write\n>\n> I have a vague memory that filesystems have improved in this regard.\n\n(I am interpreting your \"use XFS\" to mean \"use XFS instead of ext4\".)\n\nRight, 80s file systems like UFS (and I suspect ext and ext2, which\nwere probably based on similar ideas and ran on non-SMP machines?)\nused coarse grained locking including vnodes/inodes level. Then over\ntime various OSes and file systems have improved concurrency. Brief\ndigression, as someone who got started on IRIX in the '90s and still\nthinks those were probably the coolest computers: At SGI, first they\nreplaced SysV UFS with EFS (E for extent-based allocation) and\ninvented O_DIRECT to skip the buffer pool, and then blew the doors off\neverything with XFS, which maximised I/O concurrency and possibly (I\nguess, it's not open source so who knows?) 
involved a revamped VFS to\nlower stuff like inode locks, motivated by monster IRIX boxes with up\nto 1024 CPUs and huge storage arrays. In the Linux ext3 era, I\nremember hearing lots of reports of various kinds of large systems\ngoing faster just by switching to XFS and there is lots of writing\nabout that. ext4 certainly changed enormously. One reason back in\nthose days (mid 2000s?) was the old\nfsync-actually-fsyncs-everything-in-the-known-universe-and-not-just-your-file\nthing, and another was the lack of write concurrency especially for\ndirect I/O, and probably lots more things. But that's all ancient\nhistory...\n\nAs for ext4, we've detected and debugged clues about the gradual\nweakening of locking over time on this list: we know that concurrent\nread/write to the same page of a file was previously atomic, but when\nwe switched to pread/pwrite for most data (ie not making use of the\ncurrent file position), it ceased to be (a concurrent reader can see a\nmash-up of old and new data with visible cache line-ish stripes in it,\nso there isn't even a write-lock for the page); then we noticed that\nin later kernels even read/write ceased to be atomic (implicating a\nchange in file size/file position interlocking, I guess). I also\nvaguely recall reading on here a long time ago that lseek()\nperformance was dramatically improved with weaker inode interlocking,\nperhaps even in response to this very program's pathological SEEK_END\ncall frequency (something I hope to fix, but I digress). So I think\nit's possible that the effect you mentioned is gone?\n\nI can think of a few differences compared to those other RDBMSs.\nThere the discussion was about one-file-per-relation vs\none-big-file-for-everything, whereas we're talking about\none-file-per-relation vs many-files-per-relation (which doesn't change\nthe point much, just making clear that I'm not proposing a 42PB file\nto hold everything, so you can still partition to get different\nfiles). 
We also usually call fsync in series in our checkpointer\n(after first getting the writebacks started with sync_file_range()\nsome time sooner). Currently our code believes that it is not safe to\ncall fdatasync() for files whose size might have changed. There is no\nbasis for that in POSIX or in any system that I currently know of\n(though I haven't looked into it seriously), but I believe there was a\nhistorical file system that at some point in history interpreted\n\"non-essential meta data\" (the stuff POSIX allows it not to flush to\ndisk) to include \"the size of the file\" (whereas POSIX really just\nmeant that you don't have to synchronise the mtime and similar), which\nis probably why PostgreSQL has some code that calls fsync() on newly\ncreated empty WAL segments to \"make sure the indirect blocks are down\non disk\" before allowing itself to use only fdatasync() later to\noverwrite it with data. The point being that, for the most important\nkind of interactive/user facing I/O latency, namely WAL flushes, we\nalready use fdatasync(). It's possible that we could use it to flush\nrelation data too (ie the relation files in question here, usually\nsynchronised by the checkpointer) according to POSIX but it doesn't\nimmediately seem like something that should be at all hot and it's\nbackground work. But perhaps I lack imagination.\n\nThanks, thought-provoking stuff.\n\n\n", "msg_date": "Sat, 13 May 2023 11:01:49 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Sat, May 13, 2023 at 11:01 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, May 13, 2023 at 4:41 AM MARK CALLAGHAN <mdcallag@gmail.com> wrote:\n> > use XFS and O_DIRECT\n\nAs for direct I/O, we're only just getting started on that. 
We\ncurrently can't produce more than one concurrent WAL write, and then\nfor relation data, we just got very basic direct I/O support but we\nhaven't yet got the asynchronous machinery to drive it properly (work\nin progress, more soon). I was just now trying to find out what the\nstate of parallel direct writes is in ext4, and it looks like it's\nfinally happening:\n\nhttps://www.phoronix.com/news/Linux-6.3-EXT4\n\n\n", "msg_date": "Sat, 13 May 2023 12:48:49 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Fri, May 12, 2023 at 4:02 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Sat, May 13, 2023 at 4:41 AM MARK CALLAGHAN <mdcallag@gmail.com> wrote:\n> > Repeating what was mentioned on Twitter, because I had some experience\n> with the topic. With fewer files per table there will be more contention on\n> the per-inode mutex (which might now be the per-inode rwsem). I haven't\n> read filesystem source in a long time. Back in the day, and perhaps today,\n> it was locked for the duration of a write to storage (locked within the\n> kernel) and was briefly locked while setting up a read.\n> >\n> > The workaround for writes was one of:\n> > 1) enable disk write cache or use battery-backed HW RAID to make writes\n> faster (yes disks, I encountered this prior to 2010)\n> > 2) use XFS and O_DIRECT in which case the per-inode mutex (rwsem) wasn't\n> locked for the duration of a write\n> >\n> > I have a vague memory that filesystems have improved in this regard.\n>\n> (I am interpreting your \"use XFS\" to mean \"use XFS instead of ext4\".)\n>\n\nYes, although when the decision was made it was probably ext-3 -> XFS. We\nsuffered from fsync a file == fsync the filesystem\nbecause MySQL binlogs use buffered IO and are appended on write. Switching\nfrom ext-? to XFS was an easy perf win\nso I don't have much experience with ext-? 
over the past decade.\n\n\n> Right, 80s file systems like UFS (and I suspect ext and ext2, which\n>\n\nLate 80s is when I last hacked on Unix fileys code, excluding browsing XFS\nand ext source. Unix was easy back then -- one big kernel lock covers\neverything.\n\n\n> some time sooner). Currently our code believes that it is not safe to\n> call fdatasync() for files whose size might have changed. There is no\n>\n\nLong ago we added code for InnoDB to avoid fsync/fdatasync in some cases\nwhen O_DIRECT was used. While great for performance\nwe also forgot to make sure they were still done when files were extended.\nEventually we fixed that.\n\nThanks for all of the details.\n\n-- \nMark Callaghan\nmdcallag@gmail.com", "msg_date": "Mon, 15 May 2023 09:43:17 -0700", "msg_from": "MARK CALLAGHAN <mdcallag@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" },
{ "msg_contents": "On Fri, May 12, 2023 at 9:53 AM Stephen Frost <sfrost@snowman.net> wrote:\n> While I tend to agree that 1GB is too small, 1TB seems like it's\n> possibly going to end up on the too big side of things, or at least,\n> if we aren't getting rid of the segment code then it's possibly throwing\n> away the benefits we have from the smaller segments without really\n> giving us all that much. Going from 1G to 10G would reduce the number\n> of open file descriptors by quite a lot without having much of a net\n> change on other things. 50G or 100G would reduce the FD handles further\n> but starts to make us lose out a bit more on some of the nice parts of\n> having multiple segments.\n\nThis is my view as well, more or less. I don't really like our current\nhandling of relation segments; we know it has bugs, and making it\nnon-buggy feels difficult. 
And there are performance issues as well --\nfile descriptor consumption, for sure, but also probably that crossing\na file boundary likely breaks the operating system's ability to do\nreadahead to some degree. However, I think we're going to find that\nmoving to a system where we have just one file per relation fork and\nthat file can be arbitrarily large is not fantastic, either. Jim's\npoint about running into filesystem limits is a good one (hi Jim, long\ntime no see!) and the problem he points out with ext4 is almost\ncertainly not the only one. It doesn't just have to be filesystems,\neither. It could be a limitation of an archiving tool (tar, zip, cpio)\nor a file copy utility or whatever as well. A quick Google search\nsuggests that most such things have been updated to use 64-bit sizes,\nbut my point is that the set of things that can potentially cause\nproblems is broader than just the filesystem. Furthermore, even when\nthere's no hard limit at play, a smaller file size can occasionally be\n*convenient*, as in Pavel's example of using hard links to share\nstorage between backups. From that point of view, a 16GB or 64GB or\n256GB file size limit seems more convenient than no limit and more\nconvenient than a large limit like 1TB.\n\nHowever, the bugs are the flies in the ointment (ahem). If we just\nmake the segment size bigger but don't get rid of segments altogether,\nthen we still have to fix the bugs that can occur when you do have\nmultiple segments. I think part of Thomas's motivation is to dodge\nthat whole category of problems. If we gradually deprecate\nmulti-segment mode in favor of single-file-per-relation-fork, then the\nfact that the segment handling code has bugs becomes progressively\nless relevant. While that does make some sense, I'm not sure I really\nagree with the approach. 
The problem is that we're trading problems\nthat we at least theoretically can fix somehow by hitting our code\nwith a big enough hammer for an unknown set of problems that stem from\nlimitations of software we don't control, maybe don't even know about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 May 2023 13:55:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "Thanks all for the feedback. It was a nice idea and it *almost*\nworks, but it seems like we just can't drop segmented mode. And the\nautomatic transition schemes I showed don't make much sense without\nthat goal.\n\nWhat I'm hearing is that something simple like this might be more acceptable:\n\n* initdb --rel-segsize (cf --wal-segsize), default unchanged\n* pg_upgrade would convert if source and target don't match\n\nI would probably also leave out those Windows file API changes, too.\n--rel-segsize would simply refuse larger sizes until someone does the\nwork on that platform, to keep the initial proposal small.\n\nI would probably leave the experimental copy_on_write() ideas out too,\nfor separate discussion in a separate proposal.\n\n\n", "msg_date": "Wed, 24 May 2023 12:34:27 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On 24.05.23 02:34, Thomas Munro wrote:\n> Thanks all for the feedback. It was a nice idea and it *almost*\n> works, but it seems like we just can't drop segmented mode. 
And the\n> automatic transition schemes I showed don't make much sense without\n> that goal.\n> \n> What I'm hearing is that something simple like this might be more acceptable:\n> \n> * initdb --rel-segsize (cf --wal-segsize), default unchanged\n\nmakes sense\n\n> * pg_upgrade would convert if source and target don't match\n\nThis would be good, but it could also be an optional or later feature.\n\nMaybe that should be a different mode, like \n--copy-and-adjust-as-necessary, so that users would have to opt into \nwhat would presumably be slower than plain --copy, rather than being \nsurprised by it, if they unwittingly used incompatible initdb options.\n\n> I would probably also leave out those Windows file API changes, too.\n> --rel-segsize would simply refuse larger sizes until someone does the\n> work on that platform, to keep the initial proposal small.\n\nThose changes from off_t to pgoff_t? Yes, it would be good to do \nwithout those. Apart of the practical problems that have been brought \nup, this was a major annoyance with the proposed patch set IMO.\n\n> I would probably leave the experimental copy_on_write() ideas out too,\n> for separate discussion in a separate proposal.\n\nright\n\n\n\n", "msg_date": "Wed, 24 May 2023 08:18:13 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Wed, May 24, 2023 at 2:18 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> > What I'm hearing is that something simple like this might be more acceptable:\n> >\n> > * initdb --rel-segsize (cf --wal-segsize), default unchanged\n>\n> makes sense\n\n+1.\n\n> > * pg_upgrade would convert if source and target don't match\n>\n> This would be good, but it could also be an optional or later feature.\n\n+1. I think that would be nice to have, but not absolutely required.\n\nIMHO it's best not to overcomplicate these projects. 
Not everything\nneeds to be part of the initial commit. If the initial commit happens\n2 months from now and then stuff like this gets added over the next 8,\nthat's strictly better than trying to land the whole patch set next\nMarch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 May 2023 08:33:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@enterprisedb.com) wrote:\n> On 24.05.23 02:34, Thomas Munro wrote:\n> > Thanks all for the feedback. It was a nice idea and it *almost*\n> > works, but it seems like we just can't drop segmented mode. And the\n> > automatic transition schemes I showed don't make much sense without\n> > that goal.\n> > \n> > What I'm hearing is that something simple like this might be more acceptable:\n> > \n> > * initdb --rel-segsize (cf --wal-segsize), default unchanged\n> \n> makes sense\n\nAgreed, this seems alright in general. Having more initdb-time options\nto help with certain use-cases rather than having things be compile-time\nis definitely just generally speaking a good direction to be going in,\nimv.\n\n> > * pg_upgrade would convert if source and target don't match\n> \n> This would be good, but it could also be an optional or later feature.\n\nAgreed.\n\n> Maybe that should be a different mode, like --copy-and-adjust-as-necessary,\n> so that users would have to opt into what would presumably be slower than\n> plain --copy, rather than being surprised by it, if they unwittingly used\n> incompatible initdb options.\n\nI'm curious as to why it would be slower than a regular copy..?\n\n> > I would probably also leave out those Windows file API changes, too.\n> > --rel-segsize would simply refuse larger sizes until someone does the\n> > work on that platform, to keep the initial proposal small.\n> \n> Those changes from off_t to pgoff_t? 
Yes, it would be good to do without\n> those. Apart of the practical problems that have been brought up, this was\n> a major annoyance with the proposed patch set IMO.\n> \n> > I would probably leave the experimental copy_on_write() ideas out too,\n> > for separate discussion in a separate proposal.\n> \n> right\n\nYou mean copy_file_range() here, right?\n\nShouldn't we just add support for that today into pg_upgrade,\nindependently of this? Seems like a worthwhile improvement even without\nthe benefit it would provide to changing segment sizes.\n\nThanks,\n\nStephen", "msg_date": "Thu, 25 May 2023 13:08:40 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Thu, May 25, 2023 at 1:08 PM Stephen Frost <sfrost@snowman.net> wrote:\n> * Peter Eisentraut (peter.eisentraut@enterprisedb.com) wrote:\n> > On 24.05.23 02:34, Thomas Munro wrote:\n> > > * pg_upgrade would convert if source and target don't match\n> >\n> > This would be good, but it could also be an optional or later feature.\n>\n> Agreed.\n\nOK. I do have a patch for that, but I'll put that (+ copy_file_range)\naside for now so we can talk about the basic feature. Without that,\npg_upgrade just rejects mismatching clusters as it always did, no\nchange required.\n\n> > > I would probably also leave out those Windows file API changes, too.\n> > > --rel-segsize would simply refuse larger sizes until someone does the\n> > > work on that platform, to keep the initial proposal small.\n> >\n> > Those changes from off_t to pgoff_t? Yes, it would be good to do without\n> > those. Apart of the practical problems that have been brought up, this was\n> > a major annoyance with the proposed patch set IMO.\n\n+1, it was not nice.\n\nAlright, since I had some time to kill in an airport, here is a\nstarter patch for initdb --rel-segsize. 
Some random thoughts:\n\nAnother potential option name would be --segsize, if we think we're\ngoing to use this for temp files too eventually.\n\nMaybe it's not so beautiful to have that global variable\nrel_segment_size (which replaces REL_SEGSIZE everywhere). Another\nidea would be to make it static in md.c and call smgrsetsegmentsize(),\nor something like that. That could be a nice place to compute the\n\"shift\" value up front, instead of computing it each time in\nblockno_to_segno(), but that's probably not worth bothering with (?).\nBSR/LZCNT/CLZ instructions are pretty fast on modern chips. That's\nabout the only place where someone could say that this change makes\nthings worse for people not interested in the new feature, so I was\ncareful to get rid of / and % operations with no-longer-constant RHS.\n\nI had to promote segment size to int64 (global variable, field in\ncontrol file), because otherwise it couldn't represent\n--rel-segsize=32TB (it'd be too big by one). Other ideas would be to\nstore the shift value instead of the size, or store the max block\nnumber, eg subtract one, or use InvalidBlockNumber to mean \"no limit\"\n(with more branches to test for it). The only problem I ran into with\nthe larger type was that 'SHOW segment_size' now needs a custom show\nfunction because we don't have int64 GUCs.\n\nA C type confusion problem that I noticed: some code uses BlockNumber\nand some code uses int for segment numbers. It's not really a\nreachable problem for practical reasons (you'd need over 2 billion\ndirectories and VFDs to reach it), but it's wrong to use int if\nsegment size can be set as low as BLCKSZ (one file per block); you\ncould have more segments than an int can represent. We could go for\nuint32, BlockNumber or create SegmentNumber (which I think I've\nproposed before, and lost track of...). 
We can address that\nseparately (perhaps by finding my old patch...)", "msg_date": "Sun, 28 May 2023 02:48:56 -0400", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Sun, May 28, 2023 at 2:48 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> (you'd need over 2 billion\n> directories ...\n\ndirectory *entries* (segment files), I meant to write there.\n\n\n", "msg_date": "Sun, 28 May 2023 03:07:34 -0400", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On 28.05.23 02:48, Thomas Munro wrote:\n> Another potential option name would be --segsize, if we think we're\n> going to use this for temp files too eventually.\n> \n> Maybe it's not so beautiful to have that global variable\n> rel_segment_size (which replaces REL_SEGSIZE everywhere). Another\n> idea would be to make it static in md.c and call smgrsetsegmentsize(),\n> or something like that.\n\nI think one way to look at this is that the segment size is a \nconfiguration property of the md.c smgr. I have been thinking a bit \nabout how smgr-level configuration could look. You can't use a catalog \ntable, but we also can't have smgr plugins get space in pg_control.\n\nAnyway, I'm not asking you to design this now. A global variable via \npg_control seems fine for now. But it wouldn't be an smgr API call, I \nthink.\n\n\n", "msg_date": "Tue, 30 May 2023 07:20:54 -0400", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On 5/28/23 08:48, Thomas Munro wrote:\n> \n> Alright, since I had some time to kill in an airport, here is a\n> starter patch for initdb --rel-segsize. \n\nI've gone through this patch and it looks pretty good to me. 
A few things:\n\n+\t\t\t * rel_setment_size, we will truncate the K+1st segment to 0 length\n\nrel_setment_size -> rel_segment_size\n\n+\t * We used a phony GUC with a custome show function, because we don't\n\ncustome -> custom\n\n+\t\tif (strcmp(endptr, \"kB\") == 0)\n\nWhy kB here instead of KB to match MB, GB, TB below?\n\n+\tint64\t\trelseg_size;\t/* blocks per segment of large relation */\n\nThis will require PG_CONTROL_VERSION to be bumped -- but you are \nprobably waiting until commit time to avoid annoying conflicts, though I \ndon't think it is as likely as with CATALOG_VERSION_NO.\n\n> Some random thoughts:\n> \n> Another potential option name would be --segsize, if we think we're\n> going to use this for temp files too eventually.\n\nI feel like temp file segsize should be separately configurable for the \nsame reason that we are leaving it as 1GB for now.\n\n> Maybe it's not so beautiful to have that global variable\n> rel_segment_size (which replaces REL_SEGSIZE everywhere). \n\nMaybe not, but it is the way these things are done in general, .e.g. \nwal_segment_size, so I don't think it will be too controversial.\n\n> Another\n> idea would be to make it static in md.c and call smgrsetsegmentsize(),\n> or something like that. That could be a nice place to compute the\n> \"shift\" value up front, instead of computing it each time in\n> blockno_to_segno(), but that's probably not worth bothering with (?).\n> BSR/LZCNT/CLZ instructions are pretty fast on modern chips. 
That's\n> about the only place where someone could say that this change makes\n> things worse for people not interested in the new feature, so I was\n> careful to get rid of / and % operations with no-longer-constant RHS.\n\nRight -- not sure we should be troubling ourselves with trying to \noptimize away ops that are very fast, unless they are computed trillions \nof times.\n\n> I had to promote segment size to int64 (global variable, field in\n> control file), because otherwise it couldn't represent\n> --rel-segsize=32TB (it'd be too big by one). Other ideas would be to\n> store the shift value instead of the size, or store the max block\n> number, eg subtract one, or use InvalidBlockNumber to mean \"no limit\"\n> (with more branches to test for it). The only problem I ran into with\n> the larger type was that 'SHOW segment_size' now needs a custom show\n> function because we don't have int64 GUCs.\n\nA custom show function seems like a reasonable solution here.\n\n> A C type confusion problem that I noticed: some code uses BlockNumber\n> and some code uses int for segment numbers. It's not really a\n> reachable problem for practical reasons (you'd need over 2 billion\n> directories and VFDs to reach it), but it's wrong to use int if\n> segment size can be set as low as BLCKSZ (one file per block); you\n> could have more segments than an int can represent. We could go for\n> uint32, BlockNumber or create SegmentNumber (which I think I've\n> proposed before, and lost track of...). We can address that\n> separately (perhaps by finding my old patch...)\n\nI think addressing this separately is fine, though maybe enforcing some \nreasonable minimum in initdb would be a good idea for this patch. 
For my \n2c SEGSIZE == BLOCKSZ just makes very little sense.\n\nLastly, I think the blockno_to_segno(), blockno_within_segment(), and \nblockno_to_seekpos() functions add enough readability that they should \nbe committed regardless of how this patch proceeds.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 12 Jun 2023 10:52:56 +0200", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On Mon, Jun 12, 2023 at 8:53 PM David Steele <david@pgmasters.net> wrote:\n> + if (strcmp(endptr, \"kB\") == 0)\n>\n> Why kB here instead of KB to match MB, GB, TB below?\n\nThose are SI prefixes[1], and we use kB elsewhere too. (\"K\" was used\nfor kelvins, so they went with \"k\" for kilo. Obviously these aren't\nfully SI, because B is supposed to mean bel. A gigabel would be\npretty loud... more than \"sufficient power to create a black hole\"[2],\nhehe.)\n\n> + int64 relseg_size; /* blocks per segment of large relation */\n>\n> This will require PG_CONTROL_VERSION to be bumped -- but you are\n> probably waiting until commit time to avoid annoying conflicts, though I\n> don't think it is as likely as with CATALOG_VERSION_NO.\n\nOh yeah, thanks.\n\n> > Another\n> > idea would be to make it static in md.c and call smgrsetsegmentsize(),\n> > or something like that. That could be a nice place to compute the\n> > \"shift\" value up front, instead of computing it each time in\n> > blockno_to_segno(), but that's probably not worth bothering with (?).\n> > BSR/LZCNT/CLZ instructions are pretty fast on modern chips. 
That's\n> > about the only place where someone could say that this change makes\n> > things worse for people not interested in the new feature, so I was\n> > careful to get rid of / and % operations with no-longer-constant RHS.\n>\n> Right -- not sure we should be troubling ourselves with trying to\n> optimize away ops that are very fast, unless they are computed trillions\n> of times.\n\nThis obviously has some things in common with David Christensen's\nnearby patch for block sizes[3], and we should be shifting and masking\nthere too if that route is taken (as opposed to a specialise-the-code\nroute or somethign else). My binary-log trick is probably a little\ntoo cute though... I should probably just go and set a shift variable.\n\nThanks for looking!\n\n[1] https://en.wikipedia.org/wiki/Metric_prefix\n[2] https://en.wiktionary.org/wiki/gigabel\n[3] https://www.postgresql.org/message-id/flat/CAOxo6XKx7DyDgBkWwPfnGSXQYNLpNrSWtYnK6-1u%2BQHUwRa1Gg%40mail.gmail.com\n\n\n", "msg_date": "Tue, 4 Jul 2023 10:41:14 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "Rebased. I had intended to try to get this into v17, but a couple of\nunresolved problems came up while rebasing over the new incremental\nbackup stuff. You snooze, you lose. 
Hopefully we can sort these out\nin time for the next commitfest:\n\n* should pg_combinebasebackup read the control file to fetch the segment size?\n* hunt for other segment-size related problems that may be lurking in\nnew incremental backup stuff\n* basebackup_incremental.c wants to use memory in proportion to\nsegment size, which looks like a problem, and I wrote about that in a\nnew thread[1]\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2B2hZ0sBztPW4mkLfng0qfkNtAHFUfxOMLizJ0BPmi5%2Bg%40mail.gmail.com", "msg_date": "Thu, 7 Mar 2024 10:54:46 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Large files for relations" }, { "msg_contents": "On 06.03.24 22:54, Thomas Munro wrote:\n> Rebased. I had intended to try to get this into v17, but a couple of\n> unresolved problems came up while rebasing over the new incremental\n> backup stuff. You snooze, you lose. Hopefully we can sort these out\n> in time for the next commitfest:\n> \n> * should pg_combinebasebackup read the control file to fetch the segment size?\n> * hunt for other segment-size related problems that may be lurking in\n> new incremental backup stuff\n> * basebackup_incremental.c wants to use memory in proportion to\n> segment size, which looks like a problem, and I wrote about that in a\n> new thread[1]\n\nOverall, I like this idea, and the patch seems to have many bases covered.\n\nThe patch will need a rebase. I was able to test it on \nmaster@{2024-03-13}, but after that there are conflicts.\n\nIn .cirrus.tasks.yml, one of the test tasks uses \n--with-segsize-blocks=6, but you are removing that option. You could \nreplace that with something like\n\nPG_TEST_INITDB_EXTRA_OPTS='--rel-segsize=48kB'\n\nBut that won't work exactly because\n\ninitdb: error: argument of --rel-segsize must be a power of two\n\nI suppose that's ok as a change, since it makes the arithmetic more \nefficient. 
But maybe it should be called out explicitly in the commit \nmessage.\n\nIf I run it with 64kB, the test pgbench/001_pgbench_with_server fails \nconsistently, so it seems there is still a gap somewhere.\n\nA minor point, the initdb error message\n\ninitdb: error: argument of --rel-segsize must be a multiple of BLCKSZ\n\nwould be friendlier if actually showed the value of the block size \ninstead of just the symbol. Similarly for the nearby error message \nabout the off_t size.\n\nIn the control file, all the other fields use unsigned types. Should \nrelseg_size be uint64?\n\nPG_CONTROL_VERSION needs to be changed.\n\n\n\n", "msg_date": "Mon, 13 May 2024 22:51:11 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Large files for relations" } ]
[ { "msg_contents": "This is in response to Alexander's observation at [1], but I'm\nstarting a fresh thread to keep this patch separate from the plperl\nfixes in the cfbot's eyes.\n\nAlexander Lakhin <exclusion@gmail.com> writes:\n> I continue watching the array handling bugs dancing Sirtaki too. Now it's\n> another asymmetry:\n> select '{{1},{{2}}}'::int[];\n>  {{{1}},{{2}}}\n> but:\n> select '{{{1}},{2}}'::int[];\n>  {}\n\nBleah. Both of those should be rejected, for sure, but it's the same\nsituation as in the PLs: we weren't doing anything to enforce that all\nthe scalar elements appear at the same nesting depth.\n\nI spent some time examining array_in(), and was pretty disheartened\nby what a mess it is. It looks like back in the dim mists of the\nBerkeley era, there was an intentional attempt to allow\nnon-rectangular array input, with the missing elements automatically\nfilled out as NULLs. Since that was undocumented, we concluded it was\na bug and plastered on some code to check for rectangularity of the\ninput. I don't quibble with enforcing rectangularity, but the\nunderlying logic should have been simplified while we were at it.\nThe element-counting logic was basically magic (why is it okay to\nincrement temp[ndim - 1] when the current nest_level might be\ndifferent from that?) and the extra layers of checks didn't make it\nany more intelligible. Plus, ReadArrayStr was expending far more\ncycles than it needs to given the assumption of rectangularity.\n\nSo, here's a rewrite.\n\nAlthough I view this as a bug fix, AFAICT the only effects are to\naccept input that should be rejected. So again I don't advocate\nback-patching. 
But should we sneak it into v16, or wait for v17?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/9cd163da-d096-7e9e-28f6-f3620962a660%40gmail.com", "msg_date": "Tue, 02 May 2023 11:41:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Cleaning up array_in()" }, { "msg_contents": "On Tue, May 02, 2023 at 11:41:27AM -0400, Tom Lane wrote:\n> It looks like back in the dim mists of the\n> Berkeley era, there was an intentional attempt to allow\n> non-rectangular array input, with the missing elements automatically\n> filled out as NULLs. Since that was undocumented, we concluded it was\n> a bug and plastered on some code to check for rectangularity of the\n> input.\n\nInteresting.\n\n> Although I view this as a bug fix, AFAICT the only effects are to\n> accept input that should be rejected. So again I don't advocate\n> back-patching. But should we sneak it into v16, or wait for v17?\n\nI think it'd be okay to sneak it into v16, given it is technically a bug\nfix.\n\n> (This leaves ArrayGetOffset0() unused, but I'm unsure whether to\n> remove that.)\n\nWhy's that? Do you think it is likely to be used again in the future?\nOtherwise, 0001 LGTM.\n\nI haven't had a chance to look at 0002 closely yet.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 May 2023 16:40:54 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "02.05.2023 18:41, Tom Lane wrote:\n> So, here's a rewrite.\n>\n> Although I view this as a bug fix, AFAICT the only effects are to\n> accept input that should be rejected. So again I don't advocate\n> back-patching. 
But should we sneak it into v16, or wait for v17?\n\nI've tested the patch from a user perspective and found no interesting cases\nthat were valid before, but not accepted with the patch (or vice versa):\nThe only thing that confused me, is the error message (it's not new, too):\nselect '{{{{{{{{{{1}}}}}}}}}}'::int[];\nor even:\nselect '{{{{{{{{{{'::int[];\nERROR:  number of array dimensions (7) exceeds the maximum allowed (6)\n\nMaybe it could be reworded like that?:\ntoo many opening braces defining dimensions (maximum dimensions allowed: 6)\n\nBeside that, I would like to note the following error text changes\n(all of these are feasible, I think):\nselect '{{1},{{'::int[];\nBefore:\nERROR:  malformed array literal: \"{{1},{{\"\nLINE 1: select '{{1},{{'::int[];\n                ^\nDETAIL:  Unexpected end of input.\n\nAfter:\nERROR:  malformed array literal: \"{{1},{{\"\nLINE 1: select '{{1},{{'::int[];\n                ^\nDETAIL:  Multidimensional arrays must have sub-arrays with matching dimensions.\n---\nselect '{{1},{{{{{{'::int[];\nBefore:\nERROR:  number of array dimensions (7) exceeds the maximum allowed (6)\n\nAfter:\nERROR:  malformed array literal: \"{{1},{{{{{{\"\nLINE 1: select '{{1},{{{{{{'::int[];\n                ^\nDETAIL:  Multidimensional arrays must have sub-arrays with matching dimensions.\n---\nselect '{{1},{}}}'::int[];\nBefore:\nERROR:  malformed array literal: \"{{1},{}}}\"\nLINE 1: select '{{1},{}}}'::int[];\n                ^\nDETAIL:  Unexpected \"}\" character.\n\nAfter:\nERROR:  malformed array literal: \"{{1},{}}}\"\nLINE 1: select '{{1},{}}}'::int[];\n                ^\nDETAIL:  Multidimensional arrays must have sub-arrays with matching dimensions.\n---\nselect '{{}}}'::int[];\nBefore:\nERROR:  malformed array literal: \"{{}}}\"\nLINE 1: select '{{}}}'::int[];\n                ^\nDETAIL:  Unexpected \"}\" character.\n\nAfter:\nERROR:  malformed array literal: \"{{}}}\"\nLINE 1: select '{{}}}'::int[];\n                ^\nDETAIL:  Junk 
after closing right brace.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 9 May 2023 06:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> The only thing that confused me, is the error message (it's not new, too):\n> select '{{{{{{{{{{1}}}}}}}}}}'::int[];\n> or even:\n> select '{{{{{{{{{{'::int[];\n> ERROR:  number of array dimensions (7) exceeds the maximum allowed (6)\n\nYeah, I didn't touch that, but it's pretty bogus because the first\nnumber will always be \"7\" even if you wrote more than 7 left braces,\nsince the code errors out immediately upon finding that it's seen\ntoo many braces.\n\nThe equivalent message in the PLs just says \"number of array dimensions\nexceeds the maximum allowed (6)\". I'm inclined to do likewise in\narray_in, but didn't touch it here.\n\n> Beside that, I would like to note the following error text changes\n> (all of these are feasible, I think):\n\nI'll look into whether we can improve those, unless you had a patch\nin mind already?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 May 2023 23:06:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "09.05.2023 06:06, Tom Lane wrote:\n> Alexander Lakhin <exclusion@gmail.com> writes:\n>> The only thing that confused me, is the error message (it's not new, too):\n>> select '{{{{{{{{{{1}}}}}}}}}}'::int[];\n>> or even:\n>> select '{{{{{{{{{{'::int[];\n>> ERROR:  number of array dimensions (7) exceeds the maximum allowed (6)\n> Yeah, I didn't touch that, but it's pretty bogus because the first\n> number will always be \"7\" even if you wrote more than 7 left braces,\n> since the code errors out immediately upon finding that it's seen\n> too many braces.\n>\n> The equivalent message in the PLs just says \"number of array dimensions\n> 
exceeds the maximum allowed (6)\". I'm inclined to do likewise in\n> array_in, but didn't touch it here.\n\nI think that, strictly speaking, we have no array dimensions in the string\n'{{{{{{{{{{'; there are only characters (braces) during the string parsing.\nWhile in the PLs we definitely deal with real arrays, which have dimensions.\n\n>> Beside that, I would like to note the following error text changes\n>> (all of these are feasible, I think):\n> I'll look into whether we can improve those, unless you had a patch\n> in mind already?\n\nThose messages looked more or less correct to me, I just wanted to note how they are\nchanging (and haven't highlighted messages, that are not), but if you see here room\nfor improvement, please look into it (I have no good formulations yet).\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 9 May 2023 14:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "I took a look at 0002 because I attempted a similar but more surgical\nfix in [0].\n\nI spotted a few opportunities for further reducing state tracked by\n`ArrayCount`. You may not find all of these suggestions to be\nworthwhile.\n\n1) `in_quotes` appears to be wholly redundant with `parse_state ==\nARRAY_QUOTED_ELEM_STARTED`.\n\n2) The `empty_array` special case does not seem to be important to\nArrayCount's callers, which don't even special case `ndims == 0` but\nrather `ArrayGetNItemsSafe(..) == 0`. Perhaps this is a philosophical\nquestion as to whether `ArrayCount('{{}, {}}')` should return\n(ndims=2, dims=[2, 0]) or (ndims=0). Obviously someone needs to do\nthat normalization, but `ArrayCount` could leave that normalization to\n`ReadArrayStr`.\n\n3) `eoArray` could be replaced with a new `ArrayParseState` of\n`ARRAY_END`. 
Just a matter of taste, but \"end of array\" feels like a\nparser state to me.\n\nI also have a sense that `ndims_frozen` made the distinction between\n`ARRAY_ELEM_DELIMITED` and `ARRAY_LEVEL_DELIMITED` unnecessary, and\nthe two states could be merged into a single `ARRAY_DELIMITED` state,\nbut I've not pulled on this thread hard enough to say so confidently.\n\nThanks for doing the serious overhaul. As you say, the element\ncounting logic is much easier to follow now. I'm much more confident\nthat your patch is correct than mine.\n\nCheers,\nNikhil\n\n[0]: https://www.postgresql.org/message-id/CAPWqQZRHsFuvWJj%3DczXuKEB03LF4ctPpDE1k3CoexweEFicBKQ%40mail.gmail.com\n\n\nOn Tue, May 9, 2023 at 7:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n> 09.05.2023 06:06, Tom Lane wrote:\n> > Alexander Lakhin <exclusion@gmail.com> writes:\n> >> The only thing that confused me, is the error message (it's not new, too):\n> >> select '{{{{{{{{{{1}}}}}}}}}}'::int[];\n> >> or even:\n> >> select '{{{{{{{{{{'::int[];\n> >> ERROR: number of array dimensions (7) exceeds the maximum allowed (6)\n> > Yeah, I didn't touch that, but it's pretty bogus because the first\n> > number will always be \"7\" even if you wrote more than 7 left braces,\n> > since the code errors out immediately upon finding that it's seen\n> > too many braces.\n> >\n> > The equivalent message in the PLs just says \"number of array dimensions\n> > exceeds the maximum allowed (6)\". 
I'm inclined to do likewise in\n> > array_in, but didn't touch it here.\n>\n> I think that, strictly speaking, we have no array dimensions in the string\n> '{{{{{{{{{{'; there are only characters (braces) during the string parsing.\n> While in the PLs we definitely deal with real arrays, which have dimensions.\n>\n> >> Beside that, I would like to note the following error text changes\n> >> (all of these are feasible, I think):\n> > I'll look into whether we can improve those, unless you had a patch\n> > in mind already?\n>\n> Those messages looked more or less correct to me, I just wanted to note how they are\n> changing (and haven't highlighted messages, that are not), but if you see here room\n> for improvement, please look into it (I have no good formulations yet).\n>\n> Best regards,\n> Alexander\n>\n>\n\n\n", "msg_date": "Sun, 4 Jun 2023 21:48:38 -0400", "msg_from": "Nikhil Benesch <nikhil.benesch@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "Nikhil Benesch <nikhil.benesch@gmail.com> writes:\n> I took a look at 0002 because I attempted a similar but more surgical\n> fix in [0].\n> I spotted a few opportunities for further reducing state tracked by\n> `ArrayCount`.\n\nWow, thanks for looking! I've not run these suggestions to ground\n(and won't have time for a few days), but they sound very good.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 04 Jun 2023 22:38:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "On Mon, Jun 5, 2023 at 9:48 AM Nikhil Benesch <nikhil.benesch@gmail.com> wrote:\n>\n> I took a look at 0002 because I attempted a similar but more surgical\n> fix in [0].\n>\n> I spotted a few opportunities for further reducing state tracked by\n> `ArrayCount`. 
You may not find all of these suggestions to be\n> worthwhile.\n\nI pull ArrayCount into a separate C function, regress test (manually)\nbased on patch regress test.\n\n> 1) `in_quotes` appears to be wholly redundant with `parse_state ==\n> ARRAY_QUOTED_ELEM_STARTED`.\n\nremoving it works as expected.\n\n> 3) `eoArray` could be replaced with a new `ArrayParseState` of\n> `ARRAY_END`. Just a matter of taste, but \"end of array\" feels like a\n> parser state to me.\n\nworks. (reduce one variable.)\n\n> I also have a sense that `ndims_frozen` made the distinction between\n> `ARRAY_ELEM_DELIMITED` and `ARRAY_LEVEL_DELIMITED` unnecessary, and\n> the two states could be merged into a single `ARRAY_DELIMITED` state,\n> but I've not pulled on this thread hard enough to say so confidently.\n\nmerging these states into one work as expected.\n\n\n", "msg_date": "Mon, 3 Jul 2023 12:16:22 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "based on Nikhil Benesch idea.\nThe attached diff is based on\nv1-0002-Rewrite-ArrayCount-to-make-dimensionality-checks.patch.\n\ndiff compare v1-0002:\nselect '{{1,{2}},{2,3}}'::text[];\n ERROR: malformed array literal: \"{{1,{2}},{2,3}}\"\n LINE 1: select '{{1,{2}},{2,3}}'::text[];\n ^\n-DETAIL: Unexpected \"{\" character.\n+DETAIL: Multidimensional arrays must have sub-arrays with matching dimensions.\n----------------------------------------------------\n select E'{{1,2},\\\\{2,3}}'::text[];\n ERROR: malformed array literal: \"{{1,2},\\{2,3}}\"\n LINE 1: select E'{{1,2},\\\\{2,3}}'::text[];\n ^\n-DETAIL: Unexpected \"\\\" character.\n+DETAIL: Multidimensional arrays must have sub-arrays with matching dimensions.\n\n-------\nnew errors details kind of make sense.", "msg_date": "Mon, 3 Jul 2023 19:52:23 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { 
"msg_contents": "Nikhil Benesch <nikhil.benesch@gmail.com> writes:\n> I spotted a few opportunities for further reducing state tracked by\n> `ArrayCount`. You may not find all of these suggestions to be\n> worthwhile.\n\nI found some time today to look at these points.\n\n> 1) `in_quotes` appears to be wholly redundant with `parse_state ==\n> ARRAY_QUOTED_ELEM_STARTED`.\n\nI agree that it is redundant, but I'm disinclined to remove it because\nthe in_quotes logic matches that in ReadArrayStr. I think it's better\nto keep those two functions in sync. The parse_state represents an\nindependent set of checks that need not be repeated by ReadArrayStr,\nbut both functions have to track quoting. The same for eoArray.\n\n> 2) The `empty_array` special case does not seem to be important to\n> ArrayCount's callers, which don't even special case `ndims == 0` but\n> rather `ArrayGetNItemsSafe(..) == 0`. Perhaps this is a philosophical\n> question as to whether `ArrayCount('{{}, {}}')` should return\n> (ndims=2, dims=[2, 0]) or (ndims=0). Obviously someone needs to do\n> that normalization, but `ArrayCount` could leave that normalization to\n> `ReadArrayStr`.\n\nThis idea I do like. While looking at the callers, I also noticed\nthat it's impossible currently to write an empty array with explicit\nspecification of bounds. It seems to me that you ought to be able\nto write, say,\n\nSELECT '[1:0]={}'::int[];\n\nbut up to now you got \"upper bound cannot be less than lower bound\";\nand if you somehow got past that, you'd get \"Specified array\ndimensions do not match array contents.\" because of ArrayCount's\npremature optimization of \"one-dimensional array with length zero\"\nto \"zero-dimensional array\". 
We can fix that by doing what you said\nand adjusting the initial bounds restriction to be \"upper bound cannot\nbe less than lower bound minus one\".\n\n> I also have a sense that `ndims_frozen` made the distinction between\n> `ARRAY_ELEM_DELIMITED` and `ARRAY_LEVEL_DELIMITED` unnecessary, and\n> the two states could be merged into a single `ARRAY_DELIMITED` state,\n> but I've not pulled on this thread hard enough to say so confidently.\n\nI looked at jian he's implementation of that and was not impressed:\nI do not think the logic gets any clearer, and it seems to me that\nthis makes a substantial dent in ArrayCount's ability to detect syntax\nerrors. The fact that one of the test case error messages got better\nseems pretty accidental to me. We can get the same result in a more\npurposeful way by giving a different error message for\nARRAY_ELEM_DELIMITED.\n\nSo I end up with the attached. I went ahead and dropped\nArrayGetOffset0() as part of 0001, and I split 0002 into two patches\nwhere the new 0002 avoids re-indenting any existing code in order\nto ease review, and then 0003 is just a mechanical application\nof pgindent.\n\nI still didn't do anything about \"number of array dimensions (7)\nexceeds the maximum allowed (6)\". There are quite a few instances\nof that wording, not only array_in's, and I'm not sure whether to\nchange the rest. In any case that looks like something that\ncould be addressed separately. The other error message wording\nchanges here seem to me to be okay.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 04 Jul 2023 14:33:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "I wrote:\n> So I end up with the attached. 
I went ahead and dropped\n> ArrayGetOffset0() as part of 0001, and I split 0002 into two patches\n> where the new 0002 avoids re-indenting any existing code in order\n> to ease review, and then 0003 is just a mechanical application\n> of pgindent.\n\nThat got sideswiped by ae6d06f09, so here's a trivial rebase to\npacify the cfbot.\n\n\t\t\tregards, tom lane\n\n#text/x-diff; name=\"v3-0001-Simplify-and-speed-up-ReadArrayStr.patch\" [v3-0001-Simplify-and-speed-up-ReadArrayStr.patch] /home/tgl/pgsql/v3-0001-Simplify-and-speed-up-ReadArrayStr.patch\n#text/x-diff; name=\"v3-0002-Rewrite-ArrayCount-to-make-dimensionality-checks-.patch\" [v3-0002-Rewrite-ArrayCount-to-make-dimensionality-checks-.patch] /home/tgl/pgsql/v3-0002-Rewrite-ArrayCount-to-make-dimensionality-checks-.patch\n#text/x-diff; name=\"v3-0003-Re-indent-ArrayCount.patch\" [v3-0003-Re-indent-ArrayCount.patch] /home/tgl/pgsql/v3-0003-Re-indent-ArrayCount.patch\n\n\n", "msg_date": "Sat, 08 Jul 2023 12:08:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "I wrote:\n> That got sideswiped by ae6d06f09, so here's a trivial rebase to\n> pacify the cfbot.\n\nSigh, this time with patch.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 08 Jul 2023 15:34:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "On 08/07/2023 19:08, Tom Lane wrote:\n> I wrote:\n>> So I end up with the attached. 
I went ahead and dropped\n>> ArrayGetOffset0() as part of 0001, and I split 0002 into two patches\n>> where the new 0002 avoids re-indenting any existing code in order\n>> to ease review, and then 0003 is just a mechanical application\n>> of pgindent.\n> \n> That got sideswiped by ae6d06f09, so here's a trivial rebase to\n> pacify the cfbot.\n> \n> #text/x-diff; name=\"v3-0001-Simplify-and-speed-up-ReadArrayStr.patch\" [v3-0001-Simplify-and-speed-up-ReadArrayStr.patch] /home/tgl/pgsql/v3-0001-Simplify-and-speed-up-ReadArrayStr.patch\n> #text/x-diff; name=\"v3-0002-Rewrite-ArrayCount-to-make-dimensionality-checks-.patch\" [v3-0002-Rewrite-ArrayCount-to-make-dimensionality-checks-.patch] /home/tgl/pgsql/v3-0002-Rewrite-ArrayCount-to-make-dimensionality-checks-.patch\n> #text/x-diff; name=\"v3-0003-Re-indent-ArrayCount.patch\" [v3-0003-Re-indent-ArrayCount.patch] /home/tgl/pgsql/v3-0003-Re-indent-ArrayCount.patch\n\nSomething's wrong with your attachments.\n\nHmm, I wonder if ae6d06f09 had a negative performance impact. In an \nunquoted array element, scanner_isspace() function is called for every \ncharacter, so it might be worth inlining.\n\nOn the patches: They are a clear improvement, thanks for that. That \nsaid, I still find the logic very hard to follow, and there are some \nobvious performance optimizations that could be made.\n\nArrayCount() interprets low-level quoting and escaping, and tracks the \ndimensions at the same time. The state machine is pretty complicated. \nAnd when you've finally finished reading and grokking that function, you \nsee that ReadArrayStr() repeats most of the same logic. Ugh.\n\nI spent some time today refactoring it for readability and speed. I \nintroduced a separate helper function to tokenize the input. It deals \nwith whitespace, escapes, and backslashes. Then I merged ArrayCount() \nand ReadArrayStr() into one function that parses the elements and \ndetermines the dimensions in one pass. 
That speeds up parsing large \narrays. With the tokenizer function, the logic in ReadArrayStr() is \nstill quite readable, even though it's now checking the dimensions at \nthe same time.\n\nI also noticed that we used atoi() to parse the integers in the \ndimensions, which doesn't do much error checking. Some funny cases were \naccepted because of that, for example:\n\npostgres=# select '[1+-+-+-+-+-+:2]={foo,bar}'::text[];\n text\n-----------\n {foo,bar}\n(1 row)\n\nI tightened that up in the passing.\n\nAttached are your patches, rebased to fix the conflicts with ae6d06f09 \nlike you intended. On top of that, my patches. My patches need more \ntesting, benchmarking, and review, so if we want to sneak something into \nv16, better go with just your patches. If we're tightening up the \naccepted inputs, maybe fix that atoi() sloppiness, though.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Sat, 8 Jul 2023 22:38:31 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 08/07/2023 19:08, Tom Lane wrote:\n>> That got sideswiped by ae6d06f09, so here's a trivial rebase to\n>> pacify the cfbot.\n\n> Something's wrong with your attachments.\n\nYeah, I forgot to run mhbuild :-(\n\n> I spent some time today refactoring it for readability and speed. I \n> introduced a separate helper function to tokenize the input. It deals \n> with whitespace, escapes, and backslashes. Then I merged ArrayCount() \n> and ReadArrayStr() into one function that parses the elements and \n> determines the dimensions in one pass. That speeds up parsing large \n> arrays. 
With the tokenizer function, the logic in ReadArrayStr() is \n> still quite readable, even though it's now checking the dimensions at \n> the same time.\n\nOh, thanks for taking a look!\n\n> I also noticed that we used atoi() to parse the integers in the \n> dimensions, which doesn't do much error checking.\n\nYup, I'd noticed that too but not gotten around to doing anything\nabout it. I agree with nailing it down better as long as we're\ntightening things in this area.\n\n> Attached are your patches, rebased to fix the conflicts with ae6d06f09 \n> like you intended. On top of that, my patches. My patches need more \n> testing, benchmarking, and review, so if we want to sneak something into \n> v16, better go with just your patches.\n\nAt this point I'm only proposing this for v17, so additional cleanup\nis welcome.\n\nBTW, what's your opinion of allowing \"[1:0]={}\" ? Although that was\nmy proposal to begin with, I'm having second thoughts about it now.\nThe main reason is that the input transformation would be lossy,\neg \"[1:0]={}\" and \"[101:100]={}\" would give the same results, which\nseems a little ugly. Given the lack of field complaints, maybe we\nshould leave that alone.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Jul 2023 15:49:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "On 08/07/2023 22:49, Tom Lane wrote:\n> BTW, what's your opinion of allowing \"[1:0]={}\" ? 
Although that was\n> my proposal to begin with, I'm having second thoughts about it now.\n> The main reason is that the input transformation would be lossy,\n> eg \"[1:0]={}\" and \"[101:100]={}\" would give the same results, which\n> seems a little ugly.\n\nHmm, yeah, that would feel wrong if you did something like this:\n\nselect ('[2:1]={}'::text[]) || '{x}'::text[];\n\nand expected it to return '[2:2]={x}'.\n\nI guess we could allow \"[1:0]={}\" as a special case, but not \n\"[101:100]={}\", but that would be weird too.\n\n> Given the lack of field complaints, maybe we should leave that\n> alone.\n+1 to leave it alone. It's a little weird either way, so better to stay \nput. We can revisit it later if we want to, but I wouldn't want to go \nback and forth on it.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Sat, 8 Jul 2023 23:03:47 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "On Sun, Jul 9, 2023 at 3:38 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 08/07/2023 19:08, Tom Lane wrote:\n> > I wrote:\n> >> So I end up with the attached. 
I went ahead and dropped\n> >> ArrayGetOffset0() as part of 0001, and I split 0002 into two patches\n> >> where the new 0002 avoids re-indenting any existing code in order\n> >> to ease review, and then 0003 is just a mechanical application\n> >> of pgindent.\n> >\n> > That got sideswiped by ae6d06f09, so here's a trivial rebase to\n> > pacify the cfbot.\n> >\n> > #text/x-diff; name=\"v3-0001-Simplify-and-speed-up-ReadArrayStr.patch\" [v3-0001-Simplify-and-speed-up-ReadArrayStr.patch] /home/tgl/pgsql/v3-0001-Simplify-and-speed-up-ReadArrayStr.patch\n> > #text/x-diff; name=\"v3-0002-Rewrite-ArrayCount-to-make-dimensionality-checks-.patch\" [v3-0002-Rewrite-ArrayCount-to-make-dimensionality-checks-.patch] /home/tgl/pgsql/v3-0002-Rewrite-ArrayCount-to-make-dimensionality-checks-.patch\n> > #text/x-diff; name=\"v3-0003-Re-indent-ArrayCount.patch\" [v3-0003-Re-indent-ArrayCount.patch] /home/tgl/pgsql/v3-0003-Re-indent-ArrayCount.patch\n>\n> Something's wrong with your attachments.\n>\n> Hmm, I wonder if ae6d06f09 had a negative performance impact. In an\n> unquoted array element, scanner_isspace() function is called for every\n> character, so it might be worth inlining.\n>\n> On the patches: They are a clear improvement, thanks for that. That\n> said, I still find the logic very hard to follow, and there are some\n> obvious performance optimizations that could be made.\n>\n> ArrayCount() interprets low-level quoting and escaping, and tracks the\n> dimensions at the same time. The state machine is pretty complicated.\n> And when you've finally finished reading and grokking that function, you\n> see that ReadArrayStr() repeats most of the same logic. Ugh.\n>\n> I spent some time today refactoring it for readability and speed. I\n> introduced a separate helper function to tokenize the input. It deals\n> with whitespace, escapes, and backslashes. 
Then I merged ArrayCount()\n> and ReadArrayStr() into one function that parses the elements and\n> determines the dimensions in one pass. That speeds up parsing large\n> arrays. With the tokenizer function, the logic in ReadArrayStr() is\n> still quite readable, even though it's now checking the dimensions at\n> the same time.\n>\n> I also noticed that we used atoi() to parse the integers in the\n> dimensions, which doesn't do much error checking. Some funny cases were\n> accepted because of that, for example:\n>\n> postgres=# select '[1+-+-+-+-+-+:2]={foo,bar}'::text[];\n> text\n> -----------\n> {foo,bar}\n> (1 row)\n>\n> I tightened that up in the passing.\n>\n> Attached are your patches, rebased to fix the conflicts with ae6d06f09\n> like you intended. On top of that, my patches. My patches need more\n> testing, benchmarking, and review, so if we want to sneak something into\n> v16, better go with just your patches. If we're tightening up the\n> accepted inputs, maybe fix that atoi() sloppiness, though.\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n\nyour idea is so clear!!!\nall the Namings are way more descriptive. ArrayToken, personally\nsomething with \"state\", \"type\" will be more clear.\n\n> /*\n> * FIXME: Is this still required? 
I believe all the checks it performs are\n> * redundant with other checks in ReadArrayDimension() and ReadArrayStr()\n> */\n> nitems_according_to_dims = ArrayGetNItemsSafe(ndim, dim, escontext);\n> if (nitems_according_to_dims < 0)\n> PG_RETURN_NULL();\n> if (nitems != nitems_according_to_dims)\n> elog(ERROR, \"mismatch nitems, %d vs %d\", nitems, nitems_according_to_dims);\n> if (!ArrayCheckBoundsSafe(ndim, dim, lBound, escontext))\n> PG_RETURN_NULL();\n\n--first time run\nselect '[0:3][0:2]={{1,2,3}, {4,5,6}, {7,8,9},{1,2,3}}'::int[];\nINFO: 253 after ReadArrayDimensions dim: 4 3 71803430 21998\n103381120 21998 ndim: 2\nINFO: 770 after ReadArrayStr: dim: 4 3 71803430 21998 103381120\n21998 nitems:12, ndim:2\n\n--second time run.\nINFO: 253 after ReadArrayDimensions dim: 4 3 0 0 0 0 ndim: 2\nINFO: 770 after ReadArrayStr: dim: 4 3 0 0 0 0 nitems:12, ndim:2\n\nselect '{{1,2,3}, {4,5,6}, {7,8,9},{1,2,3}}'::int[]; --every time run,\nthe result is the same.\nINFO: 253 after ReadArrayDimensions dim: 0 0 0 0 0 0 ndim: 0\nINFO: 770 after ReadArrayStr: dim: 4 3 -1 -1 -1 -1 nitems:12, ndim:2\n\nI think the reason is that the dim int array didn't explicitly assign\nvalue when initializing it.\n\n> /* Now it's safe to compute ub + 1 */\n> if (ub + 1 < lBound[ndim])\n> ereturn(escontext, false,\n> (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),\n> errmsg(\"upper bound cannot be less than lower bound minus one\")));\n\nThis part seems safe against cases like select\n'[-2147483649:-2147483648]={1,2}'::int[];\nbut I am not sure. 
If so, then ArrayCheckBoundsSafe is unnecessary.\n\nAnother corner case successed: select '{1,}'::int[]; should fail.\n\n\n", "msg_date": "Tue, 11 Jul 2023 11:34:37 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "hi.\nbased on Heikki v3.\nI made some changes:\narray_in: dim[6] all initialize with -1, lBound[6] all initialize with 1.\nif ReadArrayDimensions called, then corresponding dimension lBound\nwill replace the initialized default 1 value.\nReadArrayStr, since array_in main function initialized dim array,\ndimensions_specified true or false, I don't need to initialize again,\nso I deleted that part.\n\nto solve corner cases like '{{1,},{1},}'::text[]. in ReadArrayStr\nmain switch function, like other ArrayToken, first evaluate\nexpect_delim then assign expect_delim.\nIn ATOK_LEVEL_END. if non-empty array, closing bracket either precede\nwith an element or another closing element. In both cases, the\nprevious expect_delim should be true.\n\nin\n * FIXME: Is this still required? I believe all the checks it\nperforms are\n * redundant with other checks in ReadArrayDimension() and\nReadArrayStr()\n */\nI deleted\n- nitems_according_to_dims = ArrayGetNItemsSafe(ndim, dim, escontext);\n- if (nitems_according_to_dims < 0)\n- PG_RETURN_NULL();\n- if (nitems != nitems_according_to_dims)\n- elog(ERROR, \"mismatch nitems, %d vs %d\", nitems,\nnitems_according_to_dims);\nbut I am not sure if the following is necessary.\n if (!ArrayCheckBoundsSafe(ndim, dim, lBound, escontext))\n PG_RETURN_NULL();\n\nI added some corner case tests like select '{{1,},{1},}'::text[];\n\nsome changes broken:\nselect '{{1},{}}'::text[];\n-DETAIL: Multidimensional arrays must have sub-arrays with matching dimensions.\n+DETAIL: Unexpected \",\" character.\nI added some error checks in ATOK_LEVEL_END. 
The first expect_delim\npart check will first generate an error, the dimension error part will\nnot be reached.", "msg_date": "Mon, 7 Aug 2023 13:35:03 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "hi.\nattached v4.\nv4, 0001 to 0005 is the same as v3 in\nhttps://www.postgresql.org/message-id/5859ce4e-2be4-92b0-c85c-e1e24eab57c6%40iki.fi\n\nv4-0006 doing some modifications to address the corner case mentioned\nin the previous thread (like select '{{1,},{1},}'::text[]).\nalso fixed all these FIXME, Heikki mentioned in the code.\n\nv4-0007 refactor ReadDimensionInt. to make the array dimension bound\nvariables within the INT_MIN and INT_MAX. so it will make select\n'[21474836488:21474836489]={1,2}'::int[]; fail.", "msg_date": "Mon, 4 Sep 2023 08:00:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "On Mon, Sep 4, 2023 at 8:00 AM jian he <jian.universality@gmail.com> wrote:\n>\n> hi.\n> attached v4.\n> v4, 0001 to 0005 is the same as v3 in\n> https://www.postgresql.org/message-id/5859ce4e-2be4-92b0-c85c-e1e24eab57c6%40iki.fi\n>\n> v4-0006 doing some modifications to address the corner case mentioned\n> in the previous thread (like select '{{1,},{1},}'::text[]).\n> also fixed all these FIXME, Heikki mentioned in the code.\n>\n> v4-0007 refactor ReadDimensionInt. to make the array dimension bound\n> variables within the INT_MIN and INT_MAX. 
so it will make select\n> '[21474836488:21474836489]={1,2}'::int[]; fail.\n\n\nattached, same as v4, but delete unused variable {nitems_according_to_dims}.", "msg_date": "Tue, 5 Sep 2023 07:53:13 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "Hello Jian,\n\n05.09.2023 02:53, jian he wrote:\n> On Mon, Sep 4, 2023 at 8:00 AM jian he <jian.universality@gmail.com> wrote:\n>> hi.\n>> attached v4.\n>> v4, 0001 to 0005 is the same as v3 in\n>> https://www.postgresql.org/message-id/5859ce4e-2be4-92b0-c85c-e1e24eab57c6%40iki.fi\n>>\n>> v4-0006 doing some modifications to address the corner case mentioned\n>> in the previous thread (like select '{{1,},{1},}'::text[]).\n>> also fixed all these FIXME, Heikki mentioned in the code.\n>>\n>> v4-0007 refactor ReadDimensionInt. to make the array dimension bound\n>> variables within the INT_MIN and INT_MAX. so it will make select\n>> '[21474836488:21474836489]={1,2}'::int[]; fail.\n>\n> attached, same as v4, but delete unused variable {nitems_according_to_dims}.\n\nPlease look at the differences, I've observed with the latest patches\napplied, old vs new behavior:\n\nCase 1:\nSELECT '{1,'::integer[];\nERROR:  malformed array literal: \"{1,\"\nLINE 1: SELECT '{1,'::integer[];\n                ^\nDETAIL:  Unexpected end of input.\n\nvs\n\nERROR:  malformed array literal: \"{1,\"\nLINE 1: SELECT '{1,'::integer[];\n                ^\n\n(no DETAIL)\n\nCase 2:\nSELECT '{{},}'::text[];\nERROR:  malformed array literal: \"{{},}\"\nLINE 1: SELECT '{{},}'::text[];\n                ^\nDETAIL:  Unexpected \"}\" character\n\nvs\n  text\n------\n  {}\n(1 row)\n\nCase 3:\nselect '{\\{}'::text[];\n  text\n-------\n  {\"{\"}\n(1 row)\n\nvs\n  text\n------\n  {\"\"}\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 10 Sep 2023 13:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning 
up array_in()" }, { "msg_contents": "On Sun, Sep 10, 2023 at 6:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> Case 1:\n> SELECT '{1,'::integer[];\n> ERROR: malformed array literal: \"{1,\"\n> LINE 1: SELECT '{1,'::integer[];\n> ^\n> DETAIL: Unexpected end of input.\n>\n> vs\n>\n> ERROR: malformed array literal: \"{1,\"\n> LINE 1: SELECT '{1,'::integer[];\n> ^\n>\n> (no DETAIL)\n>\n> Case 2:\n> SELECT '{{},}'::text[];\n> ERROR: malformed array literal: \"{{},}\"\n> LINE 1: SELECT '{{},}'::text[];\n> ^\n> DETAIL: Unexpected \"}\" character\n>\n> vs\n> text\n> ------\n> {}\n> (1 row)\n>\n> Case 3:\n> select '{\\{}'::text[];\n> text\n> -------\n> {\"{\"}\n> (1 row)\n>\n> vs\n> text\n> ------\n> {\"\"}\n>\n> Best regards,\n> Alexander\n\nhi.\nThanks for reviewing it.\n\n> DETAIL: Unexpected end of input.\nIn many cases, ending errors will happen, so I consolidate it.\n\nSELECT '{{},}'::text[];\nsolved by tracking current token type and previous token type.\n\nselect '{\\{}'::text[];\nsolved by update dstendptr.\n\nattached.", "msg_date": "Mon, 11 Sep 2023 13:26:16 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "11.09.2023 08:26, jian he wrote:\n> hi.\n> Thanks for reviewing it.\n>\n>> DETAIL: Unexpected end of input.\n> In many cases, ending errors will happen, so I consolidate it.\n>\n> SELECT '{{},}'::text[];\n> solved by tracking current token type and previous token type.\n>\n> select '{\\{}'::text[];\n> solved by update dstendptr.\n>\n> attached.\n\nThank you!\nI can confirm that all those anomalies are fixed now.\nBut new version brings a warning when compiled with gcc:\narrayfuncs.c:659:9: warning: variable 'prev_tok' is uninitialized when used here [-Wuninitialized]\n                                 if (prev_tok == ATOK_DELIM || nest_level == 0)\n                                     ^~~~~~~~\narrayfuncs.c:628:3: note: variable 'prev_tok' is declared 
here\n                 ArrayToken      prev_tok;\n                 ^\n1 warning generated.\n\nAlso it looks like an updated comment needs fixing/improving:\n  /* No array dimensions, so first literal character should be oepn curl-braces */\n(should be an opening brace?)\n\n(I haven't look at the code closely.)\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 11 Sep 2023 15:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "On Mon, Sep 11, 2023 at 8:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n> I can confirm that all those anomalies are fixed now.\n> But new version brings a warning when compiled with gcc:\n> arrayfuncs.c:659:9: warning: variable 'prev_tok' is uninitialized when used here [-Wuninitialized]\n> if (prev_tok == ATOK_DELIM || nest_level == 0)\n> ^~~~~~~~\n> arrayfuncs.c:628:3: note: variable 'prev_tok' is declared here\n> ArrayToken prev_tok;\n> ^\n> 1 warning generated.\n>\n> Also it looks like an updated comment needs fixing/improving:\n> /* No array dimensions, so first literal character should be oepn curl-braces */\n> (should be an opening brace?)\n>\n\nfixed these 2 issues.\n--query\nSELECT ('{ ' || string_agg(chr((ascii('B') + round(random() * 25)) ::\ninteger),', ') || ' }')::text[]\nFROM generate_series(1,1e6) \\watch i=0.1 c=1\n\nAfter applying the patch, the above query runs slightly faster.", "msg_date": "Tue, 12 Sep 2023 16:45:29 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "12.09.2023 11:45, jian he wrote:\n> On Mon, Sep 11, 2023 at 8:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>> I can confirm that all those anomalies are fixed now.\n>> But new version brings a warning when compiled with gcc:\n>> arrayfuncs.c:659:9: warning: variable 'prev_tok' is uninitialized when used here [-Wuninitialized]\n>> if 
(prev_tok == ATOK_DELIM || nest_level == 0)\n>> ^~~~~~~~\n>> arrayfuncs.c:628:3: note: variable 'prev_tok' is declared here\n>> ArrayToken prev_tok;\n>> ^\n>> 1 warning generated.\n>>\n>> Also it looks like an updated comment needs fixing/improving:\n>> /* No array dimensions, so first literal character should be oepn curl-braces */\n>> (should be an opening brace?)\n>>\n> fixed these 2 issues.\n> --query\n> SELECT ('{ ' || string_agg(chr((ascii('B') + round(random() * 25)) ::\n> integer),', ') || ' }')::text[]\n> FROM generate_series(1,1e6) \\watch i=0.1 c=1\n>\n> After applying the patch, the above query runs slightly faster.\n\nThank you, Jian He!\n\nNow I see only a few wrinkles.\n1) A minor asymmetry in providing details appeared:\nselect E'{\"a\"a}'::text[];\nERROR:  malformed array literal: \"{\"a\"a}\"\nLINE 1: select E'{\"a\"a}'::text[];\n                ^\nDETAIL:  Unexpected array element.\n\nselect E'{a\"a\"}'::text[];\nERROR:  malformed array literal: \"{a\"a\"}\"\nLINE 1: select E'{a\"a\"}'::text[];\n                ^\n(no DETAIL)\n\nOld behavior:\nselect E'{a\"a\"}'::text[];\nERROR:  malformed array literal: \"{a\"a\"}\"\nLINE 1: select E'{a\"a\"}'::text[];\n                ^\nDETAIL:  Unexpected array element.\n\nselect E'{\"a\"a}'::text[];\nERROR:  malformed array literal: \"{\"a\"a}\"\nLINE 1: select E'{\"a\"a}'::text[];\n                ^\nDETAIL:  Unexpected array element.\n\n2) CPPFLAGS=\"-DARRAYDEBUG\" ./configure ... breaks \"make check\", maybe change elog(NOTICE) to elog(DEBUG1)?\n2a) a message logged there lacks some delimiter before \"lBound info\":\nNOTICE:  array_in- ndim 1 ( 3 -1 -1 -1 -1 -1lBound info 1 1 1 1 1 1) for {red,green,blue}\nwhat about changing the format to \"ndim 1 ( 3 -1 -1 -1 -1 -1; lBound info: 1 1 1 1 1 1)\"?\n\n3) It seems that new comments need polishing, in particular:\n  /* initialize dim, lBound. 
useful for ReadArrayDimensions ReadArrayStr */\n->?\n  /* Initialize dim, lBound for ReadArrayDimensions, ReadArrayStr */\n\nOtherwise, we determine the dimensions from the in curly-braces\n->?\nOtherwise, we determine the dimensions from the curly braces.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 13 Sep 2023 09:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "On Wed, Sep 13, 2023 at 2:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n>\n> Now I see only a few wrinkles.\n> 1) A minor asymmetry in providing details appeared:\n> select E'{\"a\"a}'::text[];\n> ERROR: malformed array literal: \"{\"a\"a}\"\n> LINE 1: select E'{\"a\"a}'::text[];\n> ^\n> DETAIL: Unexpected array element.\n>\n> select E'{a\"a\"}'::text[];\n> ERROR: malformed array literal: \"{a\"a\"}\"\n> LINE 1: select E'{a\"a\"}'::text[];\n> ^\n> (no DETAIL)\n>\n> Old behavior:\n> select E'{a\"a\"}'::text[];\n> ERROR: malformed array literal: \"{a\"a\"}\"\n> LINE 1: select E'{a\"a\"}'::text[];\n> ^\n> DETAIL: Unexpected array element.\n>\n> select E'{\"a\"a}'::text[];\n> ERROR: malformed array literal: \"{\"a\"a}\"\n> LINE 1: select E'{\"a\"a}'::text[];\n> ^\n> DETAIL: Unexpected array element.\n\nfixed and added these two query to the test.\n\n> 2) CPPFLAGS=\"-DARRAYDEBUG\" ./configure ... breaks \"make check\", maybe change elog(NOTICE) to elog(DEBUG1)?\n> 2a) a message logged there lacks some delimiter before \"lBound info\":\n> NOTICE: array_in- ndim 1 ( 3 -1 -1 -1 -1 -1lBound info 1 1 1 1 1 1) for {red,green,blue}\n> what about changing the format to \"ndim 1 ( 3 -1 -1 -1 -1 -1; lBound info: 1 1 1 1 1 1)\"?\n\nfixed. Use elog(DEBUG1) now.\n\n> 3) It seems that new comments need polishing, in particular:\n> /* initialize dim, lBound. 
useful for ReadArrayDimensions ReadArrayStr */\n> ->?\n> /* Initialize dim, lBound for ReadArrayDimensions, ReadArrayStr */\n>\n> Otherwise, we determine the dimensions from the in curly-braces\n> ->?\n> Otherwise, we determine the dimensions from the curly braces.\n>\n> Best regards,\n> Alexander\n\ncomments updates. please check the attached.", "msg_date": "Wed, 13 Sep 2023 16:55:23 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "13.09.2023 11:55, jian he wrote:\n>> 2) CPPFLAGS=\"-DARRAYDEBUG\" ./configure ... breaks \"make check\", maybe change elog(NOTICE) to elog(DEBUG1)?\n>> 2a) a message logged there lacks some delimiter before \"lBound info\":\n>> NOTICE: array_in- ndim 1 ( 3 -1 -1 -1 -1 -1lBound info 1 1 1 1 1 1) for {red,green,blue}\n>> what about changing the format to \"ndim 1 ( 3 -1 -1 -1 -1 -1; lBound info: 1 1 1 1 1 1)\"?\n> fixed. Use elog(DEBUG1) now.\n\nThanks for the fixes!\n\nI didn't mean to remove the prefix \"array_in-\", but in fact I was confused\nby the \"{function_name}-\" syntax, and now when I've looked at it closely, I\nsee that that syntax was quite popular (\"date_in- \", \"single_decode- \", ...)\nback in 1997 (see 9d8ae7977). But nowadays it is out of fashion, with most\nof such debugging prints were gone with 7a877dfd2 and the next-to-last one\nwith 50861cd68. Moreover, as the latter commit shows, such debugging output\ncan be eliminated completely without remorse. (And I couldn't find mentions\nof ARRAYDEBUG in pgsql-bugs, pgsql-hackers archives, so probably no one used\nthat debugging facility since it's introduction.)\nAs of now, the output still weird (I mean the excessive right parenthesis):\nDEBUG:  ndim 1 ( 2 -1 -1 -1 -1 -1); lBound info: 1 1 1 1 1 1) for {0,0}\n\nOtherwise, from a user perspective, the patch set looks good to me. 
(Though\nmaybe English language editorialization still needed before committing it.)\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 14 Sep 2023 09:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "On Thu, Sep 14, 2023 at 2:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n>\n> I didn't mean to remove the prefix \"array_in-\", but in fact I was confused\n> by the \"{function_name}-\" syntax, and now when I've looked at it closely, I\n> see that that syntax was quite popular (\"date_in- \", \"single_decode- \", ...)\n> back in 1997 (see 9d8ae7977). But nowadays it is out of fashion, with most\n> of such debugging prints were gone with 7a877dfd2 and the next-to-last one\n> with 50861cd68. Moreover, as the latter commit shows, such debugging output\n> can be eliminated completely without remorse. (And I couldn't find mentions\n> of ARRAYDEBUG in pgsql-bugs, pgsql-hackers archives, so probably no one used\n> that debugging facility since it's introduction.)\n> As of now, the output still weird (I mean the excessive right parenthesis):\n> DEBUG: ndim 1 ( 2 -1 -1 -1 -1 -1); lBound info: 1 1 1 1 1 1) for {0,0}\n>\n\nhi.\nsimilar to NUMERIC_DEBUG. 
I made the following adjustments.\nif unnecessary, removing this part seems also fine, in GDB, you can\nprint it out directly.\n\n/* ----------\n * Uncomment the following to get a dump of a array's ndim, dim, lBound.\n * ----------\n#define ARRAYDEBUG\n */\n#ifdef ARRAYDEBUG\n{\nStringInfoData buf;\n\ninitStringInfo(&buf);\n\nappendStringInfo(&buf, \"array_in- ndim %d, dim info(\", ndim);\nfor (int i = 0; i < MAXDIM; i++)\nappendStringInfo(&buf, \" %d\", dim[i]);\nappendStringInfo(&buf, \"); lBound info(\");\nfor (int i = 0; i < MAXDIM; i++)\nappendStringInfo(&buf, \" %d\", lBound[i]);\nappendStringInfo(&buf, \") for %s\", string);\nelog(DEBUG1, \"%s\", buf.data);\npfree(buf.data);\n}\n#endif\n\nother than this, no other changes.", "msg_date": "Thu, 14 Sep 2023 16:14:01 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "rebase after commit\n(https://git.postgresql.org/cgit/postgresql.git/commit/?id=611806cd726fc92989ac918eac48fd8d684869c7)", "msg_date": "Mon, 30 Oct 2023 08:00:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "I got back to looking at this today (sorry for delay), and did a pass\nof code review. I think we are getting pretty close to something\ncommittable. The one loose end IMO is this bit in ReadArrayToken:\n\n+ case '\"':\n+\n+ /*\n+ * XXX \"Unexpected %c character\" would be more apropos, but\n+ * this message is what the pre-v17 implementation produced,\n+ * so we'll keep it for now.\n+ */\n+ errsave(escontext,\n+ (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n+ errmsg(\"malformed array literal: \\\"%s\\\"\", origStr),\n+ errdetail(\"Unexpected array element.\")));\n+ return ATOK_ERROR;\n\nThis comes out when you write something like '{foo\"bar\"}', and I'd\nsay the choice of message is not great. 
On the other hand, it's\nconsistent with what you get from '{\"foo\"\"bar\"}', and if we wanted\nto change that too then some tweaking of the state machine in\nReadArrayStr would be required (or else modify ReadArrayToken so\nit doesn't return instantly upon seeing the second quote mark).\nI'm not sure that this is worth messing with.\n\nAnyway, I think we are well past the point where splitting the patch\ninto multiple parts is worth doing, because we've rewritten pretty\nmuch all of this code, and the intermediate versions are not terribly\nhelpful. So I just folded it all into one patch.\n\nSome notes about specific points:\n\n* Per previous discussion, I undid the change to allow \"[1:0]\"\ndimensions, but I left a comment behind about that.\n\n* Removing the ArrayGetNItemsSafe/ArrayCheckBoundsSafe calls\nseems OK, but then we need to be more careful about detecting\noverflows and disallowed cases in ReadArrayDimensions.\n\n* I don't think the ARRAYDEBUG code is of any value whatever.\nThe fact that nobody bothered to improve it to print more than\nthe dim[] values proves it hasn't been used in decades.\nLet's just nuke it.\n\n* We can simplify the state machine in ReadArrayStr some more: it\nseems to me it's sufficient to track \"expect_delim\", as long as you\nrealize that that really means \"expect typdelim or right brace\".\n(Maybe another name would be better? 
I couldn't think of anything\nshort though.)\n\n* I switched to using a StringInfo instead of a fixed-size elembuf,\nas Heikki speculated about.\n\n* I added some more test cases to cover things that evidently weren't\nsufficiently tested, like the has_escapes business which was flat\nout broken in v10, and to improve the code coverage report.\n\nComments?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 07 Nov 2023 18:52:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "Hello Tom,\n\n08.11.2023 02:52, Tom Lane wrote:\n> Comments?\n\nThank you for the update! I haven't looked into the code, just did manual\ntesting and rechecked commands given in the arrays documentation ([1]).\nEverything works correctly, except for one minor difference:\nINSERT INTO sal_emp\n     VALUES ('Bill',\n     '{10000, 10000, 10000, 10000}',\n     '{{\"meeting\", \"lunch\"}, {\"meeting\"}}');\n\ncurrently gives:\nERROR:  malformed array literal: \"{{\"meeting\", \"lunch\"}, {\"meeting\"}}\"\nLINE 4:     '{{\"meeting\", \"lunch\"}, {\"meeting\"}}');\n             ^\nDETAIL:  Multidimensional arrays must have sub-arrays with matching dimensions.\n\nnot\nERROR:  multidimensional arrays must have array expressions with matching dimensions\n\nIt seems that this inconsistency appeared with 475aedd1e, so it's not new\nat all, but maybe fix it or describe the error more generally. (Though it\nmight be supposed that \"for example\" covers slight deviations.)\n\n[1] https://www.postgresql.org/docs/devel/arrays.html\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 8 Nov 2023 18:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> Thank you for the update! 
I haven't looked into the code, just did manual\n> testing and rechecked commands given in the arrays documentation ([1]).\n> Everything works correctly, except for one minor difference:\n> INSERT INTO sal_emp\n>     VALUES ('Bill',\n>     '{10000, 10000, 10000, 10000}',\n>     '{{\"meeting\", \"lunch\"}, {\"meeting\"}}');\n\n> currently gives:\n> ERROR:  malformed array literal: \"{{\"meeting\", \"lunch\"}, {\"meeting\"}}\"\n> LINE 4:     '{{\"meeting\", \"lunch\"}, {\"meeting\"}}');\n>             ^\n> DETAIL:  Multidimensional arrays must have sub-arrays with matching dimensions.\n\n> not\n> ERROR:  multidimensional arrays must have array expressions with matching dimensions\n\nOh! I had not realized we had actual documentation examples covering\nthis area. Yeah, that doc needs to be updated to show the current\nwording of the error. Thanks for catching that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Nov 2023 10:56:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "I wrote:\n> This comes out when you write something like '{foo\"bar\"}', and I'd\n> say the choice of message is not great. On the other hand, it's\n> consistent with what you get from '{\"foo\"\"bar\"}', and if we wanted\n> to change that too then some tweaking of the state machine in\n> ReadArrayStr would be required (or else modify ReadArrayToken so\n> it doesn't return instantly upon seeing the second quote mark).\n> I'm not sure that this is worth messing with.\n\nAfter further thought I concluded that this area is worth spending\na little more code for. If we have input like '{foo\"bar\"}' or\n'{\"foo\"bar}' or '{\"foo\"\"bar\"}', what it most likely means is that\nthe user misunderstood the quoting rules. 
A message like \"Unexpected\narray element\" is pretty completely unhelpful for figuring that out.\nThe alternative I was considering, \"Unexpected \"\"\" character\", would\nnot be much better. What we want to say is something like \"Incorrectly\nquoted array element\", and the attached v12 makes ReadArrayToken do\nthat for both quoted and unquoted cases.\n\nI also fixed the obsolete documentation that Alexander noted, and\ncleaned up a couple other infelicities (notably, I'd blindly written\nereport(ERROR) in one place where ereturn is now the way).\n\nBarring objections, I think v12 is committable.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 09 Nov 2023 11:57:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "On 09/11/2023 18:57, Tom Lane wrote:\n> After further thought I concluded that this area is worth spending\n> a little more code for. If we have input like '{foo\"bar\"}' or\n> '{\"foo\"bar}' or '{\"foo\"\"bar\"}', what it most likely means is that\n> the user misunderstood the quoting rules. A message like \"Unexpected\n> array element\" is pretty completely unhelpful for figuring that out.\n> The alternative I was considering, \"Unexpected \"\"\" character\", would\n> not be much better. What we want to say is something like \"Incorrectly\n> quoted array element\", and the attached v12 makes ReadArrayToken do\n> that for both quoted and unquoted cases.\n\n+1\n\n> I also fixed the obsolete documentation that Alexander noted, and\n> cleaned up a couple other infelicities (notably, I'd blindly written\n> ereport(ERROR) in one place where ereturn is now the way).\n> \n> Barring objections, I think v12 is committable.\n\nLooks good to me. 
Just two little things caught my eye:\n\n1.\n\n> \t/* Initialize dim, lBound for ReadArrayDimensions, ReadArrayStr */\n> \tfor (int i = 0; i < MAXDIM; i++)\n> \t{\n> \t\tdim[i] = -1;\t\t\t/* indicates \"not yet known\" */\n> \t\tlBound[i] = 1;\t\t\t/* default lower bound */\n> \t}\n\nThe function comments in ReadArrayDimensions and ReadArrayStr don't make \nit clear that these arrays need to be initialized like this. \nReadArrayDimensions() says that they are output variables, and \nReadArrayStr() doesn't mention anything about having to initialize them.\n\n\n2. This was the same before this patch, but:\n\npostgres=# select '{{{{{{{{{{1}}}}}}}}}}'::int[];\nERROR: number of array dimensions (7) exceeds the maximum allowed (6)\nLINE 1: select '{{{{{{{{{{1}}}}}}}}}}'::int[];\n ^\n\nThe error message isn't great, as the literal contains 10 dimensions, \nnot 7 as the error message claims.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 13 Nov 2023 00:30:52 +0100", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 09/11/2023 18:57, Tom Lane wrote:\n>> Barring objections, I think v12 is committable.\n\n> Looks good to me. Just two little things caught my eye:\n\n> 1.\n> The function comments in ReadArrayDimensions and ReadArrayStr don't make \n> it clear that these arrays need to be initialized like this. \n> ReadArrayDimensions() says that they are output variables, and \n> ReadArrayStr() doesn't mention anything about having to initialize them.\n\nRoger, will fix that.\n\n> 2. 
This was the same before this patch, but:\n\n> postgres=# select '{{{{{{{{{{1}}}}}}}}}}'::int[];\n> ERROR: number of array dimensions (7) exceeds the maximum allowed (6)\n> LINE 1: select '{{{{{{{{{{1}}}}}}}}}}'::int[];\n> ^\n> The error message isn't great, as the literal contains 10 dimensions, \n> not 7 as the error message claims.\n\nYeah. To make that report accurate, we'd have to somehow postpone\nissuing the error until we've seen all the left braces (or at least\nall the initial ones). There's a related problem in reading an\nexplicitly-dimensioned array:\n\npostgres=# select '[1][2][3][4][5][6][7][8][9]={}'::text[];\nERROR: number of array dimensions (7) exceeds the maximum allowed (6)\n\nI kind of think it's not worth the trouble. What was discussed\nupthread was revising the message to not claim it knows how many\ndimensions there are. The related cases in plperl and plpython just\nsay \"number of array dimensions exceeds the maximum allowed (6)\",\nand there's a case to be made for adjusting the core messages\nsimilarly. I figured that could be a separate patch though,\nsince it'd touch more than array_in (there's about a dozen\noccurrences of the former wording).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 12 Nov 2023 19:11:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up array_in()" }, { "msg_contents": "I wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> 2. This was the same before this patch, but:\n\n>> postgres=# select '{{{{{{{{{{1}}}}}}}}}}'::int[];\n>> ERROR: number of array dimensions (7) exceeds the maximum allowed (6)\n>> LINE 1: select '{{{{{{{{{{1}}}}}}}}}}'::int[];\n>> ^\n>> The error message isn't great, as the literal contains 10 dimensions, \n>> not 7 as the error message claims.\n\n> Yeah. To make that report accurate, we'd have to somehow postpone\n> issuing the error until we've seen all the left braces (or at least\n> all the initial ones). 
There's a related problem in reading an\n> explicitly-dimensioned array:\n\n> postgres=# select '[1][2][3][4][5][6][7][8][9]={}'::text[];\n> ERROR: number of array dimensions (7) exceeds the maximum allowed (6)\n\n> I kind of think it's not worth the trouble. What was discussed\n> upthread was revising the message to not claim it knows how many\n> dimensions there are.\n\nI pushed the main patch. Here's a proposed delta to deal with\nthe bogus-dimensionality-count issue. There are a few more places\nwhere I left things alone because the code does know what the\nintended dimensionality will be; so there are still two versions\nof the translatable error message.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 13 Nov 2023 13:23:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up array_in()" } ]
[ { "msg_contents": "Hi,\n\nThere is at present a comment at the top of transformStatsStmt which\nsays \"To avoid race conditions, it's important that this function\nrelies only on the passed-in relid (and not on stmt->relation) to\ndetermine the target relation.\" However, doing what the comment says\nwe need to do doesn't actually avoid the problem we need to avoid. The\nissue here, as I understand it, is that if we look up the same\nless-than-fully-qualified name multiple times, we might get different\nanswers due to concurrent activity, and that might create a security\nvulnerability, along the lines of CVE-2014-0062. So what the code\nshould be doing is looking up the user-provided name just once and\nthen using that value throughout all subsequent processing stages, but\nit doesn't actually. The caller of transformStatsStmt() looks up the\nRangeVar and gets an OID, but that value is then discarded and\nCreateStatistics does another lookup on the same name, which means\nthat we're not really avoiding the hazard about which the comment\nseems to be concerned.\n\nSo that leads to the question of whether there's a security\nvulnerability here. I and a few other members of the pgsql-security\nhaven't been able to find one in brief review, but we may have missed\nsomething. Fortunately, the permissions checks happen after the second\nname lookup inside CreateStatistics(), so it doesn't seem that, for\nexample, you can leverage this to create extended statistics on a\ntable that you don't own. You can possibly get the first part of the\noperation, where we transform the CREATE STATISTICS command's WHERE\nclause, to operate on one table that you do own and then the second\npart on another table that you don't own, but even if that's so, the\nsecond part is just going to fail before doing anything interesting,\nso it doesn't seem like there's a problem. 
If anyone reading this can\nspot an exploit, please speak up!\n\nSo the attached patch is proposed as code cleanup, rather than\nsecurity patches. It changes the code to avoid the duplicate name\nlookup altogether. There is no reason that I know of why this needs to\nbe back-patched for correctness, but I think it's worth putting into\nmaster to make the code nicer and avoid doing things that in some\ncircumstances can be risky.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 2 May 2023 13:49:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "code cleanup for CREATE STATISTICS" } ]
[ { "msg_contents": "Hi,\n\nI just found the naming of the ItemId variables is not consistent in\nheapam.c. There are 13 'lpp's and 112 'lp's. Technically 'lpp' is correct\nas ItemId is a line pointer's pointer and there used to be code like\n\"++lpp\" for line pointer array iteration. Now that all the \"++lpp\" code has\nbeen removed and there are 100+ more occurrences of 'lp' than 'lpp', I\nsuggest we change 'lpp' to 'lp' to make things consistent and avoid\nconfusion.\n\nBest Regards,\nZian Wang", "msg_date": "Tue, 2 May 2023 20:15:59 -0400", "msg_from": "Yaphters W <yaphters@gmail.com>", "msg_from_op": true, "msg_subject": "Rename 'lpp' to 'lp' in heapam.c" }, { "msg_contents": "On Wed, 3 May 2023 at 12:16, Yaphters W <yaphters@gmail.com> wrote:\n> I just found the naming of the ItemId variables is not consistent in heapam.c. There are 13 'lpp's and 112 'lp's. Technically 'lpp' is correct as ItemId is a line pointer's pointer and there used to be code like \"++lpp\" for line pointer array iteration. Now that all the \"++lpp\" code has been removed and there are 100+ more occurrences of 'lp' than 'lpp', I suggest we change 'lpp' to 'lp' to make things consistent and avoid confusion.\n\nI don't really agree that one is any more correct than the other. I\nalso don't think we should be making changes like this as doing this\nmay give some false impression that we have some standard to follow\nhere that a local variable of a given type must be given a certain\nname. 
To comply with such a standard seems like it would take close to\nan endless number of patches which would just result in wasted\nreviewer and committer time and give us nothing but pain while back\npatching.\n\n-1 from me.\n\nDavid\n\n\n", "msg_date": "Wed, 3 May 2023 14:18:17 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rename 'lpp' to 'lp' in heapam.c" }, { "msg_contents": "On Tue, May 2, 2023 at 10:18 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I don't really agree that one is any more correct than the other. I\n> also don't think we should be making changes like this as doing this\n> may give some false impression that we have some standard to follow\n> here that a local variable of a given type must be given a certain\n> name. To comply with such a standard seems like it would take close to\n> an endless number of patches which would just result in wasted\n> reviewer and committer time and give us nothing but pain while back\n> patching.\n>\n> -1 from me.\n\nI agree with David. This seems like pointless code churn.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 May 2023 10:48:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rename 'lpp' to 'lp' in heapam.c" } ]
[ { "msg_contents": "Hi\n\nCommit b23cd185f [1] forbids manual creation of ON SELECT rule on\na table, and updated the main rules documentation [2], but didn't update\nthe corresponding CREATE RULE page [3].\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b23cd185fd5410e5204683933f848d4583e34b35\n[2] https://www.postgresql.org/docs/devel/rules-views.html\n[3] https://www.postgresql.org/docs/devel/sql-createrule.html\n\nWhile poking around at an update for that, unless I'm missing something it is\nnow not possible to use \"CREATE RULE ... ON SELECT\" for any kind of relation,\ngiven that it's disallowed on views / material views already.\n\nAssuming that's the case, that makes this useless syntax in a non-SQL-standard\ncommand, is there any reason to keep it in the grammar at all?\n\nAttached suggested patch removes it entirely and updates the CREATE RULE\ndocumentation.\n\nApart from removing ON SELECT from the grammar, the main change is the removal\nof usage checks in DefineQueryRewrite(), as the only time it is called with the\nevent_type set to \"CMD_SELECT\" is when a view/matview is created, and presumably\nwe can trust the internal caller to do the right thing. I added an Assert in\njust in case, dunno if that's really needed. In passing, a redundant workaround\nfor pre-7.3 rule names gets removed as well.\n\nI note that with or without this change, pg_get_ruledef() e.g. executed with:\n\n SELECT pg_get_ruledef(oid) FROM pg_rewrite WHERE\nev_class='some_view'::regclass;\n\nemits SQL for CREATE RULE which can no longer be executed; I don't think there\nis anything which can be done about that other than noting it as a historical\nimplementation oddity.\n\n\nRegards\n\nIan Barwick", "msg_date": "Thu, 4 May 2023 12:39:17 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "\"CREATE RULE ... ON SELECT\": redundant?" 
}, { "msg_contents": "Ian Lawrence Barwick <barwick@gmail.com> writes:\n> While poking around at an update for that, unless I'm missing something it is\n> now not possible to use \"CREATE RULE ... ON SELECT\" for any kind of relation,\n> given that it's disallowed on views / material views already.\n\nWhat makes you think it's disallowed on views? You do need to use\nCREATE OR REPLACE, since the rule will already exist.\n\nregression=# create view v as select * from int8_tbl ;\nCREATE VIEW\nregression=# create or replace rule \"_RETURN\" as on select to v do instead select q1, q2+1 as q2 from int8_tbl ;\nCREATE RULE\nregression=# \\d+ v\n View \"public.v\"\n Column | Type | Collation | Nullable | Default | Storage | Description \n--------+--------+-----------+----------+---------+---------+-------------\n q1 | bigint | | | | plain | \n q2 | bigint | | | | plain | \nView definition:\n SELECT int8_tbl.q1,\n int8_tbl.q2 + 1 AS q2\n FROM int8_tbl;\n\nNow, this is certainly syntax that's deprecated in favor of using\nCREATE OR REPLACE VIEW, but I'm very hesitant to remove it. ISTR\nthat ancient pg_dump files used it in cases involving circular\ndependencies. If you want to adjust the docs to say that it's\ndeprecated in favor of CREATE OR REPLACE VIEW, I could get on\nboard with that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 May 2023 23:51:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"CREATE RULE ... ON SELECT\": redundant?" }, { "msg_contents": "2023年5月4日(木) 12:51 Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> Ian Lawrence Barwick <barwick@gmail.com> writes:\n> > While poking around at an update for that, unless I'm missing something it is\n> > now not possible to use \"CREATE RULE ... ON SELECT\" for any kind of relation,\n> > given that it's disallowed on views / material views already.\n>\n> What makes you think it's disallowed on views? 
You do need to use\n> CREATE OR REPLACE, since the rule will already exist.\n\nAh, \"OR REPLACE\". Knew I was missing something.\n\n> regression=# create view v as select * from int8_tbl ;\n> CREATE VIEW\n> regression=# create or replace rule \"_RETURN\" as on select to v do instead select q1, q2+1 as q2 from int8_tbl ;\n> CREATE RULE\n> regression=# \\d+ v\n> View \"public.v\"\n> Column | Type | Collation | Nullable | Default | Storage | Description\n> --------+--------+-----------+----------+---------+---------+-------------\n> q1 | bigint | | | | plain |\n> q2 | bigint | | | | plain |\n> View definition:\n> SELECT int8_tbl.q1,\n> int8_tbl.q2 + 1 AS q2\n> FROM int8_tbl;\n>\n> Now, this is certainly syntax that's deprecated in favor of using\n> CREATE OR REPLACE VIEW, but I'm very hesitant to remove it. ISTR\n> that ancient pg_dump files used it in cases involving circular\n> dependencies. If you want to adjust the docs to say that it's\n> deprecated in favor of CREATE OR REPLACE VIEW, I could get on\n> board with that.\n\n'k, I will work on a doc patch.\n\nThanks\n\nIan Barwick\n\n\n", "msg_date": "Thu, 4 May 2023 13:12:49 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"CREATE RULE ... ON SELECT\": redundant?" }, { "msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> Now, this is certainly syntax that's deprecated in favor of using\n Tom> CREATE OR REPLACE VIEW, but I'm very hesitant to remove it. 
ISTR\n Tom> that ancient pg_dump files used it in cases involving circular\n Tom> dependencies.\n\nI thought they used CREATE RULE on a table?\n\nIn fact here is an example from a pg 9.5 pg_dump output (with cruft\nremoved):\n\nCREATE TABLE public.cdep (\n a integer,\n b text\n);\nCREATE FUNCTION public.cdep_impl() RETURNS SETOF public.cdep\n LANGUAGE plpgsql\n AS $$ begin return query select a,b from (values (1,'foo'),(2,'bar')) v(a,b); end; $$;\nCREATE RULE \"_RETURN\" AS\n ON SELECT TO public.cdep DO INSTEAD SELECT cdep_impl.a,\n cdep_impl.b\n FROM public.cdep_impl() cdep_impl(a, b);\n\nand this now fails to restore:\n\npsql:t1.sql:68: ERROR: relation \"cdep\" cannot have ON SELECT rules\nDETAIL: This operation is not supported for tables.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Thu, 04 May 2023 07:13:04 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: \"CREATE RULE ... ON SELECT\": redundant?" }, { "msg_contents": ">>>>> \"Andrew\" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n\n Andrew> I thought they used CREATE RULE on a table?\n\n Andrew> In fact here is an example from a pg 9.5 pg_dump output (with\n Andrew> cruft removed):\n\nAnd checking other versions, 9.6 is the same, it's only with pg 10 that\nit switches to creating a dummy view instead of a table (and using\nCREATE OR REPLACE VIEW, no mention of rules).\n\nSo if the goal was to preserve compatibility with pre-pg10 dumps, that's\nalready broken; if that's ok, then I don't see any obvious reason not to\nalso remove or at least deprecate CREATE RULE ... ON SELECT for views.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Thu, 04 May 2023 07:20:46 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: \"CREATE RULE ... ON SELECT\": redundant?" 
}, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> And checking other versions, 9.6 is the same, it's only with pg 10 that\n> it switches to creating a dummy view instead of a table (and using\n> CREATE OR REPLACE VIEW, no mention of rules).\n\n9.6.16 or later will use CREATE OR REPLACE VIEW, cf 404cbc562.\n\n> So if the goal was to preserve compatibility with pre-pg10 dumps, that's\n> already broken; if that's ok, then I don't see any obvious reason not to\n> also remove or at least deprecate CREATE RULE ... ON SELECT for views.\n\nSince the CREATE OR REPLACE case still works, I don't think removing\nit is OK.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 May 2023 08:17:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"CREATE RULE ... ON SELECT\": redundant?" } ]
[ { "msg_contents": "When working on the improper qual pushdown issue [1], there is a need in\nthe proposed fix to avoid scanning all the SpecialJoinInfos, since that\nis too expensive. I think this might be a common requirement. In the\ncurrent codes there are several places where we need to scan all the\nSpecialJoinInfos in join_info_list looking for SpecialJoinInfos that\nbelong to a given outer join relid set, which is an O(n) operation. So\nstart a new thread for this requirement.\n\nTo improve the O(n) operation, introduce join_info_array to allow direct\nlookups of SpecialJoinInfo by ojrelid. This is doable because for each\nnon-zero ojrelid there can only be one SpecialJoinInfo. This can\nbenefit clause_is_computable_at() and have_unsafe_outer_join_ref(), as\nthe patch does, and more future usages such as\nadd_outer_joins_to_relids() in the proposed patch for issue [1].\n\n[1]\nhttps://www.postgresql.org/message-id/flat/0b819232-4b50-f245-1c7d-c8c61bf41827%40postgrespro.ru\n\nThanks\nRichard", "msg_date": "Thu, 4 May 2023 16:07:10 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Introduce join_info_array for direct lookups of SpecialJoinInfo by\n ojrelid" }, { "msg_contents": "On Thu, May 4, 2023 at 4:07 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> When working on the improper qual pushdown issue [1], there is a need in\n> the proposed fix to avoid scanning all the SpecialJoinInfos, since that\n> is too expensive. I think this might be a common requirement. In the\n> current codes there are several places where we need to scan all the\n> SpecialJoinInfos in join_info_list looking for SpecialJoinInfos that\n> belong to a given outer join relid set, which is an O(n) operation. So\n> start a new thread for this requirement.\n>\n> To improve the O(n) operation, introduce join_info_array to allow direct\n> lookups of SpecialJoinInfo by ojrelid. 
This is doable because for each\n> non-zero ojrelid there can only be one SpecialJoinInfo. This can\n> benefit clause_is_computable_at() and have_unsafe_outer_join_ref(), as\n> the patch does, and more future usages such as\n> add_outer_joins_to_relids() in the proposed patch for issue [1].\n>\n\nBTW, I just noticed that the introduction of join_info_array can also\nbenefit make_outerjoininfo(), check_redundant_nullability_qual() and\nget_join_domain_min_rels(). So update the patch to do the changes.\n\nI'd like to devise a test query that shows performance gain from this\npatch, but I'm not sure how to do that. May need help here.\n\nAny thoughts on this patch?\n\nThanks\nRichard", "msg_date": "Mon, 8 May 2023 10:30:05 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce join_info_array for direct lookups of SpecialJoinInfo\n by ojrelid" }, { "msg_contents": "On Mon, May 8, 2023 at 10:30 AM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> I'd like to devise a test query that shows performance gain from this\n> patch, but I'm not sure how to do that. May need help here.\n>\n\nI've been trying for some time but still haven't been able to come up\nwith a test case that shows the performance improvement of this patch.\nMy best guess is that situations that can benefit from direct lookups of\nSpecialJoinInfo are pretty rare, and the related codes are not in the\ncritical path. So for now I think I'd better withdraw this patch to\navoid people wasting time reviewing it.\n\nThanks\nRichard\n\nOn Mon, May 8, 2023 at 10:30 AM Richard Guo <guofenglinux@gmail.com> wrote:I'd like to devise a test query that shows performance gain from thispatch, but I'm not sure how to do that.  
May need help here.I've been trying for some time but still haven't been able to come upwith a test case that shows the performance improvement of this patch.My best guess is that situations that can benefit from direct lookups ofSpecialJoinInfo are pretty rare, and the related codes are not in thecritical path.  So for now I think I'd better withdraw this patch toavoid people wasting time reviewing it.ThanksRichard", "msg_date": "Wed, 6 Sep 2023 17:10:14 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce join_info_array for direct lookups of SpecialJoinInfo\n by ojrelid" } ]
[ { "msg_contents": "When reading a memory contexts log I realized that we have this:\n\nLOG: level: 2; EventTriggerCache: 8192 total in 1 blocks; 7928 free (4 chunks); 264 used\nLOG: level: 3; Event Trigger Cache: 8192 total in 1 blocks; 2616 free (0 chunks); 5576 used\n\nThe reason is that BuildEventTriggerCache sets up a context \"EventTriggerCache\"\nwhich house a hash named \"Event Trigger Cache\" which in turn creates a context\nwith the table name. I think it makes sense that these share the same name,\nbut I think it would be less confusing if they also shared the same spelling\nwhitespace-wise. Any reason to not rename the hash EventTriggerCache to make\nthe logging a tiny bit easier to read and grep?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 4 May 2023 13:38:26 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "evtcache: EventTriggerCache vs Event Trigger Cache" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> When reading a memory contexts log I realized that we have this:\n> LOG: level: 2; EventTriggerCache: 8192 total in 1 blocks; 7928 free (4 chunks); 264 used\n> LOG: level: 3; Event Trigger Cache: 8192 total in 1 blocks; 2616 free (0 chunks); 5576 used\n\n> The reason is that BuildEventTriggerCache sets up a context \"EventTriggerCache\"\n> which house a hash named \"Event Trigger Cache\" which in turn creates a context\n> with the table name. I think it makes sense that these share the same name,\n> but I think it would be less confusing if they also shared the same spelling\n> whitespace-wise. Any reason to not rename the hash EventTriggerCache to make\n> the logging a tiny bit easier to read and grep?\n\nHmm, I'm kinda -1 on them having the same name visible in the\ncontexts dump --- that seems very confusing. 
How about naming\nthe hash \"EventTriggerCacheHash\" or so?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 May 2023 08:09:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: evtcache: EventTriggerCache vs Event Trigger Cache" }, { "msg_contents": "> On 4 May 2023, at 14:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> When reading a memory contexts log I realized that we have this:\n>> LOG: level: 2; EventTriggerCache: 8192 total in 1 blocks; 7928 free (4 chunks); 264 used\n>> LOG: level: 3; Event Trigger Cache: 8192 total in 1 blocks; 2616 free (0 chunks); 5576 used\n> \n>> The reason is that BuildEventTriggerCache sets up a context \"EventTriggerCache\"\n>> which house a hash named \"Event Trigger Cache\" which in turn creates a context\n>> with the table name. I think it makes sense that these share the same name,\n>> but I think it would be less confusing if they also shared the same spelling\n>> whitespace-wise. Any reason to not rename the hash EventTriggerCache to make\n>> the logging a tiny bit easier to read and grep?\n> \n> Hmm, I'm kinda -1 on them having the same name visible in the\n> contexts dump --- that seems very confusing. 
How about naming\n> the hash \"EventTriggerCacheHash\" or so?\n\nI think the level is the indicator here, but I have no strong opinions,\nEventTriggerCacheHash is fine by me.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 4 May 2023 14:18:13 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: evtcache: EventTriggerCache vs Event Trigger Cache" }, { "msg_contents": "> On 4 May 2023, at 14:18, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 4 May 2023, at 14:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> When reading a memory contexts log I realized that we have this:\n>>> LOG: level: 2; EventTriggerCache: 8192 total in 1 blocks; 7928 free (4 chunks); 264 used\n>>> LOG: level: 3; Event Trigger Cache: 8192 total in 1 blocks; 2616 free (0 chunks); 5576 used\n>> \n>>> The reason is that BuildEventTriggerCache sets up a context \"EventTriggerCache\"\n>>> which house a hash named \"Event Trigger Cache\" which in turn creates a context\n>>> with the table name. I think it makes sense that these share the same name,\n>>> but I think it would be less confusing if they also shared the same spelling\n>>> whitespace-wise. Any reason to not rename the hash EventTriggerCache to make\n>>> the logging a tiny bit easier to read and grep?\n>> \n>> Hmm, I'm kinda -1 on them having the same name visible in the\n>> contexts dump --- that seems very confusing. 
How about naming\n>> the hash \"EventTriggerCacheHash\" or so?\n> \n> I think the level is the indicator here, but I have no strong opinions,\n> EventTriggerCacheHash is fine by me.\n\nThe attached trivial diff does that, parking this in the next CF.\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 8 May 2023 10:39:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: evtcache: EventTriggerCache vs Event Trigger Cache" }, { "msg_contents": "On Mon, May 08, 2023 at 10:39:42AM +0200, Daniel Gustafsson wrote:\n> On 4 May 2023, at 14:18, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 4 May 2023, at 14:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Hmm, I'm kinda -1 on them having the same name visible in the\n>>> contexts dump --- that seems very confusing. How about naming\n>>> the hash \"EventTriggerCacheHash\" or so?\n>> \n>> I think the level is the indicator here, but I have no strong opinions,\n>> EventTriggerCacheHash is fine by me.\n> \n> The attached trivial diff does that, parking this in the next CF.\n\n+1 for EventTriggerCacheHash\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 May 2023 16:07:33 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: evtcache: EventTriggerCache vs Event Trigger Cache" }, { "msg_contents": "> On 8 May 2023, at 10:39, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 4 May 2023, at 14:18, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> On 4 May 2023, at 14:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>>> How about naming\n>>> the hash \"EventTriggerCacheHash\" or so?\n>> \n>> I think the level is the indicator here, but I have no strong opinions,\n>> EventTriggerCacheHash is fine by me.\n> \n> The attached trivial diff does that, parking this in the next CF.\n\nPushed, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 5 Jul 2023 09:18:44 +0200", "msg_from": "Daniel 
Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: evtcache: EventTriggerCache vs Event Trigger Cache" } ]
[ { "msg_contents": "Hello,\nI am trying out logical replication for upgrading postgres instance from\nversion 11 to 15.x. In the process, I noticed that some tables get stuck in\nthe 's' state during logical replication and they do not move to the 'r'\nstate. I tried to drop the subscription and create a new subscriber, but\nthe same thing repeated. Also, each time different tables get stuck, it is\nnot like the same tables get stuck every time. This also happens to tables\nthat do not have frequent writes to them. Then I tried to drop the tables\nwhich got stuck from the original publication and created a new publication\nand new subscribers just with the tables which got stuck in the previous\nsubscription. Now, in this new subscription, some tables go to 'r' state\n(even though they did not in the older subscription) and some tables remain\nin 's' state. If I continue doing this process for 5-6 times I am able to\ncover all of the tables. Since these tables get stuck at 's' state, the\nreplication origins are not deleted automatically when I delete the table\nfrom the publication and I had to manually delete the replication origin.\nThe logs contain \"table synchronization worker for <table_name> has\nfinished even for table in 's' state. Can somebody please help with this\nissue as I do not want to do manual intervention of creating a new\npublication and subscription for tables that get stuck at 's'.\n\nConfiguration\n\nmax_replication_slots : 75 in publisher and subscriber\nmax_worker_processes: 60 (in subscriber)\nmax_logical_replication_workers: 55 (in subscriber)\n\n\nThanks,\n\nPadmavathi\n\nHello,I am trying out logical replication for upgrading postgres instance from version 11 to 15.x. In the process, I noticed that some tables get stuck in the 's' state during logical replication and they do not move to the 'r' state. I tried to drop the subscription and create a new subscriber, but the same thing repeated. 
Also, each time different tables get stuck, it is not like the same tables get stuck every time. This also happens to tables that do not have frequent writes to them. Then I tried to drop the tables which got stuck from the original publication and created a new publication and new subscribers just with the tables which got stuck in the previous subscription. Now, in this new subscription, some tables go to 'r' state (even though they did not in the older subscription) and some tables remain in 's' state. If I continue doing this process for 5-6 times I am able to cover all of the tables. Since these tables get stuck at 's' state, the replication origins are not deleted automatically when I delete the table from the publication and I had to manually delete the replication origin. The logs contain \"table synchronization worker for <table_name> has finished even for table in 's' state. Can somebody please help with this issue as I do not want to do manual intervention of creating a new publication and subscription for tables that get stuck at 's'.Configurationmax_replication_slots : 75 in publisher and subscriber\nmax_worker_processes: 60 (in subscriber)\nmax_logical_replication_workers: 55 (in subscriber)Thanks, Padmavathi", "msg_date": "Fri, 5 May 2023 11:29:01 +0530", "msg_from": "Padmavathi G <padma9.9.1999@gmail.com>", "msg_from_op": true, "msg_subject": "Tables getting stuck at 's' state during logical replication" }, { "msg_contents": "On Fri, May 5, 2023 at 3:04 PM Padmavathi G <padma9.9.1999@gmail.com> wrote:\n>\n> Hello,\n> I am trying out logical replication for upgrading postgres instance from version 11 to 15.x. In the process, I noticed that some tables get stuck in the 's' state during logical replication and they do not move to the 'r' state. I tried to drop the subscription and create a new subscriber, but the same thing repeated. Also, each time different tables get stuck, it is not like the same tables get stuck every time.\n>\n\nThis is strange. 
BTW, we don't save slots after the upgrade, so the\nsubscriptions in the upgraded node won't be valid. We have some\ndiscussion on this topic in threads [1][2]. So, I think after the\nupgrade one needs to anyway re-create the subscriptions. Can you share\nyour exact steps for the upgrade and what is the state before the\nupgrade? Is it possible to share some minimal test case to show the\nexact problem you are facing?\n\n[1] - https://www.postgresql.org/message-id/20230217075433.u5mjly4d5cr4hcfe%40jrouhaud\n[2] - https://www.postgresql.org/message-id/TYAPR01MB58664C81887B3AF2EB6B16E3F5939%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 5 May 2023 16:04:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tables getting stuck at 's' state during logical replication" }, { "msg_contents": "Some background on the setup on which I am trying to carry out the upgrade:\n\nWe have a pod in a kubernetes cluster which contains the postgres 11 image.\nWe are following the logical replication process for upgrade\n\nSteps followed for logical replication:\n\n1. Created a new pod in the same kubernetes cluster with the latest\npostgres 15 image\n2. Created a publication (say publication 1) in the old pod including all\ntables in a database\n3. Created a subscription (say subscription 1) in the new pod for the above\nmentioned publication\n4. When monitoring the subscription via pg_subscription_rel in the\nsubscriber, I noticed that out of 45 tables 20 were in the 'r' state and 25\nwere in 's' state and they remained in the same state for almost 2 days,\nthere was no improvement in the state. But the logs showed that the tables\nwhich had 's' state also had \"synchronization workers for <table_name>\nfinished\".\n5. 
Then I removed the tables which got stuck in the 's' state from\npublication 1 and created a new publication (publication 2) with only these\ntables which got stuck and created a new subscription (subscription 2) for\nthis publication in the subscriber.\n6. Now on monitoring subscription 2 via pg_subscription_rel I noticed that\nout of 25, now 12 were in 'r' state and 13 again got stuck in 's' state.\nRepeated this process of dropping tables which got stuck from publication\nand created a new publisher and subscriber and finally I was able to bring\nall tables to sync in this way. But still the tables were present in\nreplication origin.\n7. On executing pg_replication_origins command, I saw that every\nsubscription had one origin and every table which got stuck in each\npublication had one origin with roname pg_<subid>_<relid>. Eventhough they\nwere stuck, these replication origins were not removed.\n\n\nOn Fri, May 5, 2023 at 4:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, May 5, 2023 at 3:04 PM Padmavathi G <padma9.9.1999@gmail.com>\n> wrote:\n> >\n> > Hello,\n> > I am trying out logical replication for upgrading postgres instance from\n> version 11 to 15.x. In the process, I noticed that some tables get stuck in\n> the 's' state during logical replication and they do not move to the 'r'\n> state. I tried to drop the subscription and create a new subscriber, but\n> the same thing repeated. Also, each time different tables get stuck, it is\n> not like the same tables get stuck every time.\n> >\n>\n> This is strange. BTW, we don't save slots after the upgrade, so the\n> subscriptions in the upgraded node won't be valid. We have some\n> discussion on this topic in threads [1][2]. So, I think after the\n> upgrade one needs to anyway re-create the subscriptions. Can you share\n> your exact steps for the upgrade and what is the state before the\n> upgrade? 
Is it possible to share some minimal test case to show the\n> exact problem you are facing?\n>\n> [1] -\n> https://www.postgresql.org/message-id/20230217075433.u5mjly4d5cr4hcfe%40jrouhaud\n> [2] -\n> https://www.postgresql.org/message-id/TYAPR01MB58664C81887B3AF2EB6B16E3F5939%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n>\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nSome background on the setup on which I am trying to carry out the upgrade:We have a pod in a kubernetes cluster which contains the postgres 11 image. We are following the logical replication process for upgradeSteps followed for logical replication:1. Created a new pod in the same kubernetes cluster with the latest postgres 15 image2. Created a publication  (say publication 1) in the old pod including all tables in a database3. Created a subscription (say subscription 1) in the new pod for the above mentioned publication4. When monitoring the subscription via pg_subscription_rel in the subscriber, I noticed that out of 45 tables 20 were in the 'r' state and 25 were in 's' state and they remained in the same state for almost 2 days, there was no improvement in the state. But the logs showed that the tables which had 's' state also had \"synchronization workers for <table_name> finished\".5. Then I removed the tables which got stuck in the 's' state from publication 1 and created a new publication (publication 2) with only these tables which got stuck and created a new subscription (subscription 2) for this publication in the subscriber.6. Now on monitoring subscription 2 via pg_subscription_rel I noticed that out of 25, now 12 were in 'r' state and 13 again got stuck in 's' state. Repeated this process of dropping tables which got stuck from publication and created a new publisher and subscriber and finally I was able to bring all tables to sync in this way. But still the tables were present in replication origin.7. 
On executing pg_replication_origins command, I saw that every subscription had one origin and every table which got stuck in each publication had one origin with roname pg_<subid>_<relid>. Eventhough they were stuck, these replication origins were not removed.On Fri, May 5, 2023 at 4:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:On Fri, May 5, 2023 at 3:04 PM Padmavathi G <padma9.9.1999@gmail.com> wrote:\n>\n> Hello,\n> I am trying out logical replication for upgrading postgres instance from version 11 to 15.x. In the process, I noticed that some tables get stuck in the 's' state during logical replication and they do not move to the 'r' state. I tried to drop the subscription and create a new subscriber, but the same thing repeated. Also, each time different tables get stuck, it is not like the same tables get stuck every time.\n>\n\nThis is strange. BTW, we don't save slots after the upgrade, so the\nsubscriptions in the upgraded node won't be valid. We have some\ndiscussion on this topic in threads [1][2]. So, I think after the\nupgrade one needs to anyway re-create the subscriptions. Can you share\nyour exact steps for the upgrade and what is the state before the\nupgrade? 
Is it possible to share some minimal test case to show the\nexact problem you are facing?\n\n[1] - https://www.postgresql.org/message-id/20230217075433.u5mjly4d5cr4hcfe%40jrouhaud\n[2] - https://www.postgresql.org/message-id/TYAPR01MB58664C81887B3AF2EB6B16E3F5939%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 5 May 2023 19:26:56 +0530", "msg_from": "Padmavathi G <padma9.9.1999@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Tables getting stuck at 's' state during logical replication" }, { "msg_contents": "On Fri, May 5, 2023 at 7:27 PM Padmavathi G <padma9.9.1999@gmail.com> wrote:\n>\n> Some background on the setup on which I am trying to carry out the upgrade:\n>\n> We have a pod in a kubernetes cluster which contains the postgres 11 image. We are following the logical replication process for upgrade\n>\n> Steps followed for logical replication:\n>\n> 1. Created a new pod in the same kubernetes cluster with the latest postgres 15 image\n> 2. Created a publication (say publication 1) in the old pod including all tables in a database\n> 3. Created a subscription (say subscription 1) in the new pod for the above mentioned publication\n> 4. When monitoring the subscription via pg_subscription_rel in the subscriber, I noticed that out of 45 tables 20 were in the 'r' state and 25 were in 's' state and they remained in the same state for almost 2 days, there was no improvement in the state. But the logs showed that the tables which had 's' state also had \"synchronization workers for <table_name> finished\".\n>\n\nI think the problem happened in this step where some of the tables\nremained in 's' state. It is not clear why this could happen because\napply worker should eventually update these relations state to 'r'. 
If\nthis is reproducible, we can try two things to further investigate the\nissue: (a) Disable and Enable the subscription once and see if that\nhelps; (b) try increasing the LOG level to DEBUG2 and see if we get\nany useful LOG message.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 9 May 2023 16:37:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tables getting stuck at 's' state during logical replication" }, { "msg_contents": "On Tue, May 9, 2023 at 4:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 5, 2023 at 7:27 PM Padmavathi G <padma9.9.1999@gmail.com> wrote:\n> >\n> > Some background on the setup on which I am trying to carry out the upgrade:\n> >\n> > We have a pod in a kubernetes cluster which contains the postgres 11 image. We are following the logical replication process for upgrade\n> >\n> > Steps followed for logical replication:\n> >\n> > 1. Created a new pod in the same kubernetes cluster with the latest postgres 15 image\n> > 2. Created a publication (say publication 1) in the old pod including all tables in a database\n> > 3. Created a subscription (say subscription 1) in the new pod for the above mentioned publication\n> > 4. When monitoring the subscription via pg_subscription_rel in the subscriber, I noticed that out of 45 tables 20 were in the 'r' state and 25 were in 's' state and they remained in the same state for almost 2 days, there was no improvement in the state. But the logs showed that the tables which had 's' state also had \"synchronization workers for <table_name> finished\".\n> >\n>\n> I think the problem happened in this step where some of the tables\n> remained in 's' state. It is not clear why this could happen because\n> apply worker should eventually update these relations state to 'r'. 
If\n> this is reproducible, we can try two things to further investigate the\n> issue: (a) Disable and Enable the subscription once and see if that\n> helps; (b) try increasing the LOG level to DEBUG2 and see if we get\n> any useful LOG message.\n\nIt looks like the respective apply workers for the tables whose state\nwas 's' are not able to get to UpdateSubscriptionRelState with\nSUBREL_STATE_READY. In addition to what Amit suggested above, is it\npossible for you to reproduce this problem on upstream (reproducing on\nHEAD cool otherwise PG 15 enough) code? If yes, you can add custom\ndebug messages to see what's happening.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 May 2023 19:03:59 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tables getting stuck at 's' state during logical replication" } ]
[ { "msg_contents": "Hi pg-hackers,\n\n\nWhen I wrote an extension to implement a new storage by table access method. I found some issues\nthat the existing code has strong assumptions for heap tables now. Here are 3 issues that I currently have:\n\n\n1. Index access method has a callback to handle reloptions, but table access method hasn't. We can't\nadd storage-specific parameters by `WITH` clause when creating a table.\n2. Existing code strongly assumes that the data file of a table structures by a serial physical files named\nin a hard coded rule: <relfilenode>[.<segno>]. It may only fit for heap like tables. A new storage may have its\nowner structure on how the data files are organized. The problem happens when dropping a table.\n\n3. The existing code also assumes that the data file consists of a series of fix-sized block. It may not\nbe desired by other storage. Is there any suggestions on this situation?\n\n\nThe rest of this mail is to talk about the first issue. It looks reasonable to add a similar callback in\nstruct TableAmRoutine, and parse reloptions by the callback. This patch is in the attachment file.\n\n\nAnother thing about reloption is that the current API passes a parameter `validate` to tell the parse\nfunctioin to check and whether to raise an error. It doesn't have enough context when these reloptioins\nare used:\n1. CREATE TABLE ... WITH(...)\n2. ALTER TABLE ... SET ...\n3. ALTER TABLE ... 
RESET ...\nThe reason why the context matters is that some reloptions are disallowed to change after creating\nthe table, while some reloptions are allowed.\n\n\n\nI wonder if this change makes sense for you.\n\n\nThe attached patch only supports callback for TAM-specific implementation, not include the change\nabout usage context.\n\n\nRegards,\nHao Wu", "msg_date": "Fri, 5 May 2023 16:44:39 +0800 (GMT+08:00)", "msg_from": "=?UTF-8?B?5ZC05piK?= <wuhao@hashdata.cn>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?RmVhdHVyZTogQWRkIHJlbG9wdGlvbiBzdXBwb3J0IGZvciB0YWJsZSBhY2Nlc3MgbWV0aG9k?=" }, { "msg_contents": "Hi,\n\nOn 2023-05-05 16:44:39 +0800, 吴昊 wrote:\n> When I wrote an extension to implement a new storage by table access method. I found some issues\n> that the existing code has strong assumptions for heap tables now. Here are 3 issues that I currently have:\n> \n> \n> 1. Index access method has a callback to handle reloptions, but table access method hasn't. We can't\n> add storage-specific parameters by `WITH` clause when creating a table.\n\nMakes sense to add that.\n\n\n> 2. Existing code strongly assumes that the data file of a table structures by a serial physical files named\n> in a hard coded rule: <relfilenode>[.<segno>]. It may only fit for heap like tables. A new storage may have its\n> owner structure on how the data files are organized. The problem happens when dropping a table.\n\nI agree that it's not great, but I don't think it's particularly easy to fix\n(because things like transactional DDL require fairly tight integration). Most\nof the time it should be possible can also work around the limitations though.\n\n\n> 3. The existing code also assumes that the data file consists of a series of fix-sized block. It may not\n> be desired by other storage. Is there any suggestions on this situation?\n\nThat's a requirement of using the buffer manager, but if you don't want to\nrely on that, you can use a different pattern. 
There's some limitations\n(format of TIDs, most prominently), but you should be able to deal with that.\n\nI don't think it would make sense to support other block sizes in the buffer\nmanager.\n\n\n> The rest of this mail is to talk about the first issue. It looks reasonable to add a similar callback in\n> struct TableAmRoutine, and parse reloptions by the callback. This patch is in the attachment file.\n\nWhy did you add relkind to the callbacks? The callbacks are specific to a\ncertain relkind, so I don't think that makes sense.\n\nI don't think we really need GetTableAmRoutineByAmId() that raises nice\nerrors etc - as the AM has already been converted to an oid, we shouldn't need\nto recheck?\n\n\n\n> +bytea *\n> +table_reloptions_am(Oid accessMethodId, Datum reloptions, char relkind, bool validate)\n> {\n> ...\n> \n> +\t\t/* built-in table access method put here to fetch TAM fast */\n> +\t\tcase HEAP_TABLE_AM_OID:\n> +\t\t\ttam = GetHeapamTableAmRoutine();\n> +\t\t\tbreak;\n> \t\tdefault:\n> -\t\t\t/* other relkinds are not supported */\n> -\t\t\treturn NULL;\n> +\t\t\ttam = GetTableAmRoutineByAmId(accessMethodId);\n> +\t\t\tbreak;\n\nWhy do we need this fastpath? This shouldn't be something called at a\nmeaningful frequency?\n\n\n> }\n> +\treturn table_reloptions(tam->amoptions, reloptions, relkind, validate);\n> }\n\nI'd just pass the tam, instead of an individual function.\n\n> @@ -866,6 +866,11 @@ typedef struct TableAmRoutine\n> \t\t\t\t\t\t\t\t\t\t struct SampleScanState *scanstate,\n> \t\t\t\t\t\t\t\t\t\t TupleTableSlot *slot);\n> \n> +\t/*\n> +\t * This callback is used to parse reloptions for relation/matview/toast.\n> +\t */\n> +\tbytea *(*amoptions)(Datum reloptions, char relkind, bool validate);\n> +\n> } TableAmRoutine;\n\nDid you mean table instead of relation in the comment?\n\n\n\n> Another thing about reloption is that the current API passes a parameter `validate` to tell the parse\n> functioin to check and whether to raise an error. 
It doesn't have enough context when these reloptioins\n> are used:\n> 1. CREATE TABLE ... WITH(...)\n> 2. ALTER TABLE ... SET ...\n> 3. ALTER TABLE ... RESET ...\n> The reason why the context matters is that some reloptions are disallowed to change after creating\n> the table, while some reloptions are allowed.\n\nWhat kind of reloption are you thinking of here?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 May 2023 13:59:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Feature: Add reloption support for table access method" }, { "msg_contents": "> > The rest of this mail is to talk about the first issue. It looks reasonable to add a similar callback in\n> > struct TableAmRoutine, and parse reloptions by the callback. This patch is in the attachment file.\n>\n> Why did you add relkind to the callbacks? The callbacks are specific to a\n> certain relkind, so I don't think that makes sense.\nAn implementation of table access method may be used for table/toast/matview, different relkinds\nmay define different set of reloptions. If they have the same reloption set, just ignore the relkind\nparameter.\n> I don't think we really need GetTableAmRoutineByAmId() that raises nice\n> errors etc - as the AM has already been converted to an oid, we shouldn't need\n> to recheck?\n\nWhen defining a relation, the function knows only the access method name, not the AM routine struct.\nThe AmRoutine needs to be looked-up by its access method name or oid. 
The existing function\ncalculates AmRoutine by the handler oid, not by am oid.\n\n> > +bytea *\n> > +table_reloptions_am(Oid accessMethodId, Datum reloptions, char relkind, bool validate)\n> > {\n> > ...\n> > \n> > +\t\t/* built-in table access method put here to fetch TAM fast */\n> > +\t\tcase HEAP_TABLE_AM_OID:\n> > +\t\t\ttam = GetHeapamTableAmRoutine();\n> > +\t\t\tbreak;\n> > \t\tdefault:\n> > -\t\t\t/* other relkinds are not supported */\n> > -\t\t\treturn NULL;\n> > +\t\t\ttam = GetTableAmRoutineByAmId(accessMethodId);\n> > +\t\t\tbreak;\n\n> Why do we need this fastpath? This shouldn't be something called at a\n> meaningful frequency?\nOK, it make sense.\n\n> > }\n> > +\treturn table_reloptions(tam->amoptions, reloptions, relkind, validate);\n> > }\n>\n> I'd just pass the tam, instead of an individual function.\nIt's aligned to index_reloptions, and the function extractRelOptions also uses\nan individual function other a pointer to AmRoutine struct.\n\n\n\n> Did you mean table instead of relation in the comment?\n\n\nYes, the comment doesn't update.\n\n\n \n> > Another thing about reloption is that the current API passes a parameter `validate` to tell the parse\n> > functioin to check and whether to raise an error. It doesn't have enough context when these reloptioins\n> > are used:\n> > 1. CREATE TABLE ... WITH(...)\n> > 2. ALTER TABLE ... SET ...\n> > 3. ALTER TABLE ... RESET ...\n> > The reason why the context matters is that some reloptions are disallowed to change after creating\n> > the table, while some reloptions are allowed.\n>\n> What kind of reloption are you thinking of here?\n\nDRAFT: The amoptions in TableAmRoutine may change to\n```\nbytea *(*amoptions)(Datum reloptions, char relkind, ReloptionContext context);\nenum ReloptionContext {\nRELOPTION_INIT, // CREATE TABLE ... WITH(...)\nRELOPTION_SET, // ALTER TABLE ... SET ...\nRELOPTION_RESET, // ALTER TABLE ... 
RESET ...\nRELOPTION_EXTRACT, // build reloptions from pg_class.reloptions\n}\n```\nThe callback always validates the reloptions if the context is not RELOPTION_EXTRACT.\nIf the TAM disallows to update some reloptions, it may throw an error when the context is\none of (RELOPTION_SET, RELOPTION_RESET).\nThe similar callback `amoptions` in IndexRoutine also applies this change.\nBTW, it's hard to find an appropriate header file to define the ReloptionContext, which is\nused by index/table AM.\n\nRegards,\nHao Wu", "msg_date": "Wed, 10 May 2023 13:24:38 +0800 (GMT+08:00)", "msg_from": "吴昊 <wuhao@hashdata.cn>", "msg_from_op": true, "msg_subject": "Re:Re: Feature: Add reloption support for table access method" }, { "msg_contents": "I'm definitely in favor of this general idea of supporting custom rel\noptions, we could use that for Citus its columnar TableAM. There have\nbeen at least two other discussions on this topic:\n1. https://www.postgresql.org/message-id/flat/CAFF0-CG4KZHdtYHMsonWiXNzj16gWZpduXAn8yF7pDDub%2BGQMg%40mail.gmail.com\n2. https://www.postgresql.org/message-id/flat/429fb58fa3218221bb17c7bf9e70e1aa6cfc6b5d.camel%40j-davis.com\n\nI haven't looked at the patch in detail, but it's probably good to\ncheck what part of those previous discussions apply to it. And if\nthere's any ideas from those previous attempts that this patch could\nborrow.\n\nOn Tue, 9 May 2023 at 22:59, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-05-05 16:44:39 +0800, 吴昊 wrote:\n> > When I wrote an extension to implement a new storage by table access method. I found some issues\n> > that the existing code has strong assumptions for heap tables now. Here are 3 issues that I currently have:\n> >\n> >\n> > 1. Index access method has a callback to handle reloptions, but table access method hasn't. We can't\n> > add storage-specific parameters by `WITH` clause when creating a table.\n>\n> Makes sense to add that.\n>\n>\n> > 2. 
Existing code strongly assumes that the data file of a table structures by a serial physical files named\n> > in a hard coded rule: <relfilenode>[.<segno>]. It may only fit for heap like tables. A new storage may have its\n> > owner structure on how the data files are organized. The problem happens when dropping a table.\n>\n> I agree that it's not great, but I don't think it's particularly easy to fix\n> (because things like transactional DDL require fairly tight integration). Most\n> of the time it should be possible can also work around the limitations though.\n>\n>\n> > 3. The existing code also assumes that the data file consists of a series of fix-sized block. It may not\n> > be desired by other storage. Is there any suggestions on this situation?\n>\n> That's a requirement of using the buffer manager, but if you don't want to\n> rely on that, you can use a different pattern. There's some limitations\n> (format of TIDs, most prominently), but you should be able to deal with that.\n>\n> I don't think it would make sense to support other block sizes in the buffer\n> manager.\n>\n>\n> > The rest of this mail is to talk about the first issue. It looks reasonable to add a similar callback in\n> > struct TableAmRoutine, and parse reloptions by the callback. This patch is in the attachment file.\n>\n> Why did you add relkind to the callbacks? 
The callbacks are specific to a\n> certain relkind, so I don't think that makes sense.\n>\n> I don't think we really need GetTableAmRoutineByAmId() that raises nice\n> errors etc - as the AM has already been converted to an oid, we shouldn't need\n> to recheck?\n>\n>\n>\n> > +bytea *\n> > +table_reloptions_am(Oid accessMethodId, Datum reloptions, char relkind, bool validate)\n> > {\n> > ...\n> >\n> > + /* built-in table access method put here to fetch TAM fast */\n> > + case HEAP_TABLE_AM_OID:\n> > + tam = GetHeapamTableAmRoutine();\n> > + break;\n> > default:\n> > - /* other relkinds are not supported */\n> > - return NULL;\n> > + tam = GetTableAmRoutineByAmId(accessMethodId);\n> > + break;\n>\n> Why do we need this fastpath? This shouldn't be something called at a\n> meaningful frequency?\n>\n>\n> > }\n> > + return table_reloptions(tam->amoptions, reloptions, relkind, validate);\n> > }\n>\n> I'd just pass the tam, instead of an individual function.\n>\n> > @@ -866,6 +866,11 @@ typedef struct TableAmRoutine\n> > struct SampleScanState *scanstate,\n> > TupleTableSlot *slot);\n> >\n> > + /*\n> > + * This callback is used to parse reloptions for relation/matview/toast.\n> > + */\n> > + bytea *(*amoptions)(Datum reloptions, char relkind, bool validate);\n> > +\n> > } TableAmRoutine;\n>\n> Did you mean table instead of relation in the comment?\n>\n>\n>\n> > Another thing about reloption is that the current API passes a parameter `validate` to tell the parse\n> > functioin to check and whether to raise an error. It doesn't have enough context when these reloptioins\n> > are used:\n> > 1. CREATE TABLE ... WITH(...)\n> > 2. ALTER TABLE ... SET ...\n> > 3. ALTER TABLE ... 
RESET ...\n> > The reason why the context matters is that some reloptions are disallowed to change after creating\n> > the table, while some reloptions are allowed.\n>\n> What kind of reloption are you thinking of here?\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n\n\n", "msg_date": "Wed, 10 May 2023 11:47:19 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Feature: Add reloption support for table access method" } ]
[ { "msg_contents": "Hi,\n\nWe have recently used the PostgreSQL documentation when setting up our \nlogical replication. We noticed there was a step missing in the \ndocumentation on how to drop a logical replication subscription with a \nreplication slot attached.\n\nWe clarify the documentation to include prerequisites for running the \nDROP SUBSCRIPTION command. Please see attached patch.\n\nBest regards,\nRobert Sjöblom,\nOscar Carlberg\n-- \nInnehållet i detta e-postmeddelande är konfidentiellt och avsett endast för \nadressaten.Varje spridning, kopiering eller utnyttjande av innehållet är \nförbjuden utan tillåtelse av avsändaren. Om detta meddelande av misstag \ngått till fel adressat vänligen radera det ursprungliga meddelandet och \nunderrätta avsändaren via e-post", "msg_date": "Fri, 5 May 2023 15:17:31 +0200", "msg_from": "=?UTF-8?Q?Robert_Sj=c3=b6blom?= <robert.sjoblom@fortnox.se>", "msg_from_op": true, "msg_subject": "[DOC] Update ALTER SUBSCRIPTION documentation" }, { "msg_contents": "On Fri, May 5, 2023 at 6:47 PM Robert Sjöblom <robert.sjoblom@fortnox.se> wrote:\n>\n> We have recently used the PostgreSQL documentation when setting up our\n> logical replication. We noticed there was a step missing in the\n> documentation on how to drop a logical replication subscription with a\n> replication slot attached.\n>\n> We clarify the documentation to include prerequisites for running the\n> DROP SUBSCRIPTION command. Please see attached patch.\n>\n\nShouldn't we also change the following errhint in the code as well?\nReportSlotConnectionError()\n{\n...\nereport(ERROR,\n(errcode(ERRCODE_CONNECTION_FAILURE),\nerrmsg(\"could not connect to publisher when attempting to drop\nreplication slot \\\"%s\\\": %s\",\nslotname, err),\n/* translator: %s is an SQL ALTER command */\nerrhint(\"Use %s to disassociate the subscription from the slot.\",\n\"ALTER SUBSCRIPTION ... 
SET (slot_name = NONE)\")));\n...\n}\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 8 May 2023 08:37:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation" }, { "msg_contents": "On Fri, May 5, 2023 at 11:17 PM Robert Sjöblom\n<robert.sjoblom@fortnox.se> wrote:\n>\n>\n> Hi,\n>\n> We have recently used the PostgreSQL documentation when setting up our\n> logical replication. We noticed there was a step missing in the\n> documentation on how to drop a logical replication subscription with a\n> replication slot attached.\n>\n> We clarify the documentation to include prerequisites for running the\n> DROP SUBSCRIPTION command. Please see attached patch.\n\nRight, there is a \"missing step\" in the documentation, but OTOH that\nstep is going to be obvious from the error you get when attempting to\nset the slot_name to NONE:\n\ne.g.\ntest_sub=# ALTER SUBSCRIPTION sub1 SET (slot_name= NONE);\nERROR: cannot set slot_name = NONE for enabled subscription\n\n~\n\nIMO this scenario is sort of a trade-off between (a) wanting to give\nevery little step explicitly versus (b) trying to keep the\ndocumentation free of clutter.\n\nI think a comprise here is just to mention the need for disabling the\nsubscription but without spelling out the details of the ALTER ...\nDISABLE command.\n\nFor example,\n\nBEFORE\nTo proceed in this situation, disassociate the subscription from the\nreplication slot by executing ALTER SUBSCRIPTION ... SET (slot_name =\nNONE).\n\nSUGGESTION\nTo proceed in this situation, first DISABLE the subscription, and then\ndisassociate it from the replication slot by executing ALTER\nSUBSCRIPTION ... 
SET (slot_name = NONE).\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 8 May 2023 13:38:53 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation" }, { "msg_contents": "On Mon, May 8, 2023 at 12:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 5, 2023 at 6:47 PM Robert Sjöblom <robert.sjoblom@fortnox.se> wrote:\n> >\n> > We have recently used the PostgreSQL documentation when setting up our\n> > logical replication. We noticed there was a step missing in the\n> > documentation on how to drop a logical replication subscription with a\n> > replication slot attached.\n> >\n> > We clarify the documentation to include prerequisites for running the\n> > DROP SUBSCRIPTION command. Please see attached patch.\n> >\n>\n> Shouldn't we also change the following errhint in the code as well?\n> ReportSlotConnectionError()\n> {\n> ...\n> ereport(ERROR,\n> (errcode(ERRCODE_CONNECTION_FAILURE),\n> errmsg(\"could not connect to publisher when attempting to drop\n> replication slot \\\"%s\\\": %s\",\n> slotname, err),\n> /* translator: %s is an SQL ALTER command */\n> errhint(\"Use %s to disassociate the subscription from the slot.\",\n> \"ALTER SUBSCRIPTION ... SET (slot_name = NONE)\")));\n> ...\n> }\n\nYeah, if the subscription is enabled, it might be helpful for users if\nthe error hint message says something like:\n\nUse ALTER SUBSCRIPTION ... SET (slot_name = NONE) to disassociate the\nsubscription from the slot after disabling the subscription.\n\nApart from the documentation change, given that setting slot_name =\nNONE always requires for the subscription to be disabled beforehand,\ndoes it make sense to change ALTER SUBSCRIPTION SET so that we disable\nthe subscription when setting slot_name = NONE? 
Setting slot_name to a\nvalid slot name doesn't enable the subscription, though.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 May 2023 17:21:18 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation" }, { "msg_contents": "On Mon, May 8, 2023 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Apart from the documentation change, given that setting slot_name =\n> NONE always requires for the subscription to be disabled beforehand,\n> does it make sense to change ALTER SUBSCRIPTION SET so that we disable\n> the subscription when setting slot_name = NONE? Setting slot_name to a\n> valid slot name doesn't enable the subscription, though.\n>\n\nI think this is worth considering. Offhand, I don't see any problem\nwith this idea but users may not like the automatic disabling of\nsubscriptions along with setting slot_name=NONE. It would be better to\ndiscuss this in a separate thread to seek the opinion of others.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 9 May 2023 12:10:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation" }, { "msg_contents": "On Tue, May 9, 2023 at 3:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 8, 2023 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Apart from the documentation change, given that setting slot_name =\n> > NONE always requires for the subscription to be disabled beforehand,\n> > does it make sense to change ALTER SUBSCRIPTION SET so that we disable\n> > the subscription when setting slot_name = NONE? Setting slot_name to a\n> > valid slot name doesn't enable the subscription, though.\n> >\n>\n> I think this is worth considering. 
Offhand, I don't see any problem\n> with this idea but users may not like the automatic disabling of\n> subscriptions along with setting slot_name=NONE. It would be better to\n> discuss this in a separate thread to seek the opinion of others.\n>\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 9 May 2023 17:14:35 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation" }, { "msg_contents": "On 2023-05-05 15:17, Robert Sjöblom wrote:\n> \n> Hi,\n> \n> We have recently used the PostgreSQL documentation when setting up our \n> logical replication. We noticed there was a step missing in the \n> documentation on how to drop a logical replication subscription with a \n> replication slot attached.\n\nFollowing discussions, please see revised documentation patch.\n\nBest regards,\nRobert Sjöblom", "msg_date": "Mon, 15 May 2023 15:36:15 +0200", "msg_from": "Robert Sjöblom <robert.sjoblom@fortnox.se>", "msg_from_op": true, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v2" }, { "msg_contents": "On Mon, May 15, 2023 at 11:36 PM Robert Sjöblom\n<robert.sjoblom@fortnox.se> wrote:\n>\n>\n>\n> On 2023-05-05 15:17, Robert Sjöblom wrote:\n> >\n> > Hi,\n> >\n> > We have recently used the PostgreSQL documentation when setting up our\n> > logical replication. 
We noticed there was a step missing in the\n> > documentation on how to drop a logical replication subscription with a\n> > replication slot attached.\n>\n> Following discussions, please see revised documentation patch.\n>\n\nLGTM.\n\nBTW, in the previous thread, there was also a suggestion from Amit [1]\nto change the errhint in a similar way. There was no reply to Amit's\nidea, so it's not clear whether it's an accidental omission from your\nv2 patch or not.\n\n------\n[1] https://www.postgresql.org/message-id/CAA4eK1J11phiaoCOmsjNqPZ9BOWyLXYrfgrm5vU2uCFPF2kN1Q%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 16 May 2023 09:43:37 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v2" }, { "msg_contents": "Den tis 16 maj 2023 kl 01:44 skrev Peter Smith <smithpb2250@gmail.com>:\n>\n> On Mon, May 15, 2023 at 11:36 PM Robert Sjöblom\n> <robert.sjoblom@fortnox.se> wrote:\n> >\n> >\n> >\n> > On 2023-05-05 15:17, Robert Sjöblom wrote:\n> > >\n> > > Hi,\n> > >\n> > > We have recently used the PostgreSQL documentation when setting up our\n> > > logical replication. We noticed there was a step missing in the\n> > > documentation on how to drop a logical replication subscription with a\n> > > replication slot attached.\n> >\n> > Following discussions, please see revised documentation patch.\n> >\n>\n> LGTM.\n>\n> BTW, in the previous thread, there was also a suggestion from Amit [1]\n> to change the errhint in a similar way. 
There was no reply to Amit's\n> idea, so it's not clear whether it's an accidental omission from your\n> v2 patch or not.\n>\n> ------\n> [1] https://www.postgresql.org/message-id/CAA4eK1J11phiaoCOmsjNqPZ9BOWyLXYrfgrm5vU2uCFPF2kN1Q%40mail.gmail.com\n>\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n\nAccidental omission by way of mail client, I suppose -- some messages\ngot flagged as spam and moved to another folder. I went ahead with\nMasahiko Sawada's suggestion for the error message; see revised patch.\n\nBest regards,\nRobert Sjöblom\n\n-- \nInnehållet i detta e-postmeddelande är konfidentiellt och avsett endast för \nadressaten.Varje spridning, kopiering eller utnyttjande av innehållet är \nförbjuden utan tillåtelse av avsändaren. Om detta meddelande av misstag \ngått till fel adressat vänligen radera det ursprungliga meddelandet och \nunderrätta avsändaren via e-post", "msg_date": "Tue, 16 May 2023 15:01:52 +0200", "msg_from": "=?UTF-8?Q?Robert_Sj=C3=B6blom?= <robert.sjoblom@fortnox.se>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" }, { "msg_contents": "+ errhint(\"Use %s to disassociate the subscription from the slot after\ndisabling the subscription.\",\n\nIMO it looked strange having the word \"subscription\" 2x in the same sentence.\n\nMaybe you can reword the errhint like:\n\nBEFORE\n\"Use %s to disassociate the subscription from the slot after disabling\nthe subscription.\"\n\nSUGGESTION#1\n\"Disable the subscription, then use %s to disassociate it from the slot.\"\n\nSUGGESTION#2\n\"After disabling the subscription use %s to disassociate it from the slot.\"\n\n~~~\n\nBTW, it is a bit difficult to follow this thread because the subject\nkeeps changing.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 17 May 2023 11:18:26 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION 
documentation v3" }, { "msg_contents": "Den ons 17 maj 2023 kl 03:18 skrev Peter Smith <smithpb2250@gmail.com>:\n>\n> + errhint(\"Use %s to disassociate the subscription from the slot after\n> disabling the subscription.\",\n>\n> IMO it looked strange having the word \"subscription\" 2x in the same sentence.\n>\n> Maybe you can reword the errhint like:\n>\n> BEFORE\n> \"Use %s to disassociate the subscription from the slot after disabling\n> the subscription.\"\n>\n> SUGGESTION#1\n> \"Disable the subscription, then use %s to disassociate it from the slot.\"\n>\n> SUGGESTION#2\n> \"After disabling the subscription use %s to disassociate it from the slot.\"\n>\n> ~~~\n>\n> BTW, it is a bit difficult to follow this thread because the subject\n> keeps changing.\n>\n> ------\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n\nGood catch, I definitely agree. I'm sorry about changing the subject\nline, I'm unaccustomed to mailing lists -- I'll leave it as it is now.\n\nAttached is the revised version.\n\nBest regards,\nRobert Sjöblom\n\n-- \nInnehållet i detta e-postmeddelande är konfidentiellt och avsett endast för \nadressaten.Varje spridning, kopiering eller utnyttjande av innehållet är \nförbjuden utan tillåtelse av avsändaren. 
Om detta meddelande av misstag \ngått till fel adressat vänligen radera det ursprungliga meddelandet och \nunderrätta avsändaren via e-post", "msg_date": "Wed, 17 May 2023 06:53:10 +0200", "msg_from": "=?UTF-8?Q?Robert_Sj=C3=B6blom?= <robert.sjoblom@fortnox.se>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" }, { "msg_contents": "On Wed, May 17, 2023 at 2:53 PM Robert Sjöblom\n<robert.sjoblom@fortnox.se> wrote:\n>\n> Den ons 17 maj 2023 kl 03:18 skrev Peter Smith <smithpb2250@gmail.com>:\n> >\n> > + errhint(\"Use %s to disassociate the subscription from the slot after\n> > disabling the subscription.\",\n> >\n> > IMO it looked strange having the word \"subscription\" 2x in the same sentence.\n> >\n> > Maybe you can reword the errhint like:\n> >\n> > BEFORE\n> > \"Use %s to disassociate the subscription from the slot after disabling\n> > the subscription.\"\n> >\n> > SUGGESTION#1\n> > \"Disable the subscription, then use %s to disassociate it from the slot.\"\n> >\n> > SUGGESTION#2\n> > \"After disabling the subscription use %s to disassociate it from the slot.\"\n> >\n> > ~~~\n> >\n> > BTW, it is a bit difficult to follow this thread because the subject\n> > keeps changing.\n> >\n> > ------\n> > Kind Regards,\n> > Peter Smith.\n> > Fujitsu Australia\n>\n> Good catch, I definitely agree. 
I'm sorry about changing the subject\n> line, I'm unaccustomed to mailing lists -- I'll leave it as it is now.\n>\n> Attached is the revised version.\n>\n\nv4 looks good to me.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 17 May 2023 16:27:15 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" }, { "msg_contents": "On Wed, May 17, 2023 at 11:57 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, May 17, 2023 at 2:53 PM Robert Sjöblom\n> <robert.sjoblom@fortnox.se> wrote:\n> >\n> > Attached is the revised version.\n> >\n>\n> v4 looks good to me.\n>\n\nThe latest version looks good to me as well. I think we should\nbackpatch this change as this is a user-facing message change in docs\nand code. What do you guys think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 14 Jun 2023 08:39:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" }, { "msg_contents": "On Wed, Jun 14, 2023 at 1:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 17, 2023 at 11:57 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Wed, May 17, 2023 at 2:53 PM Robert Sjöblom\n> > <robert.sjoblom@fortnox.se> wrote:\n> > >\n> > > Attached is the revised version.\n> > >\n> >\n> > v4 looks good to me.\n> >\n>\n> The latest version looks good to me as well. I think we should\n> backpatch this change as this is a user-facing message change in docs\n> and code. What do you guys think?\n>\n\nI do not know the exact criteria for deciding to back-patch, but I am\nnot sure back-patching is so important for this one.\n\nIt is not a critical bug-fix, and despite being a user-facing change,\nthere is no functional change. 
Also, IIUC the previous docs existed\nfor 6 years without problem.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 14 Jun 2023 13:54:46 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" }, { "msg_contents": "On Wed, Jun 14, 2023 at 9:25 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Jun 14, 2023 at 1:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, May 17, 2023 at 11:57 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Wed, May 17, 2023 at 2:53 PM Robert Sjöblom\n> > > <robert.sjoblom@fortnox.se> wrote:\n> > > >\n> > > > Attached is the revised version.\n> > > >\n> > >\n> > > v4 looks good to me.\n> > >\n> >\n> > The latest version looks good to me as well. I think we should\n> > backpatch this change as this is a user-facing message change in docs\n> > and code. What do you guys think?\n> >\n>\n> I do not know the exact criteria for deciding to back-patch, but I am\n> not sure back-patching is so important for this one.\n>\n> It is not a critical bug-fix, and despite being a user-facing change,\n> there is no functional change.\n>\n\nRight neither this is a functional change nor a critical but where any\nwork will be stopped due to this but I think we do prefer to backpatch\nchanges (doc) where user-facing docs have an additional explanation.\nFor example, see [1][2]. 
OTOH, there is an argument that we should do\nthis only in v17 but I guess this is a simple improvement that will be\nhelpful for even current users, so it is better to change this in\nexisting branches as well.\n\n[1] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e126d817c7af989c47366b0e344ee83d761f334a\n[2] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f170b572d2b4cc232c5b6d391b4ecf3e368594b7\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 14 Jun 2023 09:47:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" }, { "msg_contents": "Hi:\n postgres ODBC's driver psqlodbcw.so supports Unicode. You can do this by checking the value of the SQL_ATTR_ANSI_APP attribute; if it is SQL_AA_FALSE, Unicode is supported; If the value is SQL_AA_TRUE, ANSI is supported\n\nAt 2023-06-14 11:54:46, \"Peter Smith\" <smithpb2250@gmail.com> wrote:\n>On Wed, Jun 14, 2023 at 1:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Wed, May 17, 2023 at 11:57 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>> >\n>> > On Wed, May 17, 2023 at 2:53 PM Robert Sjöblom\n>> > <robert.sjoblom@fortnox.se> wrote:\n>> > >\n>> > > Attached is the revised version.\n>> > >\n>> >\n>> > v4 looks good to me.\n>> >\n>>\n>> The latest version looks good to me as well. I think we should\n>> backpatch this change as this is a user-facing message change in docs\n>> and code. What do you guys think?\n>>\n>\n>I do not know the exact criteria for deciding to back-patch, but I am\n>not sure back-patching is so important for this one.\n>\n>It is not a critical bug-fix, and despite being a user-facing change,\n>there is no functional change. 
Also, IIUC the previous docs existed\n>for 6 years without problem.\n>\n>------\n>Kind Regards,\n>Peter Smith.\n>Fujitsu Australia\n>\n", "msg_date": "Wed, 14 Jun 2023 18:23:42 +0800 (CST)", "msg_from": "=?UTF-8?B?5YiY5bqE?= <lzwp0521@163.com>", "msg_from_op": false, "msg_subject": "Re:Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" }, { "msg_contents": "On 14.06.23 05:09, Amit Kapila wrote:\n> The latest version looks good to me as well. I think we should\n> backpatch this change as this is a user-facing message change in docs\n> and code. 
What do you guys think?\n\nIsn't that a reason *not* to backpatch it?\n\n\n", "msg_date": "Wed, 14 Jun 2023 15:22:46 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" }, { "msg_contents": "On Wed, Jun 14, 2023 at 6:52 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 14.06.23 05:09, Amit Kapila wrote:\n> > The latest version looks good to me as well. I think we should\n> > backpatch this change as this is a user-facing message change in docs\n> > and code. What do you guys think?\n>\n> Isn't that a reason *not* to backpatch it?\n>\n\nI wanted to backpatch the following change which provides somewhat\naccurate information about what a user needs to do when it faces an\nerror.\nTo proceed in\n- this situation, disassociate the subscription from the replication slot by\n- executing <literal>ALTER SUBSCRIPTION ... SET (slot_name = NONE)</literal>.\n+ this situation, first <literal>DISABLE</literal> the subscription, and then\n+ disassociate it from the replication slot by executing\n+ <literal>ALTER SUBSCRIPTION ... SET (slot_name = NONE)</literal>.\n\nNow, along with this change, there is a change in errhint as well\nwhich I am not sure about whether to backpatch or not. 
I think we have\nthe following options (a) commit both doc and code change in HEAD (b)\ncommit both doc and code change in v17 when the next version branch\nopens (c) backpatch the doc change and commit the code change in HEAD\nonly (d) backpatch the doc change and commit the code change in v17\n(e) backpatch both the doc and code change.\n\nWhat do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 15 Jun 2023 08:19:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" }, { "msg_contents": "On 15.06.23 04:49, Amit Kapila wrote:\n> I wanted to backpatch the following change which provides somewhat\n> accurate information about what a user needs to do when it faces an\n> error.\n> To proceed in\n> - this situation, disassociate the subscription from the replication slot by\n> - executing <literal>ALTER SUBSCRIPTION ... SET (slot_name = NONE)</literal>.\n> + this situation, first <literal>DISABLE</literal> the subscription, and then\n> + disassociate it from the replication slot by executing\n> + <literal>ALTER SUBSCRIPTION ... SET (slot_name = NONE)</literal>.\n> \n> Now, along with this change, there is a change in errhint as well\n> which I am not sure about whether to backpatch or not. I think we have\n> the following options (a) commit both doc and code change in HEAD (b)\n> commit both doc and code change in v17 when the next version branch\n> opens (c) backpatch the doc change and commit the code change in HEAD\n> only (d) backpatch the doc change and commit the code change in v17\n> (e) backpatch both the doc and code change.\n\nReading the thread again now, I think this is essentially a bug fix, so \nI don't mind backpatching it.\n\nI wish the errhint would show the actual command to disable the \nsubscription. 
It already shows the command to detach the replication \nslot, so it would only be consistent to also show the other command.\n\n\n", "msg_date": "Fri, 16 Jun 2023 15:45:06 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" }, { "msg_contents": "On Fri, Jun 16, 2023 at 7:15 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 15.06.23 04:49, Amit Kapila wrote:\n> >\n> > Now, along with this change, there is a change in errhint as well\n> > which I am not sure about whether to backpatch or not. I think we have\n> > the following options (a) commit both doc and code change in HEAD (b)\n> > commit both doc and code change in v17 when the next version branch\n> > opens (c) backpatch the doc change and commit the code change in HEAD\n> > only (d) backpatch the doc change and commit the code change in v17\n> > (e) backpatch both the doc and code change.\n>\n> Reading the thread again now, I think this is essentially a bug fix, so\n> I don't mind backpatching it.\n>\n> I wish the errhint would show the actual command to disable the\n> subscription. It already shows the command to detach the replication\n> slot, so it would only be consistent to also show the other command.\n>\n\nFair enough. 
I updated the errhint and slightly adjusted the docs as\nwell in the attached.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 17 Jun 2023 12:03:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" }, { "msg_contents": "FYI - I have created and tested back-patches for Amit's v5 patch,\ngoing all the way to REL_10_STABLE.\n\n(the patches needed tweaking several times due to minor code/docs\ndifferences in the earlier versions)\n\nPSA.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 20 Jun 2023 13:32:21 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" }, { "msg_contents": "On Tue, Jun 20, 2023 at 9:02 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> FYI - I have created and tested back-patches for Amit's v5 patch,\n> going all the way to REL_10_STABLE.\n>\n\nPushed. I haven't used PG10 patch as REL_10_STABLE is out of support now.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Jun 2023 11:54:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Update ALTER SUBSCRIPTION documentation v3" } ]
[ { "msg_contents": "I made this function:\n\nCREATE OR REPLACE FUNCTION test_fun()\nRETURNS void\nLANGUAGE SQL\nBEGIN ATOMIC\nMERGE INTO target\nUSING source s on s.id = target.id\nWHEN MATCHED THEN\n UPDATE SET data = s.data\nWHEN NOT MATCHED THEN\n INSERT VALUES (s.id, s.data);\nend;\n\nIt appears to work fine, but:\n\nregression=# \\sf+ test_fun()\nERROR: unrecognized query command type: 5\n\nand it also breaks pg_dump. Somebody screwed up pretty badly\nhere. Is there any hope of fixing it for Monday's releases?\n\n(I'd guess that decompiling the WHEN clause would take a nontrivial\namount of new code, so maybe fixing it on such short notice is\nimpractical. But ugh.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 May 2023 09:36:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "MERGE lacks ruleutils.c decompiling support!?" }, { "msg_contents": "On 2023-May-05, Tom Lane wrote:\n\n> I made this function:\n> \n> CREATE OR REPLACE FUNCTION test_fun()\n> RETURNS void\n> LANGUAGE SQL\n> BEGIN ATOMIC\n> MERGE INTO target\n> USING source s on s.id = target.id\n> WHEN MATCHED THEN\n> UPDATE SET data = s.data\n> WHEN NOT MATCHED THEN\n> INSERT VALUES (s.id, s.data);\n> end;\n> \n> It appears to work fine, but:\n> \n> regression=# \\sf+ test_fun()\n> ERROR: unrecognized query command type: 5\n> \n> and it also breaks pg_dump. Somebody screwed up pretty badly\n> here. Is there any hope of fixing it for Monday's releases?\n> \n> (I'd guess that decompiling the WHEN clause would take a nontrivial\n> amount of new code, so maybe fixing it on such short notice is\n> impractical. But ugh.)\n\nHmm, there is *some* code in ruleutils for MERGE, but clearly something\nis missing. 
Let me have a look ...\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 5 May 2023 18:21:30 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MERGE lacks ruleutils.c decompiling support!?" }, { "msg_contents": "On 2023-May-05, Tom Lane wrote:\n\n> (I'd guess that decompiling the WHEN clause would take a nontrivial\n> amount of new code, so maybe fixing it on such short notice is\n> impractical. But ugh.)\n\nHere's a first attempt. I mostly just copied code from the insert and\nupdate support routines. There's a couple of things missing still, but\nI'm not sure I'll get to it tonight. I only tested to the extent of\nwhat's in the new regression test.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.", "msg_date": "Fri, 5 May 2023 20:43:38 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MERGE lacks ruleutils.c decompiling support!?" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Here's a first attempt. I mostly just copied code from the insert and\n> update support routines. There's a couple of things missing still, but\n> I'm not sure I'll get to it tonight. 
I only tested to the extent of\n> what's in the new regression test.\n\nI did a bit of review and more work on this:\n\n* Added the missing OVERRIDING support\n\n* Played around with the pretty-printing indentation\n\n* Improved test coverage, and moved the test to rules.sql where\nI thought it fit more naturally.\n\nI think we could probably commit this, though I've not tried it\nin v15 yet.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 05 May 2023 17:06:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: MERGE lacks ruleutils.c decompiling support!?" }, { "msg_contents": "On Fri, May 05, 2023 at 05:06:34PM -0400, Tom Lane wrote:\n> I did a bit of review and more work on this:\n> \n> * Added the missing OVERRIDING support\n> \n> * Played around with the pretty-printing indentation\n> \n> * Improved test coverage, and moved the test to rules.sql where\n> I thought it fit more naturally.\n> \n> I think we could probably commit this, though I've not tried it\n> in v15 yet.\n\nSeems rather OK..\n\n+WHEN NOT MATCHED\n+ AND s.a > 100\n+ THEN INSERT (id, data) OVERRIDING SYSTEM VALUE\n+ VALUES (s.a, DEFAULT)\n\nAbout OVERRIDING, I can see that this is still missing coverage for\nOVERRIDING USER VALUE.\n--\nMichael", "msg_date": "Sat, 6 May 2023 11:46:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: MERGE lacks ruleutils.c decompiling support!?" 
}, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> +WHEN NOT MATCHED\n> + AND s.a > 100\n> + THEN INSERT (id, data) OVERRIDING SYSTEM VALUE\n> + VALUES (s.a, DEFAULT)\n\n> About OVERRIDING, I can see that this is still missing coverage for\n> OVERRIDING USER VALUE.\n\nYeah, I couldn't see that covering that too was worth any cycles.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 May 2023 22:49:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: MERGE lacks ruleutils.c decompiling support!?" }, { "msg_contents": "I wrote:\n> I think we could probably commit this, though I've not tried it\n> in v15 yet.\n\nPing? The hours grow short, if we're to get this into 15.3.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 May 2023 00:49:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: MERGE lacks ruleutils.c decompiling support!?" }, { "msg_contents": "I wrote:\n> Ping? The hours grow short, if we're to get this into 15.3.\n\nI went ahead and pushed v2, since we can't wait any longer if\nwe're to get reasonable buildfarm coverage before 15.3 wraps.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 May 2023 11:02:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: MERGE lacks ruleutils.c decompiling support!?" }, { "msg_contents": "On 2023-May-07, Tom Lane wrote:\n\n> I wrote:\n> > Ping? The hours grow short, if we're to get this into 15.3.\n> \n> I went ahead and pushed v2, since we can't wait any longer if\n> we're to get reasonable buildfarm coverage before 15.3 wraps.\n\nMuch appreciated. 
I wanted to get back to this yesterday but was unable\nto.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 7 May 2023 19:06:09 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MERGE lacks ruleutils.c decompiling support!?" }, { "msg_contents": "BTW, I spent some time adding a cross-check to see if the code was at\nleast approximately correct for all the queries in the MERGE regression\ntests, and couldn't find any failures. I then extended the test to the\nother optimizable commands, and couldn't find any problems there either.\n\nMy approach was perhaps a bit simple-minded: I just patched\npg_analyze_and_rewrite_fixedparams() to call pg_get_querydef() after\nparse_analyze_fixedparams(), then ran the main regression tests. No\ncrashes. Also had it output as a WARNING together with the\nquery_string, so that I could eyeball for any discrepancies; I couldn't\nfind any queries that produce wrong contents, though this was just\nmanual examination of the resulting noise logs.\n\nI suppose a better approach might be to run the query produced by\npg_get_querydef() again through parse analysis and see if it produces\nthe same; or better yet, discard the original parsed query and parse the\npg_get_querydef(). I didn't try to do this.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No deja de ser humillante para una persona de ingenio saber\nque no hay tonto que no le pueda enseñar algo.\" (Jean B. Say)\n\n\n", "msg_date": "Wed, 10 May 2023 11:40:36 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MERGE lacks ruleutils.c decompiling support!?" } ]
[ { "msg_contents": "... at\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=56e869a0987c93f594e73c1c3e49274de5c502d3\n\nAs usual, please send corrections/comments by Sunday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 May 2023 12:42:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "First draft of back-branch release notes is done" } ]
[ { "msg_contents": "Let me know if results like this shouldn't be posted here.\n\nThis is mostly a hobby project for me - my other hobby is removing\ninvasive weeds. I am happy to answer questions and run more tests, but\nturnaround for answers won't be instant. Getting results from Linux perf\nfor these tests is on my TODO list. For now I am just re-running a subset\nof these to get more certainty that the regressions are real and not noise.\n\nThese are results for the insert benchmark on a small server comparing\nperformance for versions 15.2 and 16. For version 16 I built from source at\ngit sha 1ab763fc.\nhttps://github.com/postgres/postgres/commit/1ab763fc22adc88e5d779817e7b42b25a9dd7c9e\n\nLate in the version 15 beta release cycle this benchmark found a\nsignificant regression. I don't see anything significant this time, but\nthere are potential small regressions.\n\nMore detail on how I run the benchmark and the HW is here, the server is\nsmall -- Beelink with 8 cores, 16G RAM and 1T NVMe SSD.\nhttp://smalldatum.blogspot.com/2023/05/the-insert-benchmark-postgres-versions.html\n\nPerformance reports are linked below. But first, disclaimers:\n* the goal is to determine whether there are CPU improvements or\nregressions. To make that easier to spot I disable fsync on commit.\n* my scripts compute CPU/operation where operation is a SQL statement.\nHowever, CPU in this case is measured by vmstat and includes CPU from the\nbenchmark client and Postgres server\n* the regressions here are small, usually less than 5% which means it can\nbe hard to distinguish between normal variance and a regression but the\nresults are repeatable\n* the links below are to the Summary section which includes throughput\n(absolute and relative). 
The relative throughput is the (throughput for PG\n16 / throughput for PG 15.2) and\n* I used the same compiler options for the builds of 15.2 and 16\n\nSummary of the results:\n* from r1 - insert-heavy (l.i0, l.i1) and create indexes (l.x) steps are\n~2% slower in PG 16\n* from r2 - create index (l.x) step is ~4% slower in PG 16\n* from r3 - regressions are similar to r1\n* from r4, r5 and r6 - regressions are mostly worse than r1, r2, r3. Note\nr4, r5, r6 are the same workload as r1, r2, r3 except the database is\ncached by PG for r1, r2, r3 so the r4, r5 and r6 benchmarks will do much\nmore copying from the OS page cache into the Postgres buffer pool.\n\nI will repeat r1, r2, r4 and r5 but with the tests run in a different order\nto confirm this isn't just noise.\n\nDatabase cached by Postgres:\nr1) 1 table, 1 client -\nhttps://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.1u.1tno.cached/all.html#summary\nr2) 4 tables, 4 clients -\nhttps://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.4u.1tno.cached/all.html#summary\nr3) 1 table, 4 clients -\nhttps://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.4u.1tyes.cached/all.html#summary\n\nDatabase cached by OS but not by Postgres:\nr4) 1 table, 1 client -\nhttps://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.1u.1tno.1g/all.html#summary\nr5) 4 tables, 4 clients -\nhttps://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.4u.1tno.1g/all.html#summary\nr6) 1 table, 4 clients -\nhttps://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.4u.1tyes.1g/all.html#summary\n\n\n-- \nMark Callaghan\nmdcallag@gmail.com\n\nLet me know if results like this shouldn't be posted here.This is mostly a hobby project for me - my other hobby is removing invasive weeds. I am happy to answer questions and run more tests, but turnaround for answers won't be instant. Getting results from Linux perf for these tests is on my TODO list. 
For now I am just re-running a subset of these to get more certainty that the regressions are real and not noise.These are results for the insert benchmark on a small server comparing performance for versions 15.2 and 16. For version 16 I built from source at git sha 1ab763fc.https://github.com/postgres/postgres/commit/1ab763fc22adc88e5d779817e7b42b25a9dd7c9eLate in the version 15 beta release cycle this benchmark found a significant regression. I don't see anything significant this time, but there are potential small regressions.More detail on how I run the benchmark and the HW is here, the server is small -- Beelink with 8 cores, 16G RAM and 1T NVMe SSD.http://smalldatum.blogspot.com/2023/05/the-insert-benchmark-postgres-versions.htmlPerformance reports are linked below. But first, disclaimers:* the goal is to determine whether there are CPU improvements or regressions. To make that easier to spot I disable fsync on commit.* my scripts compute CPU/operation where operation is a SQL statement. However, CPU in this case is measured by vmstat and includes CPU from the benchmark client and Postgres server* the regressions here are small, usually less than 5% which means it can be hard to distinguish between normal variance and a regression but the results are repeatable* the links below are to the Summary section which includes throughput (absolute and relative). The relative throughput is the (throughput for PG 16 / throughput for PG 15.2) and * I used the same compiler options for the builds of 15.2 and 16Summary of the results:* from r1 - insert-heavy (l.i0, l.i1) and create indexes (l.x) steps are ~2% slower in PG 16* from r2 - create index (l.x) step is ~4% slower in PG 16* from r3 - regressions are similar to r1* from r4, r5 and r6 - regressions are mostly worse than r1, r2, r3. 
Note r4, r5, r6 are the same workload as r1, r2, r3 except the database is cached by PG for r1, r2, r3 so the r4, r5 and r6 benchmarks will do much more copying from the OS page cache into the Postgres buffer pool.I will repeat r1, r2, r4 and r5 but with the tests run in a different order to confirm this isn't just noise.Database cached by Postgres:r1) 1 table, 1 client - https://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.1u.1tno.cached/all.html#summaryr2) 4 tables, 4 clients - https://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.4u.1tno.cached/all.html#summaryr3) 1 table, 4 clients - https://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.4u.1tyes.cached/all.html#summaryDatabase cached by OS but not by Postgres:r4) 1 table, 1 client - https://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.1u.1tno.1g/all.html#summaryr5) 4 tables, 4 clients - https://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.4u.1tno.1g/all.html#summaryr6) 1 table, 4 clients - https://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.4u.1tyes.1g/all.html#summary-- Mark Callaghanmdcallag@gmail.com", "msg_date": "Fri, 5 May 2023 10:45:12 -0700", "msg_from": "MARK CALLAGHAN <mdcallag@gmail.com>", "msg_from_op": true, "msg_subject": "benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "Hi,\n\nOn 2023-05-05 10:45:12 -0700, MARK CALLAGHAN wrote:\n> This is mostly a hobby project for me - my other hobby is removing\n> invasive weeds.\n\nHah :)\n\n\n> Summary of the results:\n> * from r1 - insert-heavy (l.i0, l.i1) and create indexes (l.x) steps are\n> ~2% slower in PG 16\n> * from r2 - create index (l.x) step is ~4% slower in PG 16\n> * from r3 - regressions are similar to r1\n> * from r4, r5 and r6 - regressions are mostly worse than r1, r2, r3. 
Note\n> r4, r5, r6 are the same workload as r1, r2, r3 except the database is\n> cached by PG for r1, r2, r3 so the r4, r5 and r6 benchmarks will do much\n> more copying from the OS page cache into the Postgres buffer pool.\n\nOne thing that's somewhat odd is that there's very marked changes in l.i0's\np99 latency for the four clients cases - but whether 15 or 16 are better\ndiffers between the runs.\n\nr2)\n p99\n20m.pg152_o3_native_lto.cx7 300\n20m.pg16prebeta.cx7 23683\n\nr3)\n p99\n20m.pg152_o3_native_lto.cx7 70245\n20m.pg16prebeta.cx7 8191\n\nr5)\n p99\n20m.pg152_o3_native_lto.cx7 11188\n20m.pg16prebeta.cx7 72720\n\nr6)\n p99\n20m.pg152_o3_native_lto.cx7 1898\n20m.pg16prebeta.cx7 31666\n\n\nI do wonder if there's something getting scheduled in some of these runs\nincreasing latency?\n\nOr what we're seeing depends on the time between the start of the server and\nthe start of the benchmark? It is interesting that the per-second throughput\ngraph shows a lot of up/down at the end:\nhttps://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.4u.1tyes.1g/tput.l.i0.html\n\nBoth 15 and 16 have one very high result at 70s, 15 then has one low, but 16\nhas two low results.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 May 2023 13:34:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "I have two more runs of the benchmark in progress so we will have 3 results\nfor each of the test cases to confirm that the small regressions are\nrepeatable.\n\nOn Fri, May 5, 2023 at 1:34 PM Andres Freund <andres@anarazel.de> wrote:\n\n\n> One thing that's somewhat odd is that there's very marked changes in l.i0's\n> p99 latency for the four clients cases - but whether 15 or 16 are better\n> differs between the runs.\n>\n\n From the response time sections for l.i0 the [1ms, 4ms) bucket has 99% or\nmore for all 6 cases.\nFor 
example,\nhttps://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.1u.1tno.cached/all.html#l.i0.rt\nBut yes, the p99 is as you state. I will wade through my test scripts\ntomorrow to see how the p99 is computed.\n\nI do wonder if there's something getting scheduled in some of these runs\n> increasing latency?\n>\n\nDo you mean interference from other processes? Both the big and small\nservers run Ubuntu 22.04 server (no X) and there shouldn't be many extra\nthings, although Google Cloud adds a few extra things that run in the\nbackground.\n\n\n> Or what we're seeing depends on the time between the start of the server\n> and\n> the start of the benchmark? It is interesting that the per-second\n> throughput\n> graph shows a lot of up/down at the end:\n>\n> https://mdcallag.github.io/reports/23_05_04_ibench.beelink.pg16b.4u.1tyes.1g/tput.l.i0.html\n>\n> Both 15 and 16 have one very high result at 70s, 15 then has one low, but\n> 16\n> has two low results.\n>\n\nImmediately prior to l.i0 the database directory is wiped and then Postgres\nis initialized and started.\n\nThe IPS vs time graphs are more interesting for benchmark steps that run\nlonger. Alas, this can't run too long if the resulting database is to fit\nin <= 16G. But that is a problem for another day. The IPS vs time graphs\nare not a flat line, but I am not ready to pursue that as problem unless it\nshows multi-second write-stalls (fortunately it does not).\n\n-- \nMark Callaghan\nmdcallag@gmail.com\n", "msg_date": "Fri, 5 May 2023 22:01:57 -0700", "msg_from": "MARK CALLAGHAN <mdcallag@gmail.com>", "msg_from_op": true, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "Hello Mark,\n\n05.05.2023 20:45, MARK CALLAGHAN wrote:\n> This is mostly a hobby project for me - my other hobby is removing invasive weeds. I am happy to answer questions and \n> run more tests, but turnaround for answers won't be instant. Getting results from Linux perf for these tests is on my \n> TODO list. 
For now I am just re-running a subset of these to get more certainty that the regressions are real and not \n> noise.\n>\n\nIt's a very interesting topic to me, too. I had developed some scripts to\nmeasure and compare postgres`s performance using miscellaneous public\nbenchmarks (ycsb, tpcds, benchmarksql_tpcc, htapbench, benchbase, gdprbench,\ns64da-benchmark, ...). Having compared 15.3 (56e869a09) with master\n(58f5edf84) I haven't seen significant regressions except a few minor ones.\nFirst regression observed with a simple pgbench test:\npgbench -i benchdb\npgbench -c 10 -T 300 benchdb\n(with default compilation options and fsync = off)\n\nOn master I get:\ntps = 10349.826645 (without initial connection time)\nOn 15.3:\ntps = 11296.064992 (without initial connection time)\n\nThis difference is confirmed by multiple test runs. `git bisect` for this\nregression pointed at f193883fc.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 8 May 2023 16:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "Hi,\n\nOn 2023-05-08 16:00:01 +0300, Alexander Lakhin wrote:\n> This difference is confirmed by multiple test runs. `git bisect` for this\n> regression pointed at f193883fc.\n\nI can reproduce a significant regression due to f193883fc of a workload just\nrunning\nSELECT CURRENT_TIMESTAMP;\n\nA single session running it on my workstation via pgbench -Mprepared gets\nbefore:\ntps = 89359.128359 (without initial connection time)\nafter:\ntps = 83843.585152 (without initial connection time)\n\nObviously this is an extreme workload, but that nevertheless seems too large\nto just accept...\n\n\nMichael, the commit message notes that there were no measured performance\nregression - yet I see one in a trivial test. 
What were you measuring?\n\n\nI'm a bit surprised by the magnitude of the regression, but it's not\nsurprising that there is a performance effect. You're replacing something that\ndoesn't go through the whole generic function rigamarole, and replace it with\nsomething that does...\n\nLooking at two perf profiles, the biggest noticable difference is\n\nBefore:\n\n- 5.51% 0.13% postgres postgres [.] ExecInitResult\n - 5.38% ExecInitResult\n + 2.29% ExecInitResultTupleSlotTL\n - 2.22% ExecAssignProjectionInfo\n - 2.19% ExecBuildProjectionInfo\n 0.47% ExecReadyInterpretedExpr\n - 0.43% ExecInitExprRec\n - 0.10% palloc\n AllocSetAlloc.localalias (inlined)\n + 0.32% expression_tree_walker_impl.localalias (inlined)\n + 0.28% get_typlen\n 0.09% ExecPushExprSlots\n + 0.06% MemoryContextAllocZeroAligned\n + 0.04% MemoryContextAllocZeroAligned\n 0.02% exprType.localalias (inlined)\n + 0.41% ExecAssignExprContext\n + 0.35% MemoryContextAllocZeroAligned\n 0.11% ExecInitQual.localalias (inlined)\n + 0.11% _start\n + 0.02% 0x55b89c764d7f\n\nAfter:\n\n- 6.57% 0.17% postgres postgres [.] ExecInitResult\n - 6.40% ExecInitResult\n - 3.00% ExecAssignProjectionInfo\n - ExecBuildProjectionInfo\n - 0.91% ExecInitExprRec\n - 0.65% ExecInitFunc\n 0.23% fmgr_info_cxt_security\n + 0.18% palloc0\n + 0.07% object_aclcheck\n 0.04% fmgr_info\n 0.05% check_stack_depth\n + 0.05% palloc\n + 0.58% expression_tree_walker_impl.localalias (inlined)\n + 0.55% get_typlen\n 0.37% ExecReadyInterpretedExpr\n + 0.11% MemoryContextAllocZeroAligned\n 0.09% ExecPushExprSlots\n 0.04% exprType.localalias (inlined)\n + 2.77% ExecInitResultTupleSlotTL\n + 0.50% ExecAssignExprContext\n + 0.09% MemoryContextAllocZeroAligned\n 0.05% ExecInitQual.localalias (inlined)\n + 0.10% _start\n\nI.e. we spend more time building the expression state for expression\nevaluation, because we now go through the generic ExecInitFunc(), instead of\nsomething dedicated. 
We also now need to do permission checking etc.\n\n\nI don't think that's the entirety of the regression...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 May 2023 12:11:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "Hi,\n\nOn 2023-05-08 12:11:17 -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2023-05-08 16:00:01 +0300, Alexander Lakhin wrote:\n> > This difference is confirmed by multiple test runs. `git bisect` for this\n> > regression pointed at f193883fc.\n> \n> I can reproduce a significant regression due to f193883fc of a workload just\n> running\n> SELECT CURRENT_TIMESTAMP;\n> \n> A single session running it on my workstation via pgbench -Mprepared gets\n> before:\n> tps = 89359.128359 (without initial connection time)\n> after:\n> tps = 83843.585152 (without initial connection time)\n> \n> Obviously this is an extreme workload, but that nevertheless seems too large\n> to just accept...\n\nAdded an open item for this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 May 2023 09:48:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "On Fri, May 5, 2023 at 10:01 PM MARK CALLAGHAN <mdcallag@gmail.com> wrote:\n\n> I have two more runs of the benchmark in progress so we will have 3\n> results for each of the test cases to confirm that the small regressions\n> are repeatable.\n>\n\nThey get similar results. 
Then I tried Linux perf but the hierarchical call\nstacks, to be used for Flamegraph, have too many \"[unknown]\" entries.\nI was using: ./configure --prefix=$pfx --enable-debug CFLAGS=\"-O3\n-march=native -mtune=native -flto\" LDFLAGS=\"-flto\" > o.cf.$x 2> e.cf.$x\nAdding -fno-omit-frame-pointer fixes the problem, so I am repeating the\nbenchmark with that to confirm there are still regressions and then I will\nget flamegraphs.\n\n-- \nMark Callaghan\nmdcallag@gmail.com", "msg_date": "Tue, 9 May 2023 10:36:09 -0700", "msg_from": "MARK CALLAGHAN <mdcallag@gmail.com>", "msg_from_op": true, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "08.05.2023 16:00, Alexander Lakhin wrote:\n> ... 
Having compared 15.3 (56e869a09) with master\n> (58f5edf84) I haven't seen significant regressions except a few minor ones.\n> First regression observed with a simple pgbench test:\n\nAnother noticeable, but not critical performance degradation is revealed by\nquery 87 from TPC-DS (I use s64da-benchmark):\nhttps://github.com/swarm64/s64da-benchmark-toolkit/blob/master/benchmarks/tpcds/queries/queries_10/87.sql\n\nWith `prepare_benchmark --scale-factor=2`, `run_benchmark --scale-factor=10`\nI get on master:\n2023-05-10 09:27:52,888 INFO    : finished 80/103: query 87 of stream  0: 2.26s OK\nbut on REL_15_STABLE:\n2023-05-10 08:13:40,648 INFO    : finished 80/103: query 87 of stream  0: 1.94s OK\n\nThis time `git bisect` pointed at 3c6fc5820. Having compared execution plans\n(both attached), I see the following differences (3c6fc5820~1 vs 3c6fc5820):\n->  Subquery Scan on \"*SELECT* 1\"  (cost=149622.00..149958.68 rows=16834 width=21) (actual time=1018.606..1074.468 \nrows=93891 loops=1)\n  ->  Unique  (cost=149622.00..149790.34 rows=16834 width=17) (actual time=1018.604..1064.790 rows=93891 loops=1)\n   ->  Sort  (cost=149622.00..149664.09 rows=16834 width=17) (actual time=1018.603..1052.591 rows=94199 loops=1)\n    ->  Gather  (cost=146588.59..148440.33 rows=16834 width=17) (actual time=880.899..913.978 rows=94199 loops=1)\nvs\n->  Subquery Scan on \"*SELECT* 1\"  (cost=147576.79..149829.53 rows=16091 width=21) (actual time=1126.489..1366.751 \nrows=93891 loops=1)\n  ->  Unique  (cost=147576.79..149668.62 rows=16091 width=17) (actual time=1126.487..1356.938 rows=93891 loops=1)\n   ->  Gather Merge  (cost=147576.79..149547.94 rows=16091 width=17) (actual time=1126.487..1345.253 rows=94204 loops=1)\n    ->  Unique  (cost=146576.78..146737.69 rows=16091 width=17) (actual time=1124.426..1306.532 rows=47102 loops=2)\n     ->  Sort  (cost=146576.78..146617.01 rows=16091 width=17) (actual time=1124.424..1245.110 rows=533434 loops=2)\n\n->  Subquery Scan on \"*SELECT* 2\"  
(cost=52259.82..52428.16 rows=8417 width=21) (actual time=653.640..676.879 rows=62744 \nloops=1)\n  ->  Unique  (cost=52259.82..52343.99 rows=8417 width=17) (actual time=653.639..670.405 rows=62744 loops=1)\n   ->  Sort  (cost=52259.82..52280.86 rows=8417 width=17) (actual time=653.637..662.428 rows=62744 loops=1)\n    ->  Gather  (cost=50785.20..51711.07 rows=8417 width=17) (actual time=562.158..571.737 rows=62744 loops=1)\n     ->  HashAggregate  (cost=49785.20..49869.37 rows=8417 width=17) (actual time=538.263..544.336 rows=31372 loops=2)\n      ->  Nested Loop  (cost=0.85..49722.07 rows=8417 width=17) (actual time=2.049..469.747 rows=284349 loops=2)\nvs\n->  Subquery Scan on \"*SELECT* 2\"  (cost=48503.68..49630.12 rows=8046 width=21) (actual time=700.050..828.388 rows=62744 \nloops=1)\n  ->  Unique  (cost=48503.68..49549.66 rows=8046 width=17) (actual time=700.047..821.836 rows=62744 loops=1)\n   ->  Gather Merge  (cost=48503.68..49489.31 rows=8046 width=17) (actual time=700.047..814.136 rows=62744 loops=1)\n    ->  Unique  (cost=47503.67..47584.13 rows=8046 width=17) (actual time=666.348..763.403 rows=31372 loops=2)\n     ->  Sort  (cost=47503.67..47523.78 rows=8046 width=17) (actual time=666.347..730.336 rows=284349 loops=2)\n      ->  Nested Loop  (cost=0.85..46981.72 rows=8046 width=17) (actual time=1.852..454.111 rows=284349 loops=2)\n\n->  Subquery Scan on \"*SELECT* 3\"  (cost=50608.83..51568.70 rows=7165 width=21) (actual time=302.571..405.305 rows=23737 \nloops=1)\n  ->  Unique  (cost=50608.83..51497.05 rows=7165 width=17) (actual time=302.568..402.818 rows=23737 loops=1)\n   ->  Gather Merge  (cost=50608.83..51443.31 rows=7165 width=17) (actual time=302.567..372.246 rows=287761 loops=1)\n    ->  Sort  (cost=49608.81..49616.27 rows=2985 width=17) (actual time=298.204..310.075 rows=95920 loops=3)\n     ->  Nested Loop  (cost=2570.65..49436.52 rows=2985 width=17) (actual time=3.205..229.192 rows=95920 loops=3)\nvs\n->  Subquery Scan on \"*SELECT* 3\"  
(cost=50541.84..51329.11 rows=5708 width=21) (actual time=302.042..336.820 rows=23737 \nloops=1)\n  ->  Unique  (cost=50541.84..51272.03 rows=5708 width=17) (actual time=302.039..334.329 rows=23737 loops=1)\n   ->  Gather Merge  (cost=50541.84..51229.22 rows=5708 width=17) (actual time=302.039..331.296 rows=24128 loops=1)\n    ->  Unique  (cost=49541.81..49570.35 rows=2854 width=17) (actual time=298.771..320.560 rows=8043 loops=3)\n     ->  Sort  (cost=49541.81..49548.95 rows=2854 width=17) (actual time=298.770..309.603 rows=95920 loops=3)\n      ->  Nested Loop  (cost=2570.52..49378.01 rows=2854 width=17) (actual time=3.209..230.291 rows=95920 loops=3)\n\n From the commit message and the discussion [1], it's not clear to me, whether\nthis plan change is expected. Perhaps it's too minor issue to bring attention\nto, but maybe this information could be useful for v16 performance analysis.\n\n[1] https://postgr.es/m/CAApHDvo8Lz2H=42urBbfP65LTcEUOh288MT7DsG2_EWtW1AXHQ@mail.gmail.com\n\nBest regards,\nAlexander", "msg_date": "Wed, 10 May 2023 16:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "On Thu, 11 May 2023 at 01:00, Alexander Lakhin <exclusion@gmail.com> wrote:\n> This time `git bisect` pointed at 3c6fc5820. Having compared execution plans\n> (both attached), I see the following differences (3c6fc5820~1 vs 3c6fc5820):\n\nBased on what you've sent, I'm uninspired to want to try to do\nanything about it. The patched version finds a plan that's cheaper.\nThe row estimates are miles off with both plans. I'm not sure what\nwe're supposed to do here. It's pretty hard to make changes to the\nplanner's path generation without risking that a bad plan is chosen\nwhen it wasn't beforehand with bad row estimates.\n\nIs the new plan still slower if you increase work_mem so that the sort\nno longer goes to disk? 
Maybe the planner would have picked Hash\nAggregate if the row estimates had been such that cost_tuplesort()\nknew that the sort would have gone to disk.\n\nDavid\n\n\n", "msg_date": "Thu, 11 May 2023 10:27:28 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "On Tue, May 09, 2023 at 09:48:24AM -0700, Andres Freund wrote:\n> On 2023-05-08 12:11:17 -0700, Andres Freund wrote:\n>> I can reproduce a significant regression due to f193883fc of a workload just\n>> running\n>> SELECT CURRENT_TIMESTAMP;\n>> \n>> A single session running it on my workstation via pgbench -Mprepared gets\n>> before:\n>> tps = 89359.128359 (without initial connection time)\n>> after:\n>> tps = 83843.585152 (without initial connection time)\n>> \n>> Obviously this is an extreme workload, but that nevertheless seems too large\n>> to just accept...\n> \n> Added an open item for this.\n\nThanks for the report, I'll come back to it and look at it at the\nbeginning of next week. In the worst case, that would mean a revert\nof this refactoring, I assume.\n--\nMichael", "msg_date": "Thu, 11 May 2023 13:28:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "11.05.2023 01:27, David Rowley wrote:\n> On Thu, 11 May 2023 at 01:00, Alexander Lakhin <exclusion@gmail.com> wrote:\n>> This time `git bisect` pointed at 3c6fc5820. Having compared execution plans\n>> (both attached), I see the following differences (3c6fc5820~1 vs 3c6fc5820):\n> Based on what you've sent, I'm uninspired to want to try to do\n> anything about it. 
The patched version finds a plan that's cheaper.\n> The row estimates are miles off with both plans.\n\nI've made sure that s64da-benchmark performs analyze before running the\nqueries (pg_class.reltuples fields for tables in question contain actual\ncounts), so it seems that nothing can be done on the benchmark side to\nimprove those estimates.\n\n> ... It's pretty hard to make changes to the\n> planner's path generation without risking that a bad plan is chosen\n> when it wasn't beforehand with bad row estimates.\n\nYeah, I see. It's also interesting to me, which tests perform better after\nthat commit. It takes several hours to run all tests, so I can't present\nresults quickly, but I'll try to collect this information next week.\n\n> Is the new plan still slower if you increase work_mem so that the sort\n> no longer goes to disk? Maybe the planner would have picked Hash\n> Aggregate if the row estimates had been such that cost_tuplesort()\n> knew that the sort would have gone to disk.\n\nYes, increasing work_mem to 50MB doesn't affect the plans (new plans\nattached), though the sort method changed to quicksort. 
The former plan is\nstill executed slightly faster.\n\nBest regards,\nAlexander", "msg_date": "Thu, 11 May 2023 16:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "On Thu, May 11, 2023 at 01:28:40PM +0900, Michael Paquier wrote:\n> On Tue, May 09, 2023 at 09:48:24AM -0700, Andres Freund wrote:\n>> On 2023-05-08 12:11:17 -0700, Andres Freund wrote:\n>>> I can reproduce a significant regression due to f193883fc of a workload just\n>>> running\n>>> SELECT CURRENT_TIMESTAMP;\n>>> \n>>> A single session running it on my workstation via pgbench -Mprepared gets\n>>> before:\n>>> tps = 89359.128359 (without initial connection time)\n>>> after:\n>>> tps = 83843.585152 (without initial connection time)\n>>> \n>>> Obviously this is an extreme workload, but that nevertheless seems too large\n>>> to just accept...\n\nExtreme is adapted for a worst-case scenario. Looking at my notes\nfrom a few months back, that's kind of what I did on my laptop, which\nwas the only machine I had at hand back then:\n- Compilation of code with -O2.\n- Prepared statement of a simple SELECT combined with a DO block\nrunning executed the query in a loop a few million times on a single\nbackend:\nPREPARE test AS SELECT CURRENT_TIMESTAMP;\nDO $$\nBEGIN\n FOR i IN 1..10000000\n LOOP\n EXECUTE 'EXECUTE test';\n END LOOP;\nEND $$;\n- The second test is mentioned at [1], with a generate_series() on a\nkeyword.\n- And actually I recall some pgbench runs similar to that.. But I\ndon't have any traces of that in my notes.\n\nThis was not showing much difference, and it does not now, either.\nFunny thing is that the pre-patch period was showing signs of being a\nbit slower in this environment. 
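For what it's worth, the same single-statement loop can also be driven as a pgbench custom script instead of a DO block; a sketch (the script path here is made up):

```shell
# Write a one-statement script and run it through pgbench's prepared mode,
# which exercises the same prepared-statement path as the DO loop above.
cat > /tmp/select_now.sql <<'EOF'
SELECT CURRENT_TIMESTAMP;
EOF
echo "then run: pgbench -n -M prepared -f /tmp/select_now.sql -T 60"
```

The -n flag skips the vacuum of the standard pgbench tables, which do not exist for a custom-script run like this.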
Anyway, I have just tested the DO\ncase in a second \"bigbox\" environment that I have set up for\nbenchmarking a few days ago, and the DO test is showing me a 1%~1.5%\nregression in runtime. That's repeatable so that's not noise.\n\nI have re-run a bit more pgbench (1 client, prepared query with a\nsingle SELECT on a SQL keyword, etc.). And, TBH, I am not seeing as\nmuch difference as you do (nothing with default pgbench setup, FWIW),\nstill that's able to show a bit more difference than the other two\ncases. HEAD shows me an average output close to 43900 TPS (3 run of\n60s each, for instance), while relying on SQLValueFunction shows an\naverage of 45000TPS. That counts for ~2.4% output regression here\non bigbox based on these numbers. Not a regression as high as\nmentioned above, still that's visible.\n\n>> Added an open item for this.\n> \n> Thanks for the report, I'll come back to it and look at it at the\n> beginning of next week. In the worst case, that would mean a revert\n> of this refactoring, I assume.\n\nSo, this involves commits 7aa81c6, f193883 and fb32748. 7aa81c6 has\nadded some tests, so I would let the tests it added be on HEAD as the\nprecision was not tested for the SQL keywords this has added cover\nfor.\n\nfb32748 has removed the dependency of SQLValueFunction on collations\nby making the name functions use COERCE_SQL_SYNTAX. One case where\nthese could be heavily used is RLS, for example, so that could be\nvisible. f193883 has removed the rest and the timestamp keywords.\n\nI am not going to let that hang in the air with beta1 getting released\nnext week, though, so attached are two patches to revert the change\n(these would be combined, just posted this way for clarity). The only\nconflict I could see is caused by the query jumbling where a\nSQLValueFunction needs \"typmod\" and \"op\" in the computation, ignoring\nthe rest, so the node definition primnodes.h gains the required\nnode_attr. 
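As a cross-check, the ~2.4% above is just the gap between the two averages quoted earlier (close to 43900 TPS on HEAD vs 45000 TPS with SQLValueFunction); sketching the arithmetic:

```shell
# Gap between the two averaged pgbench tps numbers quoted above.
reg=$(awk 'BEGIN { printf "%.1f", 100 * (45000 - 43900) / 45000 }')
echo "regression: ${reg}%"   # about 2.4%
```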
Making the execution path cheaper while avoiding the \ncollation tweaks for SQLValueFunction would be nice, but not at this\ntime of the year on this branch.\n\n[1]: https://www.postgresql.org/message-id/Y0+dHDYA46UnnLs/@paquier.xyz\n--\nMichael", "msg_date": "Mon, 15 May 2023 14:20:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "Hi,\n\nOn 2023-05-15 14:20:24 +0900, Michael Paquier wrote:\n> On Thu, May 11, 2023 at 01:28:40PM +0900, Michael Paquier wrote:\n> > On Tue, May 09, 2023 at 09:48:24AM -0700, Andres Freund wrote:\n> >> On 2023-05-08 12:11:17 -0700, Andres Freund wrote:\n> >>> I can reproduce a significant regression due to f193883fc of a workload just\n> >>> running\n> >>> SELECT CURRENT_TIMESTAMP;\n> >>> \n> >>> A single session running it on my workstation via pgbench -Mprepared gets\n> >>> before:\n> >>> tps = 89359.128359 (without initial connection time)\n> >>> after:\n> >>> tps = 83843.585152 (without initial connection time)\n> >>> \n> >>> Obviously this is an extreme workload, but that nevertheless seems too large\n> >>> to just accept...\n> \n> Extreme is adapted for a worst-case scenario. Looking at my notes\n> from a few months back, that's kind of what I did on my laptop, which\n> was the only machine I had at hand back then:\n> - Compilation of code with -O2.\n\nI assume without assertions as well?\n\n\n> I have re-run a bit more pgbench (1 client, prepared query with a\n> single SELECT on a SQL keyword, etc.). And, TBH, I am not seeing as\n> much difference as you do (nothing with default pgbench setup, FWIW),\n> still that's able to show a bit more difference than the other two\n> cases.\n\n> HEAD shows me an average output close to 43900 TPS (3 run of\n> 60s each, for instance), while relying on SQLValueFunction shows an\n> average of 45000TPS. 
That counts for ~2.4% output regression here\n> on bigbox based on these numbers. Not a regression as high as\n> mentioned above, still that's visible.\n\n45k seems too low for a modern machine, given that I get > 80k in such a\nworkload, on a workstation with server CPUs (i.e. many cores, but not that\nfast individually). Hence wondering about assertions being enabled...\n\nI get quite variable performance if I don't pin client / server to the same\ncore, but even the slow performance is faster than 45k.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 May 2023 17:14:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "On Mon, May 15, 2023 at 05:14:47PM -0700, Andres Freund wrote:\n> On 2023-05-15 14:20:24 +0900, Michael Paquier wrote:\n>> On Thu, May 11, 2023 at 01:28:40PM +0900, Michael Paquier wrote:\n>> Extreme is adapted for a worst-case scenario. Looking at my notes\n>> from a few months back, that's kind of what I did on my laptop, which\n>> was the only machine I had at hand back then:\n>> - Compilation of code with -O2.\n> \n> I assume without assertions as well?\n\nYup, no assertions.\n\n> 45k seems too low for a modern machine, given that I get > 80k in such a\n> workload, on a workstation with server CPUs (i.e. many cores, but not that\n> fast individually). Hence wondering about assertions being enabled...\n\nNope, disabled.\n\n> I get quite variable performance if I don't pin client / server to the same\n> core, but even the slow performance is faster than 45k.\n\nOkay. 
You mean with something like taskset or similar, I guess?\n--\nMichael", "msg_date": "Tue, 16 May 2023 09:42:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "Hi,\n\nOn 2023-05-16 09:42:31 +0900, Michael Paquier wrote:\n> > I get quite variable performance if I don't pin client / server to the same\n> > core, but even the slow performance is faster than 45k.\n> \n> Okay. You mean with something like taskset or similar, I guess?\n\nYes. numactl --physcpubind ... in my case. Linux has an optimization where it\ndoes not need to send an IPI when the client and server are scheduled on the\nsame core. For single threaded ping-pong tasks like pgbench -c1, that can make\na huge difference, particularly on larger CPUs. So you get a lot better\nperformance when forcing things to be colocated.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 May 2023 17:54:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "On Mon, May 15, 2023 at 05:54:53PM -0700, Andres Freund wrote:\n> Yes. numactl --physcpubind ... in my case. Linux has an optimization where it\n> does not need to send an IPI when the client and server are scheduled on the\n> same core. For single threaded ping-pong tasks like pgbench -c1, that can make\n> a huge difference, particularly on larger CPUs. 
So you get a lot better\n> performance when forcing things to be colocated.\n\nYes, that's not bringing the numbers higher with the simple cases I\nreported previously, either.\n\nAnyway, even if I cannot see such a high difference, I don't see how\nto bring back the original numbers you are reporting without doing\nmore inlining and tying COERCE_SQL_SYNTAX more tightly within the\nexecutor's portions for the FuncExprs, and there are the collation\nassumptions as well. Perhaps that's not the correct thing to do with\nSQLValueFunction remaining around, but nothing can be done for v16, so\nI am planning to just revert the change before beta1, and look at it\nagain later, from scratch.\n--\nMichael", "msg_date": "Tue, 16 May 2023 14:42:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "15.05.2023 08:20, Michael Paquier wrote:\n> I am not going to let that hang in the air with beta1 getting released\n> next week, though, so attached are two patches to revert the change\n> (these would be combined, just posted this way for clarity).\n\nI can confirm that the patches improve (restore) performance for my test:\npgbench -i benchdb\npgbench -c 10 -T 300 benchdb\n\ntps (over three runs):\nHEAD (08c45ae23):\n10238.441580, 10697.202119, 10706.764703\n\nHEAD with the patches:\n11134.510118, 11176.554996, 11150.338488\n\nf193883fc~1 (240e0dbac)\n11082.561388, 11233.604446, 11087.071768\n\n15.3 (8382864eb)\n11328.699555, 11128.057883, 11057.934392\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 16 May 2023 18:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "On Tue, May 16, 2023 at 06:00:00PM +0300, Alexander Lakhin wrote:\n> I can confirm that the patches improve (restore) performance for my test:\n> pgbench -i 
benchdb\n> pgbench -c 10 -T 300 benchdb\n\nThanks for running these!\n\n> tps (over three runs):\n> HEAD (08c45ae23):\n> 10238.441580, 10697.202119, 10706.764703\n> \n> HEAD with the patches:\n> 11134.510118, 11176.554996, 11150.338488\n> \n> f193883fc~1 (240e0dbac)\n> 11082.561388, 11233.604446, 11087.071768\n> \n> 15.3 (8382864eb)\n> 11328.699555, 11128.057883, 11057.934392\n\nThe numbers between f193883fc~1 and HEAD+patch are close to each\nother. It does not seem to make the whole difference with 15.3, but\nmost of it. The difference can also be explained with some noise,\nbased on the number patterns of the third runs?\n\nI have now applied the revert, ready for beta1. Thanks for the\nfeedback!\n--\nMichael", "msg_date": "Wed, 17 May 2023 10:25:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "17.05.2023 04:25, Michael Paquier wrote:\n> The numbers between f193883fc~1 and HEAD+patch are close to each\n> other. It does not seem to make the whole difference with 15.3, but\n> most of it. The difference can also be explained with some noise,\n> based on the number patterns of the third runs?\n>\n> I have now applied the revert, ready for beta1. Thanks for the\n> feedback!\n>\n\nThank you for paying attention to it!\n\nYes, I ran the benchmark on my workstation, so numbers could vary due parallel\nactivity. 
Now I've compared 15.3 (8382864eb) with d8c3106bb and 1d369c9e9,\nthis time with the CPU boost mode disabled:\n1d369c9e9:\n10007.130326, 10047.722122, 9920.612426, 10016.053846, 10060.606408\nd8c3106bb:\n10492.100485, 10505.326827, 10535.918137, 10625.904871, 10573.608859\n15.3:\n10458.752330, 10308.677192, 10366.496526, 10489.395275, 10319.458041\n\nBest \"1d369c9e9\" worse than \"15.3\" by 4.1 percent (10060.61 < 10489.40)\nAverage \"1d369c9e9\" worse than \"15.3\" by 3.6 percent (10010.43 < 10388.56)\n\nBest \"d8c3106bb\" better than \"15.3\" by 1.3 percent (10625.90 > 10489.40)\nAverage \"d8c3106bb\" better than \"15.3\" by 1.5 percent (10546.57 > 10388.56)\n\nSo it seems that there is nothing left on this plate.\n\nBest regards,\nAlexander", "msg_date": "Wed, 17 May 2023 13:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "On Tue, May 9, 2023 at 10:36 AM MARK CALLAGHAN <mdcallag@gmail.com> wrote:\n\n>\n>\n> On Fri, May 5, 2023 at 10:01 PM MARK CALLAGHAN <mdcallag@gmail.com> wrote:\n>\n>> I have two more runs of the benchmark in progress so we will have 3\n>> results for each of the test cases to confirm that the small regressions\n>> are repeatable.\n>>\n>\nI repeated the benchmark a few times using a more recent PG16 build (git\nsha 08c45ae2) and have yet to see any significant changes. So that is good\nnews. 
My testing scripts have been improved so I should be able to finish\nthe next round of tests in less time.\n\n--\nMark Callaghan\nmdcallag@gmail.com", "msg_date": "Fri, 19 May 2023 15:44:09 -0700", "msg_from": "MARK CALLAGHAN <mdcallag@gmail.com>", "msg_from_op": true, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "Hi,\n\nOn 2023-05-19 15:44:09 -0700, MARK CALLAGHAN wrote:\n> On Tue, May 9, 2023 at 10:36 AM MARK CALLAGHAN <mdcallag@gmail.com> wrote:\n> \n> >\n> >\n> > On Fri, May 5, 2023 at 10:01 PM MARK CALLAGHAN <mdcallag@gmail.com> wrote:\n> >\n> >> I have two more runs of the benchmark in progress so we will have 3\n> >> results for each of the test cases to confirm that the small regressions\n> >> are repeatable.\n> >>\n> >\n> I repeated the benchmark a few times using a more recent PG16 build (git\n> sha 08c45ae2) and have yet to see any significant changes. So that is good\n> news. My testing scripts have been improved so I should be able to finish\n> the next round of tests in less time.\n\nWith \"yet to see any significant changes\" do you mean that the runs are\ncomparable with earlier runs, showing the same regression? Or that the\nregression vanished? 
Or ...?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 19 May 2023 16:04:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "On Fri, May 19, 2023 at 4:04 PM Andres Freund <andres@anarazel.de> wrote:\n\n> With \"yet to see any significant changes\" do you mean that the runs are\n> comparable with earlier runs, showing the same regression? Or that the\n> regression vanished? Or ...?\n>\n\nI mean that I might be chasing noise and the mean+stddev for throughput in\nversion 16 pre-beta so far appears to be similar to 15.2. When I ran the\ninsert benchmark a few times, I focused on the cases where 16 pre-beta was\nworse than 15.2 while ignoring the cases where it was better. Big\nregressions are easy to document, small ones not so much.\n\nRegardless, I am repeating tests from both the insert benchmark and\nsysbench for version 16 (pre-beta, and soon beta1).", "msg_date": "Sat, 20 May 2023 14:32:36 -0700", "msg_from": "MARK CALLAGHAN <mdcallag@gmail.com>", "msg_from_op": true, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "I ran sysbench on Postgres 15.2, 15.3 and 16 prebeta at git sha 1c006c0\n(built on May 19).\nThe workload was in-memory on a small server (8 cores, 16G RAM) and the\nworkload had 1 connection (no concurrency).\nFor some details on past benchmarks like this see:\nhttp://smalldatum.blogspot.com/2023/03/searching-for-performance-regressions.html\n\nMy focus is on changes >= 10%, so a value <= 0.90 or >= 1.10.\nI used 3 builds of Postgres that I call def, o2_nofp, o3_nofp and ran the\nbenchmark once per build. The results for each build are similar\nand I only share the o2_nofp results here.\n\nGood news, that I have not fully explained ...\n\nOne of the microbenchmarks gets ~1.5X more transactions/second (TPS) in PG\n16 prebeta vs 15.2 and 15.3 for a read-only transaction that does:\n* 2 select statements that scan 10k rows from an index (these are Q1 and\nQ3 below and are slower in PG 16)\n* 2 select statements that scan 10k rows from an index and do\naggregation (these are Q2 and Q4 below and are a lot faster in PG 16)\n\nThe speedup for Q2 and Q4 is larger than the slowdown for Q1/Q3 so TPS is\n~1.5X more for PG 16.\nQuery plans don't appear to have changed. I assume some code got slower and\nsome got faster for the same plan.\n\nThe microbenchmarks are read-only_range=10000 and read-only.pre_range=10000,\nas the results below show.\nEach of these microbenchmarks runs a read-only transaction with 4 SQL\nstatements. 
The statements are here:\nhttps://github.com/mdcallag/mytools/blob/master/bench/sysbench.lua/lua/oltp_common.lua#LL301C1-L312C21\n\nread-only.pre_range runs before a large number of writes, so the b-tree\nwill be more read-friendly.\nread-only.range runs after a large number of writes.\n\nThe =10000 means that each SQL statement processes 10000 rows. Note that\nthe microbenchmarks are also run for =100 and =10\nand for those the perf with PG16 is similar to 15.x rather than ~1.5X\nfaster.\n\n---\n\nThis table shows throughput relative to the base case. The base case is PG\n15.2 with the o2_nofp build.\nThroughput relative < 1.0 means perf regressed, > 1.0 means perf improved\n\ncol-1 : PG 15.3 with the o2_nofp build\ncol-2 : PG 16 prebeta build on May 19 at git sha 1c006c0 with the o2_nofp\nbuild\n\ncol-1 col-2\n0.99 1.03 hot-points_range=100\n1.02 1.05 point-query.pre_range=100\n1.06 1.10 point-query_range=100\n0.97 1.01 points-covered-pk.pre_range=100\n0.98 1.02 points-covered-pk_range=100\n0.98 1.01 points-covered-si.pre_range=100\n1.00 1.00 points-covered-si_range=100\n1.00 1.01 points-notcovered-pk.pre_range=100\n1.00 1.01 points-notcovered-pk_range=100\n1.01 1.03 points-notcovered-si.pre_range=100\n1.01 1.01 points-notcovered-si_range=100\n1.00 0.99 random-points.pre_range=1000\n1.00 1.02 random-points.pre_range=100\n1.01 1.01 random-points.pre_range=10\n1.01 1.00 random-points_range=1000\n1.01 1.01 random-points_range=100\n1.02 1.01 random-points_range=10\n1.00 1.00 range-covered-pk.pre_range=100\n1.00 1.00 range-covered-pk_range=100\n1.00 0.99 range-covered-si.pre_range=100\n1.00 0.99 range-covered-si_range=100\n1.03 1.01 range-notcovered-pk.pre_range=100\n1.02 1.00 range-notcovered-pk_range=100\n1.01 1.01 range-notcovered-si.pre_range=100\n1.01 1.01 range-notcovered-si_range=100\n1.04 1.54 read-only.pre_range=10000 <<<<<<<<<<\n1.00 1.00 read-only.pre_range=100\n1.01 1.01 read-only.pre_range=10\n1.03 1.45 read-only_range=10000 <<<<<<<<<<\n1.01 1.01 
read-only_range=100\n1.04 1.00 read-only_range=10\n1.00 0.99 scan_range=100\n1.00 1.02 delete_range=100\n1.01 1.02 insert_range=100\n1.01 1.00 read-write_range=100\n1.01 0.98 read-write_range=10\n1.01 1.01 update-index_range=100\n1.00 1.00 update-inlist_range=100\n1.02 1.02 update-nonindex_range=100\n1.03 1.03 update-one_range=100\n1.02 1.02 update-zipf_range=100\n1.03 1.03 write-only_range=10000\n\n---\n\nThe read-only transaction has 4 SQL statements. I ran explain analyze for\neach of them assuming the range scan fetches 10k rows and then 100k rows.\nThe 10k result is similar to what was done above, then I added the 100k\nresult to see if the perf difference changes with more rows.\n\nIn each case there are two \"Execution Time\" entries. The top one is from PG\n15.2 and the bottom from PG 16 prebeta\n\nSummary:\n* Queries that do a sort show the largest improvement in PG 16 (Q2, Q4)\n* Queries that don't do a sort are slower in PG 16 (Q1, Q3)\n\nQ1.10k: explain analyze SELECT c FROM sbtest1 WHERE id BETWEEN 10000000 AND\n10010000;\n\n Execution Time: 4.222 ms\n\n Execution Time: 6.243 ms\n\n\nQ1.100k: explain analyze SELECT c FROM sbtest1 WHERE id BETWEEN 10000000\nAND 10100000;\n\n Execution Time: 36.508 ms\n\n Execution Time: 49.344 ms\n\n\nQ2.10k: explain analyze SELECT c FROM sbtest1 WHERE id BETWEEN 10000000 AND\n10010000 order by c;\n\n Execution Time: 38.224 ms\n\n Execution Time: 15.700 ms\n\n\nQ2.100k: explain analyze SELECT c FROM sbtest1 WHERE id BETWEEN 10000000\nAND 10100000 order by c;\n\n Execution Time: 392.380 ms\n\n Execution Time: 219.022 ms\n\n\nQ3.10k: explain analyze SELECT SUM(k) FROM sbtest1 WHERE id BETWEEN\n10000000 AND 10010000;\n\n Execution Time: 3.660 ms\n\n Execution Time: 3.994 ms\n\n\nQ3.100k: explain analyze SELECT SUM(k) FROM sbtest1 WHERE id BETWEEN\n10000000 AND 10100000;\n\nExecution Time: 35.917 ms\n\n Execution Time: 39.055 ms\n\n\nQ4.10k: explain analyze SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN\n10000000 AND 
10010000 ORDER BY c;\n\n Execution Time: 29.998 ms\n\n Execution Time: 18.877 ms\n\n\nQ4.100k: explain analyze SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN\n10000000 AND 10010000 ORDER BY c;\n\n Execution Time: 29.272 ms\n\n Execution Time: 18.265 ms\n\n---\n\nFinally, the queries with full explain analyze output. Each section has two\nresults -- first for PG 15.3, second for PG 16\n\n--- Q1.10k : explain analyze SELECT c FROM sbtest1 WHERE id BETWEEN\n10000000 AND 10010000;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..525.86 rows=8971\nwidth=121) (actual time=0.061..3.676 rows=10001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10010000))\n Planning Time: 0.034 ms\n Execution Time: 4.222 ms\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..589.80 rows=10068\nwidth=121) (actual time=0.094..5.456 rows=10001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10010000))\n Planning Time: 0.063 ms\n Execution Time: 6.243 ms\n\n--- Q1.100k : explain analyze SELECT c FROM sbtest1 WHERE id BETWEEN\n10000000 AND 10100000;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..5206.44 rows=89700\nwidth=121) (actual time=0.017..31.166 rows=100001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10100000))\n Planning Time: 0.024 ms\n Execution Time: 36.508 ms\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..5845.86 rows=100671\nwidth=121) (actual time=0.029..42.285 
rows=100001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10100000))\n Planning Time: 0.061 ms\n Execution Time: 49.344 ms\n\n--- Q2.10k : explain analyze SELECT c FROM sbtest1 WHERE id BETWEEN\n10000000 AND 10010000 order by c;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1114.85..1137.28 rows=8971 width=121) (actual\ntime=36.561..37.429 rows=10001 loops=1)\n Sort Key: c\n Sort Method: quicksort Memory: 2025kB\n -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..525.86\nrows=8971 width=121) (actual time=0.022..3.776 rows=10001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10010000))\n Planning Time: 0.059 ms\n Execution Time: 38.224 ms\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1259.19..1284.36 rows=10068 width=121) (actual\ntime=14.419..15.042 rows=10001 loops=1)\n Sort Key: c\n Sort Method: quicksort Memory: 1713kB\n -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..589.80\nrows=10068 width=121) (actual time=0.023..3.473 rows=10001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10010000))\n Planning Time: 0.049 ms\n Execution Time: 15.700 ms\n\n--- Q2.100k : explain analyze SELECT c FROM sbtest1 WHERE id BETWEEN\n10000000 AND 10100000 order by c;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=18413.03..18637.28 rows=89700 width=121) (actual\ntime=300.717..385.193 rows=100001 loops=1)\n Sort Key: c\n Sort Method: external merge Disk: 12848kB\n -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..5206.44\nrows=89700 width=121) (actual time=0.028..29.590 rows=100001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10100000))\n Planning Time: 0.048 ms\n Execution Time: 392.380 
ms\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=20749.26..21000.94 rows=100671 width=121) (actual\ntime=154.969..211.572 rows=100001 loops=1)\n Sort Key: c\n Sort Method: external merge Disk: 12240kB\n -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..5845.86\nrows=100671 width=121) (actual time=0.026..34.278 rows=100001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10100000))\n Planning Time: 0.034 ms\n Execution Time: 219.022 ms\n\n--- Q3.10k : explain analyze SELECT SUM(k) FROM sbtest1 WHERE id BETWEEN\n10000000 AND 10010000;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=548.29..548.30 rows=1 width=8) (actual time=3.645..3.646\nrows=1 loops=1)\n -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..525.86\nrows=8971 width=4) (actual time=0.024..2.587 rows=10001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10010000))\n Planning Time: 0.036 ms\n Execution Time: 3.660 ms\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=614.97..614.98 rows=1 width=8) (actual time=3.980..3.980\nrows=1 loops=1)\n -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..589.80\nrows=10068 width=4) (actual time=0.024..2.993 rows=10001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10010000))\n Planning Time: 0.038 ms\n Execution Time: 3.994 ms\n\n--- Q3.100k : explain analyze SELECT SUM(k) FROM sbtest1 WHERE id BETWEEN\n10000000 AND 10100000;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=5430.69..5430.70 rows=1 width=8) (actual\ntime=35.901..35.902 rows=1 loops=1)\n -> Index Scan using 
sbtest1_pkey on sbtest1 (cost=0.44..5206.44\nrows=89700 width=4) (actual time=0.017..25.256 rows=100001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10100000))\n Planning Time: 0.032 ms\n Execution Time: 35.917 ms\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=6097.53..6097.55 rows=1 width=8) (actual\ntime=39.034..39.035 rows=1 loops=1)\n -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..5845.86\nrows=100671 width=4) (actual time=0.018..29.291 rows=100001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10100000))\n Planning Time: 0.051 ms\n Execution Time: 39.055 ms\n\n--- Q4.10k : explain analyze SELECT DISTINCT c FROM sbtest1 WHERE id\nBETWEEN 10000000 AND 10010000 ORDER BY c;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=1114.85..1159.71 rows=8971 width=121) (actual\ntime=26.335..29.435 rows=10001 loops=1)\n -> Sort (cost=1114.85..1137.28 rows=8971 width=121) (actual\ntime=26.333..27.085 rows=10001 loops=1)\n Sort Key: c\n Sort Method: quicksort Memory: 2025kB\n -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..525.86\nrows=8971 width=121) (actual time=0.021..2.968 rows=10001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10010000))\n Planning Time: 0.052 ms\n Execution Time: 29.998 ms\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=1259.19..1309.53 rows=10068 width=121) (actual\ntime=14.203..18.318 rows=10001 loops=1)\n -> Sort (cost=1259.19..1284.36 rows=10068 width=121) (actual\ntime=14.200..14.978 rows=10001 loops=1)\n Sort Key: c\n Sort Method: quicksort Memory: 1713kB\n -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..589.80\nrows=10068 width=121) (actual 
time=0.030..3.475 rows=10001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10010000))\n Planning Time: 0.053 ms\n Execution Time: 18.877 ms\n\n--- Q4.100k : explain analyze SELECT DISTINCT c FROM sbtest1 WHERE id\nBETWEEN 10000000 AND 10010000 ORDER BY c;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=1114.85..1159.71 rows=8971 width=121) (actual\ntime=25.567..28.709 rows=10001 loops=1)\n -> Sort (cost=1114.85..1137.28 rows=8971 width=121) (actual\ntime=25.565..26.320 rows=10001 loops=1)\n Sort Key: c\n Sort Method: quicksort Memory: 2025kB\n -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..525.86\nrows=8971 width=121) (actual time=0.025..2.926 rows=10001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10010000))\n Planning Time: 0.052 ms\n Execution Time: 29.272 ms\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=1259.19..1309.53 rows=10068 width=121) (actual\ntime=13.620..17.714 rows=10001 loops=1)\n -> Sort (cost=1259.19..1284.36 rows=10068 width=121) (actual\ntime=13.618..14.396 rows=10001 loops=1)\n Sort Key: c\n Sort Method: quicksort Memory: 1713kB\n -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..589.80\nrows=10068 width=121) (actual time=0.024..3.478 rows=10001 loops=1)\n Index Cond: ((id >= 10000000) AND (id <= 10010000))\n Planning Time: 0.039 ms\n Execution Time: 18.265 ms\n\n-- \nMark Callaghan\nmdcallag@gmail.com\n", "msg_date": "Mon, 22 May 2023 12:40:25 -0700", "msg_from": "MARK CALLAGHAN <mdcallag@gmail.com>", "msg_from_op": true, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "Results for 16 beta1 are similar to what I shared above:\n* no regressions\n* a few queries are >= 1.5 times faster which make the read-only\ntransaction >= 1.5 times faster\n\nhttp://smalldatum.blogspot.com/2023/05/postgres-16beta1-looks-good-vs-sysbench.html\n\n-- \nMark Callaghan\nmdcallag@gmail.com\n", "msg_date": "Sat, 27 May 2023 17:15:08 -0700", "msg_from": "MARK CALLAGHAN <mdcallag@gmail.com>", "msg_from_op": true, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "On Tue, 23 May 2023 at 07:40, MARK CALLAGHAN <mdcallag@gmail.com> wrote:\n\n(pg15)\n\n> --- Q2.10k : explain analyze SELECT c FROM sbtest1 WHERE id BETWEEN 10000000 AND 10010000 order by c;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=1114.85..1137.28 rows=8971 width=121) (actual time=36.561..37.429 rows=10001 loops=1)\n> Sort Key: c\n> Sort Method: quicksort Memory: 2025kB\n> -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..525.86 rows=8971 width=121) (actual time=0.022..3.776 rows=10001 loops=1)\n> Index Cond: ((id >= 
10000000) AND (id <= 10010000))\n> Planning Time: 0.059 ms\n> Execution Time: 38.224 ms\n\n(pg16 b1)\n\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=1259.19..1284.36 rows=10068 width=121) (actual time=14.419..15.042 rows=10001 loops=1)\n> Sort Key: c\n> Sort Method: quicksort Memory: 1713kB\n> -> Index Scan using sbtest1_pkey on sbtest1 (cost=0.44..589.80 rows=10068 width=121) (actual time=0.023..3.473 rows=10001 loops=1)\n> Index Cond: ((id >= 10000000) AND (id <= 10010000))\n> Planning Time: 0.049 ms\n> Execution Time: 15.700 ms\n\nIt looks like the improvements here are due to qsort being faster on\nv16. To get an idea of the time taken to perform the actual qsort,\nyou can't use the first row time vs last row time in the Sort node, as\nwe must (obviously) have performed the sort before outputting the\nfirst row. I think you could get a decent idea of the time taken to\nperform the qsort by subtracting the actual time for the final Index\nscan row from the actual time for the Sort's first row. That's 36.561\n- 3.776 = 32.785 ms for pg15's plan, and 14.419 - 3.473 = 10.946 ms\npg16 b1's\n\nc6e0fe1f2 might have helped improve some of that performance, but I\nsuspect there must be something else as ~3x seems much more than I'd\nexpect from reducing the memory overheads. Testing versions before\nand after that commit might give a better indication.\n\nDavid\n\n\n", "msg_date": "Mon, 29 May 2023 09:42:05 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "On Sun, May 28, 2023 at 2:42 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> c6e0fe1f2 might have helped improve some of that performance, but I\n> suspect there must be something else as ~3x seems much more than I'd\n> expect from reducing the memory overheads. 
Testing versions before\n> and after that commit might give a better indication.\n\nI'm virtually certain that this is due to the change in default\ncollation provider, from libc to ICU. Mostly due to the fact that ICU\nis capable of using abbreviated keys, and the system libc isn't\n(unless you go out of your way to define TRUST_STRXFRM when building\nPostgres).\n\nMany individual test cases involving larger non-C collation text sorts\nshowed similar improvements back when I worked on this. Offhand, I\nbelieve that 3x - 3.5x improvements in execution times were common\nwith high entropy abbreviated keys on high cardinality input columns\nat that time (this was with glibc). Low cardinality inputs were more\nlike 2.5x.\n\nI believe that ICU is faster than glibc in general -- even with\nTRUST_STRXFRM enabled. But the TRUST_STRXFRM thing is bound to be the\nmost important factor here, by far.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 29 May 2023 09:55:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "Do you want me to try PG 16 without ICU or PG 15 with ICU?\nI can do that, but it will take a few days before the server is available.\n\nOn Mon, May 29, 2023 at 9:55 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Sun, May 28, 2023 at 2:42 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > c6e0fe1f2 might have helped improve some of that performance, but I\n> > suspect there must be something else as ~3x seems much more than I'd\n> > expect from reducing the memory overheads. Testing versions before\n> > and after that commit might give a better indication.\n>\n> I'm virtually certain that this is due to the change in default\n> collation provider, from libc to ICU. 
Mostly due to the fact that ICU\n> is capable of using abbreviated keys, and the system libc isn't\n> (unless you go out of your way to define TRUST_STRXFRM when building\n> Postgres).\n>\n> Many individual test cases involving larger non-C collation text sorts\n> showed similar improvements back when I worked on this. Offhand, I\n> believe that 3x - 3.5x improvements in execution times were common\n> with high entropy abbreviated keys on high cardinality input columns\n> at that time (this was with glibc). Low cardinality inputs were more\n> like 2.5x.\n>\n> I believe that ICU is faster than glibc in general -- even with\n> TRUST_STRXFRM enabled. But the TRUST_STRXFRM thing is bound to be the\n> most important factor here, by far.\n>\n> --\n> Peter Geoghegan\n>\n\n\n-- \nMark Callaghan\nmdcallag@gmail.com", "msg_date": "Tue, 30 May 2023 10:03:32 -0700", "msg_from": "MARK CALLAGHAN <mdcallag@gmail.com>", "msg_from_op": true, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "Hi Mark,\n\nOn Tue, May 30, 2023 at 1:03 PM MARK CALLAGHAN <mdcallag@gmail.com> wrote:\n> Do you want me to try PG 16 without ICU or PG 15 with ICU?\n> I can do that, but it will take a few days before the server is available.\n\nSorry for the late reply. Most of the Postgres developers (myself\nincluded) are attending pgCon right now.\n\nIt would be nice to ascertain just how much of a boost we're getting\nfrom our use of ICU as far as sysbench goes. I'd appreciate having\nthat information. We discussed the choice of ICU as default collation\nprovider at yesterday's developer meeting:\n\nhttps://wiki.postgresql.org/wiki/PgCon_2023_Developer_Meeting#High_level_thoughts_and_feedback_on_moving_toward_ICU_as_the_preferred_collation_provider\nhttps://wiki.postgresql.org/wiki/StateOfICU\n\nJust confirming my theory about abbreviated keys (without rerunning\nthe benchmark) should be simple - perhaps that would be a useful place\nto start. You could just rerun the two individual queries of interest\nfrom an interactive psql session. There are even low-level debug\nmessages available through the trace_sort GUC. From a psql session\nyou'd run something along the lines of \"set client_min_messages=log;\nset trace_sort=on; $QUERY\". 
You'll see lots of LOG messages with\nspecific information about the use of abbreviated keys and the\nprogress of each sort.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 31 May 2023 08:34:49 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" }, { "msg_contents": "Hello David,\n\n11.05.2023 16:00, Alexander Lakhin wrote:\n> Yeah, I see. It's also interesting to me, which tests perform better after\n> that commit. It takes several hours to run all tests, so I can't present\n> results quickly, but I'll try to collect this information next week.\n\nTo my regret, I could not find such tests that week, so I decided to try\nlater, after reviewing my benchmark running infrastructure.\n\nBut for now, as Postgres Pro company graciously shared the benchmarking\ninfrastructure project [1], it's possible for anyone to confirm or deny my\nresults. (You can also see which tests were performed and how.)\n\nHaving done one more round of comprehensive testing, I still couldn't find\nwinning tests for commit 3c6fc5820 (but reassured that\ntest s64da_tpcds.query87 loses a little (and also s64da_tpcds.query38)).\nSo it seems to me that the tests I performed or their parameters is not\nrepresentative enough for that improvement, unfortunately.\n\n[1] https://github.com/postgrespro/pg-mark\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 7 Jul 2023 16:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: benchmark results comparing versions 15.2 and 16" } ]
[ { "msg_contents": "Dear PostgreSQL Community,\n\nHope you all are doing very well.\nMy name is Francisco Arbelaez and I would like to introduce myself on behalf of Bancolombia, from Open Source Office.\nFrom the Bancolombia organization, we have the responsibility to migrate private licensed technology to Open Source. More specifically, database migration from Oracle to PostgreSQL.\nOn one of these migrations, we had the necessity to implement an automatic user lockout strategy for users who do not log into the database in a given number of days, similar to a functionality that already exists in the Oracle environment.\nBased on the official contribution documentation, we have reviewed the TODOs and the PostgreSQL archives and have not found any related contribution. From the Open Source Office of Bancolombia, we have a possible contribution which could help to solve this need and to increase the value of PostgreSQL as an Open Source database engine.\n\nIf it matches your needs and objectives, I would like to receive more information related to the next steps of this contribution.\n\nHope to hear from you soon.\n\nBest regards,\nFrancisco Arbelaez\nOpen Source Office of Bancolombia\n\nFrancisco Luis Arbeláez López.\nIngeniero de Software Oficina Open Source.\nVicepresidencia Servicios de Tecnología\n4040000 - 41628\nMedellín – Colombia\nflarbela@bancolombia.com.co", "msg_date": "Fri, 5 May 2023 19:54:10 +0000", "msg_from": "Francisco Luis Arbelaez Lopez <flarbela@bancolombia.com.co>", "msg_from_op": true, "msg_subject": "Bancolombia Open Source Program Office - Proposal of contribution on\n lock inactive users" }, { "msg_contents": "On 5/5/23 15:54, Francisco Luis Arbelaez Lopez wrote:\n> If it matches your needs and objectives, I would like to receive more \n> information related to the next steps of this contribution.\n\nIf you want to contribute a patch for consideration, you would start by \nsending it here (to this list) with discussion about why it is needed, \nwhat problem it solves, how it is designed, how it performs, etc. Also \nregister it for the next commitfest. 
See \"What is a CommitFest?\" here:\n\nhttps://www.postgresql.org/developer/\n\nBut first, I suggest you read through some or all of the following as well:\n\nhttps://wiki.postgresql.org/wiki/Development_information\n\nhttps://wiki.postgresql.org/wiki/Developer_FAQ#Getting_Involved\n\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n\nhttps://momjian.us/main/writings/pgsql/company_contributions.html\n\nBeyond that, you and/or some of the folks on your team should follow the \ndiscussions on this list to become familiar with the people and the ways \nof the community.\n\nHope this helps,\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sat, 6 May 2023 06:55:55 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Bancolombia Open Source Program Office - Proposal of contribution\n on lock inactive users" } ]
[ { "msg_contents": "Currently Window function nth_value is coded as following:\n\n\tnth = DatumGetInt32(WinGetFuncArgCurrent(winobj, 1, &isnull));\n\tif (isnull)\n\t\tPG_RETURN_NULL();\n\tconst_offset = get_fn_expr_arg_stable(fcinfo->flinfo, 1);\n\n\tif (nth <= 0)\n\t\tereport(ERROR,\n\t\t:\n\t\t:\n\nIs there any reason why argument 'nth' is not checked earlier?\nIMO, it is more natural \"if (nth <= 0)...\" is placed right after \"nth = DatumGetInt32...\".\n\nAttached is the patch which does this.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Sat, 06 May 2023 17:44:16 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Questionable coding in nth_value" }, { "msg_contents": "On Sat, May 6, 2023 at 4:44 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> Currently Window function nth_value is coded as following:\n>\n> nth = DatumGetInt32(WinGetFuncArgCurrent(winobj, 1, &isnull));\n> if (isnull)\n> PG_RETURN_NULL();\n> const_offset = get_fn_expr_arg_stable(fcinfo->flinfo, 1);\n>\n> if (nth <= 0)\n> ereport(ERROR,\n> :\n> :\n>\n> Is there any reason why argument 'nth' is not checked earlier?\n> IMO, it is more natural \"if (nth <= 0)...\" is placed right after \"nth =\n> DatumGetInt32...\".\n>\n> Attached is the patch which does this.\n\n\nHmm, shouldn't we check if the argument of nth_value is null before we\ncheck if it is greater than zero? 
So maybe we need to do this.\n\n--- a/src/backend/utils/adt/windowfuncs.c\n+++ b/src/backend/utils/adt/windowfuncs.c\n@@ -698,13 +698,14 @@ window_nth_value(PG_FUNCTION_ARGS)\n    nth = DatumGetInt32(WinGetFuncArgCurrent(winobj, 1, &isnull));\n    if (isnull)\n        PG_RETURN_NULL();\n-   const_offset = get_fn_expr_arg_stable(fcinfo->flinfo, 1);\n\n    if (nth <= 0)\n        ereport(ERROR,\n                (errcode(ERRCODE_INVALID_ARGUMENT_FOR_NTH_VALUE),\n                 errmsg(\"argument of nth_value must be greater than\nzero\")));\n\n+   const_offset = get_fn_expr_arg_stable(fcinfo->flinfo, 1);\n+\n    result = WinGetFuncArgInFrame(winobj, 0,\n                                  nth - 1, WINDOW_SEEK_HEAD, const_offset,\n                                  &isnull, NULL);\n\nThanks\nRichard", "msg_date": "Sat, 6 May 2023 17:02:49 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questionable coding in nth_value" }, { "msg_contents": "> On Sat, May 6, 2023 at 4:44 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\r\n> \r\n>> Currently Window function nth_value is coded as following:\r\n>>\r\n>> nth = DatumGetInt32(WinGetFuncArgCurrent(winobj, 1, &isnull));\r\n>> if (isnull)\r\n>> PG_RETURN_NULL();\r\n>> const_offset = get_fn_expr_arg_stable(fcinfo->flinfo, 1);\r\n>>\r\n>> if (nth <= 0)\r\n>> ereport(ERROR,\r\n>> :\r\n>> :\r\n>>\r\n>> Is there any reason why argument 'nth' is not checked earlier?\r\n>> IMO, it is more natural \"if (nth <= 0)...\" is placed right after \"nth =\r\n>> DatumGetInt32...\".\r\n>>\r\n>> Attached is the patch which does this.\r\n> \r\n> \r\n> Hmm, shouldn't we check if the argument of nth_value is null before we\r\n> check if it is greater than zero? That makes sense. 
I thought since this function is marked as strict,\r\nit would not be called if argument is NULL, but I was wrong.\r\n\r\nBest regards,\r\n--\r\nTatsuo Ishii\r\nSRA OSS LLC\r\nEnglish: http://www.sraoss.co.jp/index_en/\r\nJapanese:http://www.sraoss.co.jp\r\n", "msg_date": "Sat, 06 May 2023 19:04:30 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Questionable coding in nth_value" } ]
[ { "msg_contents": "Hi, hackers\n\nWhen executing \\du, you can see duplicates of the same role in 'member of'.\nThis happens when admin | inherit | set options are granted by another role.\n\n---\npostgres=# create role role_a login createrole;\nCREATE ROLE\npostgres=# \\du\n                                    List of roles\n  Role name | Attributes                         | Member of\n-----------+------------------------------------------------------------+-----------\n  role_a    | Create role                                                \n| {}\n  shinya    | Superuser, Create role, Create DB, Replication, Bypass RLS \n| {}\n\npostgres=# set role role_a;\nSET\npostgres=> create role role_b;\nCREATE ROLE\npostgres=> \\du\n                                    List of roles\n  Role name | Attributes                         | Member of\n-----------+------------------------------------------------------------+-----------\n  role_a    | Create role                                                \n| {role_b}\n  role_b    | Cannot login                                               \n| {}\n  shinya    | Superuser, Create role, Create DB, Replication, Bypass RLS \n| {}\n\npostgres=> grant role_b to role_a;\nGRANT ROLE\npostgres=> \\du\n                                       List of roles\n  Role name | Attributes                         |    Member of\n-----------+------------------------------------------------------------+-----------------\n  role_a    | Create role                                                \n| {role_b,role_b}\n  role_b    | Cannot login                                               \n| {}\n  shinya    | Superuser, Create role, Create DB, Replication, Bypass RLS \n| {}\n\npostgres=> select rolname, oid from pg_roles where rolname = 'role_b';\n  rolname |  oid\n---------+-------\n  role_b  | 16401\n(1 row)\n\npostgres=> select * from pg_auth_members where roleid = 16401;\n   oid  | roleid | member | grantor | admin_option | inherit_option | 
\nset_option\n-------+--------+--------+---------+--------------+----------------+------------\n  16402 |  16401 |  16400 |      10 | t            | f | f\n  16403 |  16401 |  16400 |   16400 | f            | t | t\n(2 rows)\n---\n\n\nAttached patch resolves this issue.\nWhat do you think?\n\nRegards,\nShinya Kato", "msg_date": "Sat, 6 May 2023 22:37:04 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Remove duplicates of membership from results of \\du" }, { "msg_contents": "On Sat, May 6, 2023 at 6:37 AM Shinya Kato <Shinya11.Kato@oss.nttdata.com>\nwrote:\n\n> Hi, hackers\n>\n> When executing \\du, you can see duplicates of the same role in 'member of'.\n> This happens when admin | inherit | set options are granted by another\n> role.\n>\n\nThere is already an ongoing patch discussing the needed changes to psql \\du\nbecause of this change in tracking membership grant attributes.\n\nhttps://www.postgresql.org/message-id/flat/b9be2d0e-a9bc-0a30-492f-a4f68e4f7740%40postgrespro.ru\n\nDavid J.", "msg_date": "Sat, 6 May 2023 10:52:41 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove duplicates of membership from results of \\du" } ]
[ { "msg_contents": "Hi,\r\n\r\nAttached is a draft of the release announcement for the upcoming update \r\nrelease on May 11, 2023.\r\n\r\nPlease provide any suggestions, corrections, or notable omissions no \r\nlater than 2023-05-11 0:00 AoE.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Sat, 6 May 2023 23:37:07 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "2023-05-11 release announcement draft" }, { "msg_contents": "Op 5/7/23 om 05:37 schreef Jonathan S. Katz:\n> Attached is a draft of the release announcement for the upcoming update \n> release on May 11, 2023.\n> \n> Please provide any suggestions, corrections, or notable omissions no \n> later than 2023-05-11 0:00 AoE.\n\n'leak in within a' should be\n'leak within a'\n\nErik\n\n\n", "msg_date": "Sun, 7 May 2023 07:09:04 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: 2023-05-11 release announcement draft" }, { "msg_contents": "On 5/7/23 1:09 AM, Erik Rijkers wrote:\r\n> Op 5/7/23 om 05:37 schreef Jonathan S. Katz:\r\n>> Attached is a draft of the release announcement for the upcoming \r\n>> update release on May 11, 2023.\r\n>>\r\n>> Please provide any suggestions, corrections, or notable omissions no \r\n>> later than 2023-05-11 0:00 AoE.\r\n> \r\n> 'leak in within a'  should be\r\n> 'leak within a'\r\n\r\nThanks for that catch! Revision attached.\r\n\r\nJonathan", "msg_date": "Sun, 7 May 2023 20:48:25 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2023-05-11 release announcement draft" }, { "msg_contents": "Thanks for working on this.\n\nOn Sun, 7 May 2023 at 15:37, Jonathan S. 
Katz <jkatz@postgresql.org> wrote:\n> Please provide any suggestions, corrections, or notable omissions no\n> later than 2023-05-11 0:00 AoE.\n\nFor this one:\n\n> * Fix partition pruning logic for partitioning on boolean columns when using a\n> `IS NOT TRUE` condition.\n\nJust to explain this a little further: Effectively the code thought\n\"IS NOT TRUE\" meant \"IS FALSE\", and \"IS NOT FALSE\" meant \"IS TRUE\".\nThat was wrong because each of the NOT cases should have allowed\nNULLs.\n\nMaybe the wording can be adjusted to mention NULLs. Maybe something\nalong the lines of:\n\n* Fix partition pruning bug with the boolean \"IS NOT TRUE\" and \"IS NOT\nFALSE\" conditions. NULL partitions were accidentally pruned when they\nshouldn't have been.\n\nDavid\n\n\n", "msg_date": "Mon, 8 May 2023 14:34:02 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2023-05-11 release announcement draft" }, { "msg_contents": "On 5/7/23 10:34 PM, David Rowley wrote:\r\n\r\n> * Fix partition pruning bug with the boolean \"IS NOT TRUE\" and \"IS NOT\r\n> FALSE\" conditions. NULL partitions were accidentally pruned when they\r\n> shouldn't have been.\r\n\r\nThanks for the additional explanation. I took your suggestion verbatim.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 10 May 2023 20:42:40 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2023-05-11 release announcement draft" } ]
[ { "msg_contents": "Hi\r\n\r\nI am writing to propose a prototype implementation that can greatly enhance the C/C++ interoperability in PostgreSQL. The implementation involves converting PG longjmp to [force unwind](https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html#base-throw), which triggers the destruction of local variables on the stack. Additionally, it converts throw statements that are not associated with catch to PG longjmp, thereby avoiding the call to terminate.\r\n\r\nThe proposed implementation can significantly improve the interoperability between C and C++ code in PostgreSQL. It allows for seamless integration of C++ code with PostgreSQL, without the need for complex workarounds or modifications to the existing codebase.\r\n\r\nI have submitted the implementation on [GitHub](https://github.com/postgres/postgres/commit/1a9a2790430f256d9d0cc371249e43769d93eb8e#diff-6b6034caa00ddf38f641cbd10d5a5d1bb7135f8b23c5a879e9703bd11bd8240f). I would appreciate it if you could review the implementation and provide feedback.\r\n\r\nThank you for your time and consideration.\r\n\r\nBest regards,\r\n\r\n盏一", "msg_date": "Sun, 7 May 2023 14:48:14 +0800", "msg_from": "\"盏一\" <w@hidva.com>", "msg_from_op": true, "msg_subject": "Proposal for Prototype Implementation to Enhance C/C++\n Interoperability in PostgreSQL" }, { "msg_contents": "\"盏一\" <w@hidva.com> writes:\n> The proposed implementation can significantly improve the interoperability between C and C++ code in PostgreSQL. It allows for seamless integration of C++ code with PostgreSQL, without the need for complex workarounds or modifications to the existing codebase.\n\nThat'd be nice to have, certainly ...\n\n> I have submitted the implementation on [GitHub](https://github.com/postgres/postgres/commit/1a9a2790430f256d9d0cc371249e43769d93eb8e#diff-6b6034caa00ddf38f641cbd10d5a5d1bb7135f8b23c5a879e9703bd11bd8240f). I would appreciate it if you could review the implementation and provide feedback.\n\n... 
On top of that, doesn't this\nrequire us to move our minimum language requirement to C++-something?\nWe just barely got done deciding C99 was okay to use.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 May 2023 11:18:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal for Prototype Implementation to Enhance C/C++\n Interoperability in PostgreSQL" }, { "msg_contents": "&gt; It seems extremely specific to one particular C++ implementation\r\n\r\n\r\nTo perform a force unwind during longjmp, the _Unwind_ForcedUnwind function is used. This function is defined in the [Itanium C++ ABI Standard](https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html#base-throw), which is followed by all C++ implementations. Additionally, the glibc [nptl/unwind.c](https://elixir.bootlin.com/glibc/latest/source/nptl/unwind.c#L130) file shows that on all platforms, pthread_exit is also implemented using _Unwind_ForcedUnwind.\r\n\r\n\r\nFurthermore, the Itanium C++ ABI specification also defines _Unwind_RaiseException as the entry point for all C++ exceptions thrown.\r\n\r\n\r\n&gt; you've thrown in a new dependency on pthreads\r\n\r\n\r\n\r\nThe reason for the dependence on pthread is due to the overloading of _Unwind_RaiseException, which serves as the entry point for all C++ throwing exceptions. Some third-party C++ libraries may create threads internally and throw exceptions.\r\n\r\n\r\nOverloading _Unwind_RaiseException is done to convert uncaught exceptions into elog(ERROR). If we require that all exceptions must be caught, we can remove the overloading of _Unwind_RaiseException and all pthread dependencies.\r\n\r\n\r\nThe overloading of _Unwind_RaiseException is just a fallback measure to prevent uncaught exceptions from terminating the process. 
In our code, this path is rarely taken, and once we encounter an exception that is not caught, we will fix the code to catch the exception.\r\n\r\n\r\n&gt; doesn't this require us to move our minimum language requirement to C++-something?\r\n\r\n\r\n\r\nNo, all code has no dependency on C++.\r\n&nbsp;\r\n------------------&nbsp;Original&nbsp;------------------\r\nFrom: &nbsp;\"tgl@sss.pgh.pa.us\"<tgl@sss.pgh.pa.us&gt;;\r\nDate: &nbsp;Sun, May 7, 2023 11:35 PM\r\nTo: &nbsp;\"盏一\"<w@hidva.com&gt;; \r\nCc: &nbsp;\"pgsql-hackers\"<pgsql-hackers@postgresql.org&gt;; \r\nSubject: &nbsp;Re: Proposal for Prototype Implementation to Enhance C/C++ Interoperability in PostgreSQL\r\n\r\n&nbsp;\r\n\"=?utf-8?B?55uP5LiA?=\" <w@hidva.com&gt; writes:\r\n&gt; The proposed implementation can significantly improve the interoperability between C and C++ code in PostgreSQL. It allows for seamless integration of C++ code with PostgreSQL, without the need for complex workarounds or modifications to the existing codebase.\r\n\r\nThat'd be nice to have, certainly ...\r\n\r\n&gt; I have submitted the implementation on&amp;nbsp;[GitHub](https://github.com/postgres/postgres/commit/1a9a2790430f256d9d0cc371249e43769d93eb8e#diff-6b6034caa00ddf38f641cbd10d5a5d1bb7135f8b23c5a879e9703bd11bd8240f). I would appreciate it if you could review the implementation and provide feedback.\r\n\r\n... 
but I think this patch has no hope of being adequately portable.\r\nIt seems extremely specific to one particular C++ implementation\r\n(unless you can show that every single thing you've used here is\r\nin the C++ standard), and then for good measure you've thrown in\r\na new dependency on pthreads.&nbsp; On top of that, doesn't this\r\nrequire us to move our minimum language requirement to C++-something?\r\nWe just barely got done deciding C99 was okay to use.\r\n\r\n regards, tom lane\n> It seems extremely specific to one particular C++ implementationTo perform a force unwind during longjmp, the _Unwind_ForcedUnwind function is used. This function is defined in the [Itanium C++ ABI Standard](https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html#base-throw), which is followed by all C++ implementations. Additionally, the glibc [nptl/unwind.c](https://elixir.bootlin.com/glibc/latest/source/nptl/unwind.c#L130) file shows that on all platforms, pthread_exit is also implemented using _Unwind_ForcedUnwind.Furthermore, the Itanium C++ ABI specification also defines _Unwind_RaiseException as the entry point for all C++ exceptions thrown.> you've thrown in a new dependency on pthreadsThe reason for the dependence on pthread is due to the overloading of _Unwind_RaiseException, which serves as the entry point for all C++ throwing exceptions. Some third-party C++ libraries may create threads internally and throw exceptions.Overloading _Unwind_RaiseException is done to convert uncaught exceptions into elog(ERROR). If we require that all exceptions must be caught, we can remove the overloading of _Unwind_RaiseException and all pthread dependencies.The overloading of _Unwind_RaiseException is just a fallback measure to prevent uncaught exceptions from terminating the process. 
In our code, this path is rarely taken, and once we encounter an exception that is not caught, we will fix the code to catch the exception.> doesn't this require us to move our minimum language requirement to C++-something?No, all code has no dependency on C++. ------------------ Original ------------------From:  \"tgl@sss.pgh.pa.us\"<tgl@sss.pgh.pa.us>;Date:  Sun, May 7, 2023 11:35 PMTo:  \"盏一\"<w@hidva.com>; Cc:  \"pgsql-hackers\"<pgsql-hackers@postgresql.org>; Subject:  Re: Proposal for Prototype Implementation to Enhance C/C++ Interoperability in PostgreSQL \"=?utf-8?B?55uP5LiA?=\" <w@hidva.com> writes:> The proposed implementation can significantly improve the interoperability between C and C++ code in PostgreSQL. It allows for seamless integration of C++ code with PostgreSQL, without the need for complex workarounds or modifications to the existing codebase.That'd be nice to have, certainly ...> I have submitted the implementation on&nbsp;[GitHub](https://github.com/postgres/postgres/commit/1a9a2790430f256d9d0cc371249e43769d93eb8e#diff-6b6034caa00ddf38f641cbd10d5a5d1bb7135f8b23c5a879e9703bd11bd8240f). I would appreciate it if you could review the implementation and provide feedback.... but I think this patch has no hope of being adequately portable.It seems extremely specific to one particular C++ implementation(unless you can show that every single thing you've used here isin the C++ standard), and then for good measure you've thrown ina new dependency on pthreads.  On top of that, doesn't thisrequire us to move our minimum language requirement to C++-something?We just barely got done deciding C99 was okay to use. regards, tom lane", "msg_date": "Mon, 8 May 2023 10:38:29 +0800", "msg_from": "\"=?utf-8?B?55uP5LiA?=\" <w@hidva.com>", "msg_from_op": true, "msg_subject": "Re: Proposal for Prototype Implementation to Enhance C/C++\n Interoperability in PostgreSQL" }, { "msg_contents": "(Sorry, there was a problem with the format of the previous email content. 
I will send it in plain text format this time\r\n\r\n> It seems extremely specific to one particular C++ implementation\r\n\r\nTo perform a force unwind during longjmp, the _Unwind_ForcedUnwind function is used. This function is defined in the [Itanium C++ ABI Standard](https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html#base-throw), which is followed by all C++ implementations. Additionally, the glibc [nptl/unwind.c](https://elixir.bootlin.com/glibc/latest/source/nptl/unwind.c#L130) file shows that on all platforms, pthread_exit is also implemented using _Unwind_ForcedUnwind.\r\n\r\nFurthermore, the Itanium C++ ABI specification also defines _Unwind_RaiseException as the entry point for all C++ exceptions thrown.\r\n\r\n> you've thrown in a new dependency on pthreads\r\n\r\nThe reason for the dependence on pthread is due to the overloading of _Unwind_RaiseException, which serves as the entry point for all C++ throwing exceptions. Some third-party C++ libraries may create threads internally and throw exceptions.\r\n\r\nOverloading _Unwind_RaiseException is done to convert uncaught exceptions into elog(ERROR). If we require that all exceptions must be caught, we can remove the overloading of _Unwind_RaiseException and all pthread dependencies.\r\n\r\nThe overloading of _Unwind_RaiseException is just a fallback measure to prevent uncaught exceptions from terminating the process. 
In our code, this path is rarely taken, and once we encounter an exception that is not caught, we will fix the code to catch the exception.\r\n\r\n> doesn't this require us to move our minimum language requirement to C++-something?\r\n\r\nNo, all code has no dependency on C++.\r\n\r\nregards, 盏一", "msg_date": "Mon, 8 May 2023 10:56:51 +0800", "msg_from": "\"=?utf-8?B?55uP5LiA?=\" <w@hidva.com>", "msg_from_op": true, "msg_subject": "Re: Proposal for Prototype Implementation to Enhance C/C++\n Interoperability in PostgreSQL" }, { "msg_contents": "On 08.05.23 04:38, 盏一 wrote:\n> > It seems extremely specific to one particular C++ implementation\n> \n> To perform a force unwind during longjmp, the _Unwind_ForcedUnwind \n> function is used. This function is defined in the [Itanium C++ ABI \n> Standard](https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html#base-throw), which is followed by all C++ implementations. Additionally, the glibc [nptl/unwind.c](https://elixir.bootlin.com/glibc/latest/source/nptl/unwind.c#L130) file shows that on all platforms, pthread_exit is also implemented using _Unwind_ForcedUnwind.\n> \n> Furthermore, the Itanium C++ ABI specification also defines \n> _Unwind_RaiseException as the entry point for all C++ exceptions thrown.\n\nI ran your patch through Cirrus CI, and it passed on Linux but failed on \nFreeBSD, macOS, and Windows. You should fix that if you want to \nalleviate the concerns about the portability of this approach.\n\n\n\n", "msg_date": "Mon, 8 May 2023 18:24:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Proposal for Prototype Implementation to Enhance C/C++\n Interoperability in PostgreSQL" }, { "msg_contents": "I apologize for my previous hasty conclusion. 
I have conducted further testing on different platforms and would like to share my findings.\r\n\r\n> FreeBSD\r\n\r\nBased on my tests, it appears that FreeBSD follows the Itanium C++ ABI specification. The previous test failed because the C++ compiler was not used when linking libpgsjlj.so.\r\n\r\n> macOS, M1\r\n\r\nMy tests show that macOS M1 roughly follows the Itanium C++ ABI specification, with only slight differences, such as the parameters accepted by the _Unwind_Stop_Fn function.\r\n\r\n> macOS, x86\r\n\r\nI don't have the resources to do the testing, but from a code perspective, it appears that macOS x86 follows the Itanium C++ ABI specification.\r\n\r\n> Windows\r\n\r\nIt seems that Windows does not follow the Itanium C++ ABI specification at all. If we compile the program using the `/EHsc` option, longjmp will also trigger forced unwinding. However, unlike the Itanium C++ ABI, the forced unwinding triggered here cannot be captured by a C++ catch statement.", "msg_date": "Tue, 9 May 2023 22:12:28 +0800", "msg_from": "\"=?utf-8?B?55uP5LiA?=\" <w@hidva.com>", "msg_from_op": true, "msg_subject": "Re: Proposal for Prototype Implementation to Enhance C/C++\n Interoperability in PostgreSQL" } ]
[ { "msg_contents": "Hi,\n\nWe call pgstat_drop_subscription() at the end of DropSubscription()\nbut we could leave from this function earlier e.g. when no slot is\nassociated with the subscription. In this case, the statistics entry\nfor the subscription remains. To fix it, I think we need to call it\nearlier, just after removing the catalog tuple. There is a chance the\ntransaction dropping the subscription fails due to network error etc\nbut we don't need to worry about it as reporting the subscription drop\nis transactional.\n\nI've attached the patch. Feedback is very welcome.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 8 May 2023 16:23:15 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Subscription statistics are not dropped at DROP SUBSCRIPTION in some\n cases" }, { "msg_contents": "On Mon, May 08, 2023 at 04:23:15PM +0900, Masahiko Sawada wrote:\n> We call pgstat_drop_subscription() at the end of DropSubscription()\n> but we could leave from this function earlier e.g. when no slot is\n> associated with the subscription. In this case, the statistics entry\n> for the subscription remains. To fix it, I think we need to call it\n> earlier, just after removing the catalog tuple. There is a chance the\n> transaction dropping the subscription fails due to network error etc\n> but we don't need to worry about it as reporting the subscription drop\n> is transactional.\n\nLooks reasonable to me. 
IIUC calling pgstat_drop_subscription() earlier\nmakes no real difference (besides avoiding this bug) because it is using\npgstat_drop_transactional() behind the scenes.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 May 2023 17:07:21 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "Dear Sawada-san,\r\n\r\nThank you for the patch! I confirmed that the problem you raised could\r\noccur on HEAD, and the test you added could reproduce that. When the stats\r\nentry has been removed but pg_stat_get_subscription_stats() is called, the returned\r\nvalues are set to 0x0.\r\nAdditionally, I have checked other pgstat_drop_* functions, and I could not find\r\nany similar problems.\r\n\r\nA comment:\r\n\r\n```\r\n+ /*\r\n+ * Tell the cumulative stats system that the subscription is getting\r\n+ * dropped.\r\n+ */\r\n+ pgstat_drop_subscription(subid);\r\n```\r\n\r\nIsn't it better to write down what you said as a comment? Or is it quite trivial?\r\n\r\n> There is a chance the\r\n> transaction dropping the subscription fails due to network error etc\r\n> but we don't need to worry about it as reporting the subscription drop\r\n> is transactional.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 9 May 2023 04:51:05 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "Hi,\n\nMasahiko Sawada <sawada.mshk@gmail.com>, 8 May 2023 Pzt, 10:24 tarihinde\nşunu yazdı:\n\n> I've attached the patch. Feedback is very welcome.\n>\n\nThanks for the patch, nice catch.\nI can confirm that the issue exists on HEAD and gets resolved by this\npatch. 
Also it looks like stats are really not affected if transaction\nfails for some reason, as you explained.\nIMO, the patch will be OK after commit message is added.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 10 May 2023 14:58:26 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "On Wed, May 10, 2023 at 8:58 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Masahiko Sawada <sawada.mshk@gmail.com>, 8 May 2023 Pzt, 10:24 tarihinde şunu yazdı:\n>>\n>> I've attached the patch. Feedback is very welcome.\n>\n>\n> Thanks for the patch, nice catch.\n> I can confirm that the issue exists on HEAD and gets resolved by this patch. Also it looks like stats are really not affected if transaction fails for some reason, as you explained.\n> IMO, the patch will be OK after commit message is added.\n\nThank you for reviewing the patch. 
I'll push the patch early next\nweek, barring any objections.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 11 May 2023 17:12:38 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "On Thu, May 11, 2023 at 5:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, May 10, 2023 at 8:58 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Masahiko Sawada <sawada.mshk@gmail.com>, 8 May 2023 Pzt, 10:24 tarihinde şunu yazdı:\n> >>\n> >> I've attached the patch. Feedback is very welcome.\n> >\n> >\n> > Thanks for the patch, nice catch.\n> > I can confirm that the issue exists on HEAD and gets resolved by this patch. Also it looks like stats are really not affected if transaction fails for some reason, as you explained.\n> > IMO, the patch will be OK after commit message is added.\n>\n> Thank you for reviewing the patch. I'll push the patch early next\n> week, barring any objections.\n\nAfter thinking more about it, I realized that this is not a problem\nspecific to HEAD. ISTM the problem is that by commit 7b64e4b3, we drop\nthe stats entry of subscription that is not associated with a\nreplication slot for apply worker, but we missed the case where the\nsubscription is not associated with both replication slots for apply\nand tablesync. So IIUC we should backpatch it down to 15.\n\nIn pg15, since we don't create the subscription stats at CREATE\nSUBSCRIPTION time but do when the first error is reported, we cannot\nrely on the regression test suite. Also, to check that the subscription\nstats entry is surely removed, using pg_stat_have_stats() is clearer. 
So I\nadded a test case to TAP tests (026_stats.pl).\n\nOn Tue, May 9, 2023 at 1:51 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> A comment:\n>\n> ```\n> + /*\n> + * Tell the cumulative stats system that the subscription is getting\n> + * dropped.\n> + */\n> + pgstat_drop_subscription(subid);\n> ```\n>\n> Isn't it better to write down something you said as comment? Or is it quite trivial?\n>\n> > There is a chance the\n> > transaction dropping the subscription fails due to network error etc\n> > but we don't need to worry about it as reporting the subscription drop\n> > is transactional.\n\nI'm not sure it's worth mentioning as we don't have such a comment\naround other pgstat_drop_XXX functions.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 16 May 2023 23:30:05 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "On Tue, May 16, 2023 at 8:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, May 11, 2023 at 5:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> After thinking more about it, I realized that this is not a problem\n> specific to HEAD. ISTM the problem is that by commit 7b64e4b3, we drop\n> the stats entry of subscription that is not associated with a\n> replication slot for apply worker, but we missed the case where the\n> subscription is not associated with both replication slots for apply\n> and tablesync. 
So IIUC we should backpatch it down to 15.\n>\n\nI agree that it should be backpatched to 15.\n\n> Since in pg15, since we don't create the subscription stats at CREATE\n> SUBSCRIPTION time but do when the first error is reported,\n>\n\nAFAICS, the call to pgstat_create_subscription() is present in\nCreateSubscription() in 15 as well, so, I don't get your point.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 17 Jun 2023 15:14:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "On Sat, Jun 17, 2023 at 6:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 16, 2023 at 8:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, May 11, 2023 at 5:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> >\n> > After thinking more about it, I realized that this is not a problem\n> > specific to HEAD. ISTM the problem is that by commit 7b64e4b3, we drop\n> > the stats entry of subscription that is not associated with a\n> > replication slot for apply worker, but we missed the case where the\n> > subscription is not associated with both replication slots for apply\n> > and tablesync. So IIUC we should backpatch it down to 15.\n> >\n>\n> I agree that it should be backpatched to 15.\n>\n> > Since in pg15, since we don't create the subscription stats at CREATE\n> > SUBSCRIPTION time but do when the first error is reported,\n> >\n>\n> AFAICS, the call to pgstat_create_subscription() is present in\n> CreateSubscription() in 15 as well, so, I don't get your point.\n\nIIUC in 15, pgstat_create_subscription() doesn't create the stats\nentry. 
See commit e0b01429590.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Jun 2023 10:19:24 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "On Mon, Jun 19, 2023 at 6:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Jun 17, 2023 at 6:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, May 16, 2023 at 8:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, May 11, 2023 at 5:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > >\n> > > After thinking more about it, I realized that this is not a problem\n> > > specific to HEAD. ISTM the problem is that by commit 7b64e4b3, we drop\n> > > the stats entry of subscription that is not associated with a\n> > > replication slot for apply worker, but we missed the case where the\n> > > subscription is not associated with both replication slots for apply\n> > > and tablesync. So IIUC we should backpatch it down to 15.\n> > >\n> >\n> > I agree that it should be backpatched to 15.\n> >\n> > > Since in pg15, since we don't create the subscription stats at CREATE\n> > > SUBSCRIPTION time but do when the first error is reported,\n> > >\n> >\n> > AFAICS, the call to pgstat_create_subscription() is present in\n> > CreateSubscription() in 15 as well, so, I don't get your point.\n>\n> IIUC in 15, pgstat_create_subscription() doesn't create the stats\n> entry. See commit e0b01429590.\n>\n\nThanks for the clarification. 
Your changes look good to me though I\nhaven't tested it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 19 Jun 2023 09:07:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "On Mon, Jun 19, 2023 at 12:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 19, 2023 at 6:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sat, Jun 17, 2023 at 6:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, May 16, 2023 at 8:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Thu, May 11, 2023 at 5:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > >\n> > > > After thinking more about it, I realized that this is not a problem\n> > > > specific to HEAD. ISTM the problem is that by commit 7b64e4b3, we drop\n> > > > the stats entry of subscription that is not associated with a\n> > > > replication slot for apply worker, but we missed the case where the\n> > > > subscription is not associated with both replication slots for apply\n> > > > and tablesync. So IIUC we should backpatch it down to 15.\n> > > >\n> > >\n> > > I agree that it should be backpatched to 15.\n> > >\n> > > > Since in pg15, since we don't create the subscription stats at CREATE\n> > > > SUBSCRIPTION time but do when the first error is reported,\n> > > >\n> > >\n> > > AFAICS, the call to pgstat_create_subscription() is present in\n> > > CreateSubscription() in 15 as well, so, I don't get your point.\n> >\n> > IIUC in 15, pgstat_create_subscription() doesn't create the stats\n> > entry. See commit e0b01429590.\n> >\n>\n> Thanks for the clarification. Your changes look good to me though I\n> haven't tested it.\n\nThanks for reviewing the patch. 
Pushed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 5 Jul 2023 15:53:13 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "On Wednesday, July 5, 2023 2:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n\r\nHi,\r\n\r\n> Thanks for reviewing the patch. Pushed.\r\n\r\nMy colleague Vignesh noticed that the newly added test cases were failing in BF animal sungazer[1].\r\n\r\nThe test failed to drop the slot which is active on publisher.\r\n\r\n--error-log--\r\nThis failure is because pg_drop_replication_slot fails with \"replication slot \"test_tab2_sub\" is active for PID 55771638\":\r\n2023-09-02 09:00:04.806 UTC [12910732:4] 026_stats.pl LOG: statement: SELECT pg_drop_replication_slot('test_tab2_sub')\r\n2023-09-02 09:00:04.807 UTC [12910732:5] 026_stats.pl ERROR: replication slot \"test_tab2_sub\" is active for PID 55771638\r\n2023-09-02 09:00:04.807 UTC [12910732:6] 026_stats.pl STATEMENT: SELECT pg_drop_replication_slot('test_tab2_sub')\r\n-------------\r\n\r\nI think the reason is that the test DISABLEd the subscription before dropping the\r\nslot, while \"ALTER SUBSCRIPTION DISABLE\" doesn't wait for the walsender to\r\nrelease the slot, so it's possible that the walsender is still alive when calling\r\npg_drop_replication_slot() to drop the slot on publisher (pg_drop_xxxslot() \r\ndoesn't wait for slot to be released).\r\n\r\nI think we can wait for the slot to become inactive before dropping like:\r\n\t$node_primary->poll_query_until('otherdb',\r\n\t\t\"SELECT NOT EXISTS (SELECT 1 FROM pg_replication_slots WHERE active_pid IS NOT NULL)\"\r\n\t)\r\n\r\nOr we can just not drop the slot as it’s the last testcase.\r\n\r\nAnother thing might be worth considering is we can add one parameter in\r\npg_drop_replication_slot() to let user control whether to 
wait or not, and the\r\ntest can be fixed as well by passing nowait=false to the func.\r\n\r\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2023-09-02%2002%3A17%3A01\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Mon, 4 Sep 2023 12:38:46 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 4, 2023 at 9:38 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, July 5, 2023 2:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n> > Thanks for reviewing the patch. Pushed.\n>\n> My colleague Vignesh noticed that the newly added test cases were failing in BF animal sungazer[1].\n\nThank you for reporting!\n\n>\n> The test failed to drop the slot which is active on publisher.\n>\n> --error-log--\n> This failure is because pg_drop_replication_slot fails with \"replication slot \"test_tab2_sub\" is active for PID 55771638\":\n> 2023-09-02 09:00:04.806 UTC [12910732:4] 026_stats.pl LOG: statement: SELECT pg_drop_replication_slot('test_tab2_sub')\n> 2023-09-02 09:00:04.807 UTC [12910732:5] 026_stats.pl ERROR: replication slot \"test_tab2_sub\" is active for PID 55771638\n> 2023-09-02 09:00:04.807 UTC [12910732:6] 026_stats.pl STATEMENT: SELECT pg_drop_replication_slot('test_tab2_sub')\n> -------------\n>\n> I the reason is that the test DISABLEd the subscription before dropping the\n> slot, while \"ALTER SUBSCRIPTION DISABLE\" doesn't wait for the walsender to\n> release the slot, so it's possible that the walsender is still alive when calling\n> pg_drop_replication_slot() to drop the slot on publisher(pg_drop_xxxslot()\n> doesn't wait for slot to be released).\n\nI agree with your analysis.\n\n>\n> I think we can wait for the slot to become inactive before dropping like:\n> $node_primary->poll_query_until('otherdb',\n> \"SELECT NOT 
EXISTS (SELECT 1 FROM pg_replication_slots WHERE active_pid IS NOT NULL)\"\n> )\n>\n\nI prefer this approach but it would be better to specify the slot name\nin the query.\n\n> Or we can just don't drop the slot as it’s the last testcase.\n\nSince we might add other tests after that in the future, I think it's\nbetter to drop the replication slot (and subscription).\n\n>\n> Another thing might be worth considering is we can add one parameter in\n> pg_drop_replication_slot() to let user control whether to wait or not, and the\n> test can be fixed as well by passing nowait=false to the func.\n\nWhile it could be useful in general we cannot use this approach for\nthis issue since it cannot be backpatched to older branches. We might\nwant to discuss it on a new thread.\n\nI've attached a draft patch to fix this issue. Feedback is very welcome.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 4 Sep 2023 23:42:19 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "On Monday, September 4, 2023 10:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n\r\nHi,\r\n\r\n> On Mon, Sep 4, 2023 at 9:38 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > On Wednesday, July 5, 2023 2:53 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> >\r\n> > > Thanks for reviewing the patch. 
Pushed.\r\n> >\r\n> > My colleague Vignesh noticed that the newly added test cases were failing in\r\n> BF animal sungazer[1].\r\n> \r\n> Thank you for reporting!\r\n> \r\n> >\r\n> > The test failed to drop the slot which is active on publisher.\r\n> >\r\n> > --error-log--\r\n> > This failure is because pg_drop_replication_slot fails with \"replication slot\r\n> \"test_tab2_sub\" is active for PID 55771638\":\r\n> > 2023-09-02 09:00:04.806 UTC [12910732:4] 026_stats.pl LOG: statement:\r\n> > SELECT pg_drop_replication_slot('test_tab2_sub')\r\n> > 2023-09-02 09:00:04.807 UTC [12910732:5] 026_stats.pl ERROR:\r\n> > replication slot \"test_tab2_sub\" is active for PID 55771638\r\n> > 2023-09-02 09:00:04.807 UTC [12910732:6] 026_stats.pl STATEMENT:\r\n> > SELECT pg_drop_replication_slot('test_tab2_sub')\r\n> > -------------\r\n> >\r\n> > I the reason is that the test DISABLEd the subscription before\r\n> > dropping the slot, while \"ALTER SUBSCRIPTION DISABLE\" doesn't wait for\r\n> > the walsender to release the slot, so it's possible that the walsender\r\n> > is still alive when calling\r\n> > pg_drop_replication_slot() to drop the slot on\r\n> > publisher(pg_drop_xxxslot() doesn't wait for slot to be released).\r\n> \r\n> I agree with your analysis.\r\n> \r\n> >\r\n> > I think we can wait for the slot to become inactive before dropping like:\r\n> > $node_primary->poll_query_until('otherdb',\r\n> > \"SELECT NOT EXISTS (SELECT 1 FROM pg_replication_slots\r\n> WHERE active_pid IS NOT NULL)\"\r\n> > )\r\n> >\r\n> \r\n> I prefer this approach but it would be better to specify the slot name in the\r\n> query.\r\n> \r\n> > Or we can just don't drop the slot as it’s the last testcase.\r\n> \r\n> Since we might add other tests after that in the future, I think it's better to drop\r\n> the replication slot (and subscription).\r\n> \r\n> >\r\n> > Another thing might be worth considering is we can add one parameter\r\n> > in\r\n> > pg_drop_replication_slot() to let user control 
whether to wait or not,\r\n> > and the test can be fixed as well by passing nowait=false to the func.\r\n> \r\n> While it could be useful in general we cannot use this approach for this issue\r\n> since it cannot be backpatched to older branches. We might want to discuss it\r\n> on a new thread.\r\n> \r\n> I've attached a draft patch to fix this issue. Feedback is very welcome.\r\n\r\nThanks, it looks good to me.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Tue, 5 Sep 2023 02:31:59 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "On Tue, Sep 5, 2023 at 11:32 AM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, September 4, 2023 10:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n> > On Mon, Sep 4, 2023 at 9:38 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>\n> > wrote:\n> > >\n> > > On Wednesday, July 5, 2023 2:53 PM Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > >\n> > > > Thanks for reviewing the patch. 
Pushed.\n> > >\n> > > My colleague Vignesh noticed that the newly added test cases were failing in\n> > BF animal sungazer[1].\n> >\n> > Thank you for reporting!\n> >\n> > >\n> > > The test failed to drop the slot which is active on publisher.\n> > >\n> > > --error-log--\n> > > This failure is because pg_drop_replication_slot fails with \"replication slot\n> > \"test_tab2_sub\" is active for PID 55771638\":\n> > > 2023-09-02 09:00:04.806 UTC [12910732:4] 026_stats.pl LOG: statement:\n> > > SELECT pg_drop_replication_slot('test_tab2_sub')\n> > > 2023-09-02 09:00:04.807 UTC [12910732:5] 026_stats.pl ERROR:\n> > > replication slot \"test_tab2_sub\" is active for PID 55771638\n> > > 2023-09-02 09:00:04.807 UTC [12910732:6] 026_stats.pl STATEMENT:\n> > > SELECT pg_drop_replication_slot('test_tab2_sub')\n> > > -------------\n> > >\n> > > I the reason is that the test DISABLEd the subscription before\n> > > dropping the slot, while \"ALTER SUBSCRIPTION DISABLE\" doesn't wait for\n> > > the walsender to release the slot, so it's possible that the walsender\n> > > is still alive when calling\n> > > pg_drop_replication_slot() to drop the slot on\n> > > publisher(pg_drop_xxxslot() doesn't wait for slot to be released).\n> >\n> > I agree with your analysis.\n> >\n> > >\n> > > I think we can wait for the slot to become inactive before dropping like:\n> > > $node_primary->poll_query_until('otherdb',\n> > > \"SELECT NOT EXISTS (SELECT 1 FROM pg_replication_slots\n> > WHERE active_pid IS NOT NULL)\"\n> > > )\n> > >\n> >\n> > I prefer this approach but it would be better to specify the slot name in the\n> > query.\n> >\n> > > Or we can just don't drop the slot as it’s the last testcase.\n> >\n> > Since we might add other tests after that in the future, I think it's better to drop\n> > the replication slot (and subscription).\n> >\n> > >\n> > > Another thing might be worth considering is we can add one parameter\n> > > in\n> > > pg_drop_replication_slot() to let user control whether 
to wait or not,\n> > > and the test can be fixed as well by passing nowait=false to the func.\n> >\n> > While it could be useful in general we cannot use this approach for this issue\n> > since it cannot be backpatched to older branches. We might want to discuss it\n> > on a new thread.\n> >\n> > I've attached a draft patch to fix this issue. Feedback is very welcome.\n>\n> Thanks, it looks good to me.\n\nThank you for reviewing the patch.\n\nI'll push the attached patch to master, v16, and v15 tomorrow, barring\nany objections.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 7 Sep 2023 22:22:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" }, { "msg_contents": "On Thu, Sep 7, 2023 at 10:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Sep 5, 2023 at 11:32 AM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Monday, September 4, 2023 10:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > > On Mon, Sep 4, 2023 at 9:38 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>\n> > > wrote:\n> > > >\n> > > > On Wednesday, July 5, 2023 2:53 PM Masahiko Sawada\n> > > <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > > Thanks for reviewing the patch. 
Pushed.\n> > > >\n> > > > My colleague Vignesh noticed that the newly added test cases were failing in\n> > > BF animal sungazer[1].\n> > >\n> > > Thank you for reporting!\n> > >\n> > > >\n> > > > The test failed to drop the slot which is active on publisher.\n> > > >\n> > > > --error-log--\n> > > > This failure is because pg_drop_replication_slot fails with \"replication slot\n> > > \"test_tab2_sub\" is active for PID 55771638\":\n> > > > 2023-09-02 09:00:04.806 UTC [12910732:4] 026_stats.pl LOG: statement:\n> > > > SELECT pg_drop_replication_slot('test_tab2_sub')\n> > > > 2023-09-02 09:00:04.807 UTC [12910732:5] 026_stats.pl ERROR:\n> > > > replication slot \"test_tab2_sub\" is active for PID 55771638\n> > > > 2023-09-02 09:00:04.807 UTC [12910732:6] 026_stats.pl STATEMENT:\n> > > > SELECT pg_drop_replication_slot('test_tab2_sub')\n> > > > -------------\n> > > >\n> > > > I the reason is that the test DISABLEd the subscription before\n> > > > dropping the slot, while \"ALTER SUBSCRIPTION DISABLE\" doesn't wait for\n> > > > the walsender to release the slot, so it's possible that the walsender\n> > > > is still alive when calling\n> > > > pg_drop_replication_slot() to drop the slot on\n> > > > publisher(pg_drop_xxxslot() doesn't wait for slot to be released).\n> > >\n> > > I agree with your analysis.\n> > >\n> > > >\n> > > > I think we can wait for the slot to become inactive before dropping like:\n> > > > $node_primary->poll_query_until('otherdb',\n> > > > \"SELECT NOT EXISTS (SELECT 1 FROM pg_replication_slots\n> > > WHERE active_pid IS NOT NULL)\"\n> > > > )\n> > > >\n> > >\n> > > I prefer this approach but it would be better to specify the slot name in the\n> > > query.\n> > >\n> > > > Or we can just don't drop the slot as it’s the last testcase.\n> > >\n> > > Since we might add other tests after that in the future, I think it's better to drop\n> > > the replication slot (and subscription).\n> > >\n> > > >\n> > > > Another thing might be worth considering is we 
can add one parameter\n> > > > in\n> > > > pg_drop_replication_slot() to let user control whether to wait or not,\n> > > > and the test can be fixed as well by passing nowait=false to the func.\n> > >\n> > > While it could be useful in general we cannot use this approach for this issue\n> > > since it cannot be backpatched to older branches. We might want to discuss it\n> > > on a new thread.\n> > >\n> > > I've attached a draft patch to fix this issue. Feedback is very welcome.\n> >\n> > Thanks, it looks good to me.\n>\n> Thank you for reviewing the patch.\n>\n> I'll push the attached patch to master, v16, and v15 tomorrow, barring\n> any objections.\n\nPushed.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 9 Sep 2023 10:33:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Subscription statistics are not dropped at DROP SUBSCRIPTION in\n some cases" } ]
[ { "msg_contents": "Hi\n\nI try run make check-world. Now I have problems with tests of psql\n\nI had to cancel tests\n\nlog:\n\n[08:46:49.828](0.038s) ok 63 - no ON_ERROR_STOP, --single-transaction and\nmultiple -c switches\n[08:46:49.860](0.033s) ok 64 - client-side error commits transaction, no\nON_ERROR_STOP and multiple -c switches\n[08:46:49.928](0.067s) ok 65 - \copy from with DEFAULT: exit code 0\n[08:46:49.929](0.001s) ok 66 - \copy from with DEFAULT: no stderr\n[08:46:49.930](0.001s) ok 67 - \copy from with DEFAULT: matches\ndeath by signal at\n/home/pavel/src/postgresql.master/src/bin/psql/../../../src/test/perl/PostgreSQL/Test/Cluster.pm\nline 3042.\n# Postmaster PID for node \"main\" is 157863\n### Stopping node \"main\" using mode immediate\n# Running: pg_ctl -D\n/home/pavel/src/postgresql.master/src/bin/psql/tmp_check/t_001_basic_main_data/pgdata\n-m immediate stop\nwaiting for server to shut down.... done\nserver stopped\n# No postmaster PID for node \"main\"\n[08:47:30.361](40.431s) # Tests were run but no plan was declared and\ndone_testing() was not seen.\n[08:47:30.362](0.001s) # Looks like your test exited with 4 just after 67.\nWarning: unable to close filehandle $orig_stderr properly: Broken pipe\nduring global destruction.\n\nI use Fedora 38\n\nRegards\n\nPavel", "msg_date": "Tue, 9 May 2023 08:52:18 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "psql tests hangs" }, { "msg_contents": "> On 9 May 2023, at 08:52, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> Hi\n> \n> I try run make check-world. 
Now I have problems with tests of psql\n> \n> I had to cancel tests\n> \n> log:\n> \n> [08:46:49.828](0.038s) ok 63 - no ON_ERROR_STOP, --single-transaction and multiple -c switches\n> [08:46:49.860](0.033s) ok 64 - client-side error commits transaction, no ON_ERROR_STOP and multiple -c switches\n> [08:46:49.928](0.067s) ok 65 - \\copy from with DEFAULT: exit code 0\n> [08:46:49.929](0.001s) ok 66 - \\copy from with DEFAULT: no stderr\n> [08:46:49.930](0.001s) ok 67 - \\copy from with DEFAULT: matches\n> death by signal at /home/pavel/src/postgresql.master/src/bin/psql/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 3042.\n> # Postmaster PID for node \"main\" is 157863\n> ### Stopping node \"main\" using mode immediate\n> # Running: pg_ctl -D /home/pavel/src/postgresql.master/src/bin/psql/tmp_check/t_001_basic_main_data/pgdata -m immediate stop\n> waiting for server to shut down.... done\n> server stopped\n> # No postmaster PID for node \"main\"\n> [08:47:30.361](40.431s) # Tests were run but no plan was declared and done_testing() was not seen.\n> [08:47:30.362](0.001s) # Looks like your test exited with 4 just after 67.\n> Warning: unable to close filehandle $orig_stderr properly: Broken pipe during global destruction.\n\nI'm unable to reproduce, and this clearly works in the buildfarm and CI. Did\nyou run out of disk on the volume during the test or something similar?\nAnything interesting in the serverlogs from the tmp_check install?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 9 May 2023 10:48:08 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "út 9. 5. 2023 v 10:48 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:\n\n> > On 9 May 2023, at 08:52, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> >\n> > Hi\n> >\n> > I try run make check-world. 
Now I have problems with tests of psql\n> >\n> > I had to cancel tests\n> >\n> > log:\n> >\n> > [08:46:49.828](0.038s) ok 63 - no ON_ERROR_STOP, --single-transaction\n> and multiple -c switches\n> > [08:46:49.860](0.033s) ok 64 - client-side error commits transaction, no\n> ON_ERROR_STOP and multiple -c switches\n> > [08:46:49.928](0.067s) ok 65 - \\copy from with DEFAULT: exit code 0\n> > [08:46:49.929](0.001s) ok 66 - \\copy from with DEFAULT: no stderr\n> > [08:46:49.930](0.001s) ok 67 - \\copy from with DEFAULT: matches\n> > death by signal at\n> /home/pavel/src/postgresql.master/src/bin/psql/../../../src/test/perl/PostgreSQL/Test/Cluster.pm\n> line 3042.\n> > # Postmaster PID for node \"main\" is 157863\n> > ### Stopping node \"main\" using mode immediate\n> > # Running: pg_ctl -D\n> /home/pavel/src/postgresql.master/src/bin/psql/tmp_check/t_001_basic_main_data/pgdata\n> -m immediate stop\n> > waiting for server to shut down.... done\n> > server stopped\n> > # No postmaster PID for node \"main\"\n> > [08:47:30.361](40.431s) # Tests were run but no plan was declared and\n> done_testing() was not seen.\n> > [08:47:30.362](0.001s) # Looks like your test exited with 4 just after\n> 67.\n> > Warning: unable to close filehandle $orig_stderr properly: Broken pipe\n> during global destruction.\n>\n> I'm unable to reproduce, and this clearly works in the buildfarm and CI.\n> Did\n> you run out of disk on the volume during the test or something similar?\n> Anything interesting in the serverlogs from the tmp_check install?\n>\n\nI have enough free space on disc\n\nI don't see nothing interesting in log (it is another run)\n\n2023-05-09 08:50:04.839 CEST [158930] 001_basic.pl LOG: statement: COPY\n copy_default FROM STDIN with (format 'csv', default 'placeholder');\n2023-05-09 08:50:04.841 CEST [158930] 001_basic.pl LOG: statement: SELECT\n* FROM copy_default\n2023-05-09 08:50:04.879 CEST [158932] 001_basic.pl LOG: statement: SELECT\n1.\n2023-05-09 08:50:04.888 CEST 
[158932] 001_basic.pl LOG: statement: SELECT\n1.\n2023-05-09 08:50:04.898 CEST [158932] 001_basic.pl LOG: statement: SELECT\n1.\n2023-05-09 08:50:28.375 CEST [158862] LOG: received immediate shutdown\nrequest\n2023-05-09 08:50:28.385 CEST [158862] LOG: database system is shut down\n\nbacktrace from perl\n\nProgram received signal SIGINT, Interrupt.\n0x00007f387ecc1ade in select () from /lib64/libc.so.6\n(gdb) bt\n#0 0x00007f387ecc1ade in select () from /lib64/libc.so.6\n#1 0x00007f387e97363b in Perl_pp_sselect () from /lib64/libperl.so.5.36\n#2 0x00007f387e917958 in Perl_runops_standard () from\n/lib64/libperl.so.5.36\n#3 0x00007f387e88259d in perl_run () from /lib64/libperl.so.5.36\n#4 0x00005588bceb234a in main ()\n\nRegards\n\nPavel\n\n\n\n 1.\n\n\n\n\n\n\n>\n> --\n> Daniel Gustafsson\n>\n>", "msg_date": "Tue, 9 May 2023 11:07:02 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "út 9. 5. 
2023 v 11:07 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> út 9. 5. 2023 v 10:48 odesílatel Daniel Gustafsson <daniel@yesql.se>\n> napsal:\n>\n>> > On 9 May 2023, at 08:52, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> >\n>> > Hi\n>> >\n>> > I try run make check-world. Now I have problems with tests of psql\n>> >\n>> > I had to cancel tests\n>> >\n>> > log:\n>> >\n>> > [08:46:49.828](0.038s) ok 63 - no ON_ERROR_STOP, --single-transaction\n>> and multiple -c switches\n>> > [08:46:49.860](0.033s) ok 64 - client-side error commits transaction,\n>> no ON_ERROR_STOP and multiple -c switches\n>> > [08:46:49.928](0.067s) ok 65 - \\copy from with DEFAULT: exit code 0\n>> > [08:46:49.929](0.001s) ok 66 - \\copy from with DEFAULT: no stderr\n>> > [08:46:49.930](0.001s) ok 67 - \\copy from with DEFAULT: matches\n>> > death by signal at\n>> /home/pavel/src/postgresql.master/src/bin/psql/../../../src/test/perl/PostgreSQL/Test/Cluster.pm\n>> line 3042.\n>> > # Postmaster PID for node \"main\" is 157863\n>> > ### Stopping node \"main\" using mode immediate\n>> > # Running: pg_ctl -D\n>> /home/pavel/src/postgresql.master/src/bin/psql/tmp_check/t_001_basic_main_data/pgdata\n>> -m immediate stop\n>> > waiting for server to shut down.... 
done\n>> > server stopped\n>> > # No postmaster PID for node \"main\"\n>> > [08:47:30.361](40.431s) # Tests were run but no plan was declared and\n>> done_testing() was not seen.\n>> > [08:47:30.362](0.001s) # Looks like your test exited with 4 just after\n>> 67.\n>> > Warning: unable to close filehandle $orig_stderr properly: Broken pipe\n>> during global destruction.\n>>\n>> I'm unable to reproduce, and this clearly works in the buildfarm and CI.\n>> Did\n>> you run out of disk on the volume during the test or something similar?\n>> Anything interesting in the serverlogs from the tmp_check install?\n>>\n>\n> I have enough free space on disc\n>\n> I don't see nothing interesting in log (it is another run)\n>\n> 2023-05-09 08:50:04.839 CEST [158930] 001_basic.pl LOG: statement: COPY\n> copy_default FROM STDIN with (format 'csv', default 'placeholder');\n> 2023-05-09 08:50:04.841 CEST [158930] 001_basic.pl LOG: statement:\n> SELECT * FROM copy_default\n> 2023-05-09 08:50:04.879 CEST [158932] 001_basic.pl LOG: statement:\n> SELECT 1.\n> 2023-05-09 08:50:04.888 CEST [158932] 001_basic.pl LOG: statement:\n> SELECT 1.\n> 2023-05-09 08:50:04.898 CEST [158932] 001_basic.pl LOG: statement:\n> SELECT 1.\n> 2023-05-09 08:50:28.375 CEST [158862] LOG: received immediate shutdown\n> request\n> 2023-05-09 08:50:28.385 CEST [158862] LOG: database system is shut down\n>\n> backtrace from perl\n>\n> Program received signal SIGINT, Interrupt.\n> 0x00007f387ecc1ade in select () from /lib64/libc.so.6\n> (gdb) bt\n> #0 0x00007f387ecc1ade in select () from /lib64/libc.so.6\n> #1 0x00007f387e97363b in Perl_pp_sselect () from /lib64/libperl.so.5.36\n> #2 0x00007f387e917958 in Perl_runops_standard () from\n> /lib64/libperl.so.5.36\n> #3 0x00007f387e88259d in perl_run () from /lib64/libperl.so.5.36\n> #4 0x00005588bceb234a in main ()\n>\n> Regards\n>\n\nI repeated another build with the same result.\n\nTested REL_15_STABLE branch without any problems.\n\nRegards\n\nPavel\n\n\n\n\n>\n> 
Pavel\n>\n>\n>\n> 1.\n>\n>\n>\n>\n>\n>\n>>\n>> --\n>> Daniel Gustafsson\n>>\n>>", "msg_date": "Tue, 9 May 2023 13:53:07 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "Hi\n\n\nút 9. 5. 
2023 v 10:48 odesílatel Daniel Gustafsson <daniel@yesql.se>\n>> napsal:\n>>\n>>> > On 9 May 2023, at 08:52, Pavel Stehule <pavel.stehule@gmail.com>\n>>> wrote:\n>>> >\n>>> > Hi\n>>> >\n>>> > I try run make check-world. Now I have problems with tests of psql\n>>> >\n>>> > I had to cancel tests\n>>> >\n>>> > log:\n>>> >\n>>> > [08:46:49.828](0.038s) ok 63 - no ON_ERROR_STOP, --single-transaction\n>>> and multiple -c switches\n>>> > [08:46:49.860](0.033s) ok 64 - client-side error commits transaction,\n>>> no ON_ERROR_STOP and multiple -c switches\n>>> > [08:46:49.928](0.067s) ok 65 - \\copy from with DEFAULT: exit code 0\n>>> > [08:46:49.929](0.001s) ok 66 - \\copy from with DEFAULT: no stderr\n>>> > [08:46:49.930](0.001s) ok 67 - \\copy from with DEFAULT: matches\n>>> > death by signal at\n>>> /home/pavel/src/postgresql.master/src/bin/psql/../../../src/test/perl/PostgreSQL/Test/Cluster.pm\n>>> line 3042.\n>>> > # Postmaster PID for node \"main\" is 157863\n>>> > ### Stopping node \"main\" using mode immediate\n>>> > # Running: pg_ctl -D\n>>> /home/pavel/src/postgresql.master/src/bin/psql/tmp_check/t_001_basic_main_data/pgdata\n>>> -m immediate stop\n>>> > waiting for server to shut down.... done\n>>> > server stopped\n>>> > # No postmaster PID for node \"main\"\n>>> > [08:47:30.361](40.431s) # Tests were run but no plan was declared and\n>>> done_testing() was not seen.\n>>> > [08:47:30.362](0.001s) # Looks like your test exited with 4 just after\n>>> 67.\n>>> > Warning: unable to close filehandle $orig_stderr properly: Broken pipe\n>>> during global destruction.\n>>>\n>>> I'm unable to reproduce, and this clearly works in the buildfarm and\n>>> CI. 
Did\n>>> you run out of disk on the volume during the test or something similar?\n>>> Anything interesting in the serverlogs from the tmp_check install?\n>>>\n>>\n>> I have enough free space on disc\n>>\n>> I don't see nothing interesting in log (it is another run)\n>>\n>> 2023-05-09 08:50:04.839 CEST [158930] 001_basic.pl LOG: statement: COPY\n>> copy_default FROM STDIN with (format 'csv', default 'placeholder');\n>> 2023-05-09 08:50:04.841 CEST [158930] 001_basic.pl LOG: statement:\n>> SELECT * FROM copy_default\n>> 2023-05-09 08:50:04.879 CEST [158932] 001_basic.pl LOG: statement:\n>> SELECT 1.\n>> 2023-05-09 08:50:04.888 CEST [158932] 001_basic.pl LOG: statement:\n>> SELECT 1.\n>> 2023-05-09 08:50:04.898 CEST [158932] 001_basic.pl LOG: statement:\n>> SELECT 1.\n>> 2023-05-09 08:50:28.375 CEST [158862] LOG: received immediate shutdown\n>> request\n>> 2023-05-09 08:50:28.385 CEST [158862] LOG: database system is shut down\n>>\n>> backtrace from perl\n>>\n>> Program received signal SIGINT, Interrupt.\n>> 0x00007f387ecc1ade in select () from /lib64/libc.so.6\n>> (gdb) bt\n>> #0 0x00007f387ecc1ade in select () from /lib64/libc.so.6\n>> #1 0x00007f387e97363b in Perl_pp_sselect () from /lib64/libperl.so.5.36\n>> #2 0x00007f387e917958 in Perl_runops_standard () from\n>> /lib64/libperl.so.5.36\n>> #3 0x00007f387e88259d in perl_run () from /lib64/libperl.so.5.36\n>> #4 0x00005588bceb234a in main ()\n>>\n>> Regards\n>>\n>\n> I repeated another build with the same result.\n>\n> Tested REL_15_STABLE branch without any problems.\n>\n\nThere is some dependence on locales\n\nfor commit 96c498d2f8ce5f0082c64793f94e2d0cfa7d7605\n\nwith my cs_CZ.utf8 locale\n\necho \"# +++ tap check in src/bin/psql +++\" && rm -rf\n'/home/pavel/src/postgresql.master/src/bin/psql'/tmp_check &&\n/usr/bin/mkdir -p\n'/home/pavel/src/postgresql.master/src/bin/psql'/tmp_check && cd . 
&&\nTESTLOGDIR='/home/pavel/src/postgresql.master/src/bin/psql/tmp_check/log'\nTESTDATADIR='/home/pavel/src/postgresql.master/src/bin/psql/tmp_check'\nPATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/bin:/home/pavel/src/postgresql.master/src/bin/psql:$PATH\"\nLD_LIBRARY_PATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/lib\"\n PGPORT='65432'\ntop_builddir='/home/pavel/src/postgresql.master/src/bin/psql/../../..'\nPG_REGRESS='/home/pavel/src/postgresql.master/src/bin/psql/../../../src/test/regress/pg_regress'\n/usr/bin/prove -I ../../../src/test/perl/ -I . t/*.pl\n# +++ tap check in src/bin/psql +++\nt/001_basic.pl ........... 15/?\n# Failed test '\\watch with 3 iterations: exit code 0'\n# at t/001_basic.pl line 354.\n# got: '3'\n# expected: '0'\n\n# Failed test '\\watch with 3 iterations: no stderr'\n# at t/001_basic.pl line 354.\n# got: 'psql:<stdin>:1: error: \\watch: incorrect interval value\n\"0.01\"'\n# expected: ''\n\n# Failed test '\\watch with 3 iterations: matches'\n# at t/001_basic.pl line 354.\n# ''\n# doesn't match '(?^:1\\n1\\n1)'\n# Looks like you failed 3 tests of 80.\nt/001_basic.pl ........... Dubious, test returned 3 (wstat 768, 0x300)\nFailed 3/80 subtests\nt/010_tab_completion.pl .. ok\nt/020_cancel.pl .......... 
ok\n\nTest Summary Report\n-------------------\nt/001_basic.pl (Wstat: 768 (exited 3) Tests: 80 Failed: 3)\n Failed tests: 68-70\n Non-zero exit status: 3\nFiles=3, Tests=170, 4 wallclock secs ( 0.09 usr 0.01 sys + 2.43 cusr\n 1.24 csys = 3.77 CPU)\nResult: FAIL\nmake: *** [Makefile:87: check] Chyba 1\n\nwith C lokale it hangs\n\nIt is broken from\n\ncommit 00beecfe839c878abb366b68272426ed5296bc2b (HEAD)\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Thu Apr 6 13:18:14 2023 -0400\n\n psql: add an optional execution-count limit to \\watch.\n\n \\watch can now be told to stop after N executions of the query.\n\n With the idea that we might want to add more options to \\watch\n in future, this patch generalizes the command's syntax to a list\n of name=value options, with the interval allowed to omit the name\n for backwards compatibility.\n\n Andrey Borodin, reviewed by Kyotaro Horiguchi, Nathan Bossart,\n Michael Paquier, Yugo Nagata, and myself\n\n Discussion:\nhttps://postgr.es/m/CAAhFRxiZ2-n_L1ErMm9AZjgmUK=qS6VHb+0SaMn8sqqbhF7How@mail.gmail.com\n\n Discussion:\nhttp://postgr.es/m/CAPmGK15FuPVGx3TGHKShsbPKKtF1y58-ZLcKoxfN-nqLj1dZ%3Dg%40mail.gmail.com\n[pavel@localhost postgresql.master]$ uname -a\nLinux localhost.localdomain 6.2.14-300.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC\nMon May 1 00:55:28 UTC 2023 x86_64 GNU/Linux\n[pavel@localhost postgresql.master]$ gcc --version\ngcc (GCC) 13.1.1 20230426 (Red Hat 13.1.1-1)\nCopyright (C) 2023 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\nProbably the locale problem was fixed - because test on master hangs always\nwithout dependency on locale\n\nRegards\n\nPavel\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>>\n>> Pavel\n>>\n>>\n>>\n>> 1.\n>>\n>>\n>>\n>>\n>>\n>>\n>>>\n>>> --\n>>> Daniel Gustafsson\n>>>\n>>>\n\nHiút 9. 5. 2023 v 13:53 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:út 9. 
5. 2023 v 11:07 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:út 9. 5. 2023 v 10:48 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:> On 9 May 2023, at 08:52, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> Hi\n> \n> I try run make check-world. Now I have problems with tests of psql\n> \n> I had to cancel tests\n> \n> log:\n> \n> [08:46:49.828](0.038s) ok 63 - no ON_ERROR_STOP, --single-transaction and multiple -c switches\n> [08:46:49.860](0.033s) ok 64 - client-side error commits transaction, no ON_ERROR_STOP and multiple -c switches\n> [08:46:49.928](0.067s) ok 65 - \\copy from with DEFAULT: exit code 0\n> [08:46:49.929](0.001s) ok 66 - \\copy from with DEFAULT: no stderr\n> [08:46:49.930](0.001s) ok 67 - \\copy from with DEFAULT: matches\n> death by signal at /home/pavel/src/postgresql.master/src/bin/psql/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 3042.\n> # Postmaster PID for node \"main\" is 157863\n> ### Stopping node \"main\" using mode immediate\n> # Running: pg_ctl -D /home/pavel/src/postgresql.master/src/bin/psql/tmp_check/t_001_basic_main_data/pgdata -m immediate stop\n> waiting for server to shut down.... done\n> server stopped\n> # No postmaster PID for node \"main\"\n> [08:47:30.361](40.431s) # Tests were run but no plan was declared and done_testing() was not seen.\n> [08:47:30.362](0.001s) # Looks like your test exited with 4 just after 67.\n> Warning: unable to close filehandle $orig_stderr properly: Broken pipe during global destruction.\n\nI'm unable to reproduce, and this clearly works in the buildfarm and CI.  
Did\nyou run out of disk on the volume during the test or something similar?\nAnything interesting in the serverlogs from the tmp_check install?I have enough free space on discI don't see nothing interesting in log (it is another run)2023-05-09 08:50:04.839 CEST [158930] 001_basic.pl LOG:  statement: COPY  copy_default FROM STDIN with (format 'csv', default 'placeholder');2023-05-09 08:50:04.841 CEST [158930] 001_basic.pl LOG:  statement: SELECT * FROM copy_default2023-05-09 08:50:04.879 CEST [158932] 001_basic.pl LOG:  statement: SELECT 1.2023-05-09 08:50:04.888 CEST [158932] 001_basic.pl LOG:  statement: SELECT 1.2023-05-09 08:50:04.898 CEST [158932] 001_basic.pl LOG:  statement: SELECT 1.2023-05-09 08:50:28.375 CEST [158862] LOG:  received immediate shutdown request2023-05-09 08:50:28.385 CEST [158862] LOG:  database system is shut downbacktrace from perlProgram received signal SIGINT, Interrupt.0x00007f387ecc1ade in select () from /lib64/libc.so.6(gdb) bt#0  0x00007f387ecc1ade in select () from /lib64/libc.so.6#1  0x00007f387e97363b in Perl_pp_sselect () from /lib64/libperl.so.5.36#2  0x00007f387e917958 in Perl_runops_standard () from /lib64/libperl.so.5.36#3  0x00007f387e88259d in perl_run () from /lib64/libperl.so.5.36#4  0x00005588bceb234a in main ()RegardsI repeated another build with the same result.Tested REL_15_STABLE branch without any problems.There is some dependence on localesfor commit  96c498d2f8ce5f0082c64793f94e2d0cfa7d7605with my cs_CZ.utf8 localeecho \"# +++ tap check in src/bin/psql +++\" && rm -rf '/home/pavel/src/postgresql.master/src/bin/psql'/tmp_check && /usr/bin/mkdir -p '/home/pavel/src/postgresql.master/src/bin/psql'/tmp_check && cd . 
&& TESTLOGDIR='/home/pavel/src/postgresql.master/src/bin/psql/tmp_check/log' TESTDATADIR='/home/pavel/src/postgresql.master/src/bin/psql/tmp_check' PATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/bin:/home/pavel/src/postgresql.master/src/bin/psql:$PATH\" LD_LIBRARY_PATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/lib\"  PGPORT='65432' top_builddir='/home/pavel/src/postgresql.master/src/bin/psql/../../..' PG_REGRESS='/home/pavel/src/postgresql.master/src/bin/psql/../../../src/test/regress/pg_regress' /usr/bin/prove -I ../../../src/test/perl/ -I .  t/*.pl# +++ tap check in src/bin/psql +++t/001_basic.pl ........... 15/? #   Failed test '\\watch with 3 iterations: exit code 0'#   at t/001_basic.pl line 354.#          got: '3'#     expected: '0'#   Failed test '\\watch with 3 iterations: no stderr'#   at t/001_basic.pl line 354.#          got: 'psql:<stdin>:1: error: \\watch: incorrect interval value \"0.01\"'#     expected: ''#   Failed test '\\watch with 3 iterations: matches'#   at t/001_basic.pl line 354.#                   ''#     doesn't match '(?^:1\\n1\\n1)'# Looks like you failed 3 tests of 80.t/001_basic.pl ........... Dubious, test returned 3 (wstat 768, 0x300)Failed 3/80 subtests t/010_tab_completion.pl .. ok    t/020_cancel.pl .......... ok   Test Summary Report-------------------t/001_basic.pl         (Wstat: 768 (exited 3) Tests: 80 Failed: 3)  Failed tests:  68-70  Non-zero exit status: 3Files=3, Tests=170,  4 wallclock secs ( 0.09 usr  0.01 sys +  2.43 cusr  1.24 csys =  3.77 CPU)Result: FAILmake: *** [Makefile:87: check] Chyba 1with C lokale it hangsIt is broken fromcommit 00beecfe839c878abb366b68272426ed5296bc2b (HEAD)Author: Tom Lane <tgl@sss.pgh.pa.us>Date:   Thu Apr 6 13:18:14 2023 -0400    psql: add an optional execution-count limit to \\watch.        \\watch can now be told to stop after N executions of the query.        
With the idea that we might want to add more options to \\watch    in future, this patch generalizes the command's syntax to a list    of name=value options, with the interval allowed to omit the name    for backwards compatibility.        Andrey Borodin, reviewed by Kyotaro Horiguchi, Nathan Bossart,    Michael Paquier, Yugo Nagata, and myself        Discussion: https://postgr.es/m/CAAhFRxiZ2-n_L1ErMm9AZjgmUK=qS6VHb+0SaMn8sqqbhF7How@mail.gmail.com    Discussion: http://postgr.es/m/CAPmGK15FuPVGx3TGHKShsbPKKtF1y58-ZLcKoxfN-nqLj1dZ%3Dg%40mail.gmail.com[pavel@localhost postgresql.master]$ uname -aLinux localhost.localdomain 6.2.14-300.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon May  1 00:55:28 UTC 2023 x86_64 GNU/Linux[pavel@localhost postgresql.master]$ gcc --versiongcc (GCC) 13.1.1 20230426 (Red Hat 13.1.1-1)Copyright (C) 2023 Free Software Foundation, Inc.This is free software; see the source for copying conditions.  There is NOwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.Probably the locale problem was fixed - because test on master hangs always without dependency on localeRegardsPavel", "msg_date": "Tue, 9 May 2023 20:31:23 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "Hi\n\nWhen I remove this test, then all tests passed\n\ndiff --git a/src/bin/psql/t/001_basic.pl b/src/bin/psql/t/001_basic.pl\nindex 596746de17..631a1a7335 100644\n--- a/src/bin/psql/t/001_basic.pl\n+++ b/src/bin/psql/t/001_basic.pl\n@@ -353,11 +353,6 @@ psql_like(\n\n # Check \\watch\n # Note: the interval value is parsed with locale-aware strtod()\n-psql_like(\n-   $node,\n-   sprintf('SELECT 1 \\watch c=3 i=%g', 0.01),\n-   qr/1\\n1\\n1/,\n-   '\\watch with 3 iterations');\n\n # Check \\watch errors\n psql_fails_like(\n\nCan somebody repeat this testing of FC38?\n\nRegards\n\nPavel\n\n
", "msg_date": "Wed, 10 May 2023 06:58:22 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "\n\n> On 10 May 2023, at 09:58, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> When I remove this test, then all tests passed\n\nHi Pavel!\n\nCan you plz share how sprintf('SELECT 1 \\watch c=3 i=%g', 0.01) is formatting 0.01 on your system?\nAnd try to run that string against psql.\n\nAs an alternative I propose to use \"i=0”. 
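Andrey's question above concerns the formatting side: under `use locale`, Perl's sprintf %g follows LC_NUMERIC, so the test script can emit `i=0,01` on a Czech system. A rough Python analogue, purely for illustration (the real test uses Perl's sprintf):

```python
import locale

def format_watch_command(interval):
    # locale.format_string("%g", ...) follows LC_NUMERIC, roughly like
    # Perl's sprintf('%g', ...) under "use locale".
    return "SELECT 1 \\watch c=3 i=" + locale.format_string("%g", interval)

locale.setlocale(locale.LC_NUMERIC, "C")
assert format_watch_command(0.01) == "SELECT 1 \\watch c=3 i=0.01"

# On a system with the Czech locale installed, the same call yields
# "... i=0,01", the comma output Pavel saw from perl test.pl.
try:
    locale.setlocale(locale.LC_NUMERIC, "cs_CZ.UTF-8")
    print(format_watch_command(0.01))
except locale.Error:
    pass  # locale not installed; skip the demonstration
```

As long as the same locale governs both the formatting (Perl) and the parsing (psql's strtod), the round trip is consistent; the test breaks only when the two processes disagree.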
I hope 0 is more\n> locale-independent (but I’m not sure)… But the test's coverage will\n> decrease.\n>\n> Best regards, Andrey Borodin.\n>\n>\n[pavel@localhost psql]$ cat test.pl\nuse locale;\nmy $result = sprintf('SELECT 1 \\watch c=3 i=%g', 0.01);\nprint \">>$result<<\\n\";\n\n[pavel@localhost psql]$ perl test.pl\n>>SELECT 1 \\watch c=3 i=0,01<<\n[pavel@localhost psql]$ LANG=C perl test.pl\n>>SELECT 1 \\watch c=3 i=0.01<<\n\nRegards\n\nPavel\n\n", "msg_date": "Thu, 11 May 2023 14:59:43 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> When I remove this test, then all tests passed\n\nThis works fine for me on Fedora 37:\n\n$ cd src/bin/psql\n$ LANG=cs_CZ.utf8 make installcheck\nmake -C ../../../src/backend generated-headers\nmake[1]: Vstupuje se do adresáře „/home/tgl/pgsql/src/backend“\n...\n# +++ tap install-check in src/bin/psql +++\nt/001_basic.pl ........... ok    \nt/010_tab_completion.pl .. ok    \nt/020_cancel.pl .......... 
ok \nAll tests successful.\nFiles=3, Tests=169, 6 wallclock secs ( 0.06 usr 0.02 sys + 2.64 cusr 0.99 csys = 3.71 CPU)\nResult: PASS\n\nI wonder if you have something inconsistent in your locale\nconfiguration. What do you see from\n\n$ env | grep '^L[CA]'\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 May 2023 14:44:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "čt 11. 5. 2023 v 20:44 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > When I remove this test, then all tests passed\n>\n> This works fine for me on Fedora 37:\n>\n\nI have Fedora 38\n\n>\n> $ cd src/bin/psql\n> $ LANG=cs_CZ.utf8 make installcheck\n> make -C ../../../src/backend generated-headers\n> make[1]: Vstupuje se do adresáře „/home/tgl/pgsql/src/backend“\n> ...\n> # +++ tap install-check in src/bin/psql +++\n> t/001_basic.pl ........... ok\n> t/010_tab_completion.pl .. ok\n> t/020_cancel.pl .......... ok\n> All tests successful.\n> Files=3, Tests=169, 6 wallclock secs ( 0.06 usr 0.02 sys + 2.64 cusr\n> 0.99 csys = 3.71 CPU)\n> Result: PASS\n>\n> I wonder if you have something inconsistent in your locale\n> configuration. What do you see from\n>\n> $ env | grep '^L[CA]'\n>\n\n [pavel@localhost psql]$ env | grep '^L[CA]'\nLANG=cs_CZ.UTF-8\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>\n\nčt 11. 5. 2023 v 20:44 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> When I remove this test, then all tests passed\n\nThis works fine for me on Fedora 37:I have Fedora 38 \n\n$ cd src/bin/psql\n$ LANG=cs_CZ.utf8 make installcheck\nmake -C ../../../src/backend generated-headers\nmake[1]: Vstupuje se do adresáře „/home/tgl/pgsql/src/backend“\n...\n# +++ tap install-check in src/bin/psql +++\nt/001_basic.pl ........... ok    \nt/010_tab_completion.pl .. ok    \nt/020_cancel.pl .......... 
ok   \nAll tests successful.\nFiles=3, Tests=169,  6 wallclock secs ( 0.06 usr  0.02 sys +  2.64 cusr  0.99 csys =  3.71 CPU)\nResult: PASS\n\nI wonder if you have something inconsistent in your locale\nconfiguration.  What do you see from\n\n$ env | grep '^L[CA]' [pavel@localhost psql]$ env | grep '^L[CA]'LANG=cs_CZ.UTF-8RegardsPavel\n\n                        regards, tom lane", "msg_date": "Thu, 11 May 2023 21:06:01 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "On Thu, May 11, 2023 at 3:06 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> čt 11. 5. 2023 v 20:44 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> > When I remove this test, then all tests passed\n>>\n>> This works fine for me on Fedora 37:\n>>\n>\n> I have Fedora 38\n>\n>>\n>> $ cd src/bin/psql\n>> $ LANG=cs_CZ.utf8 make installcheck\n>> make -C ../../../src/backend generated-headers\n>> make[1]: Vstupuje se do adresáře „/home/tgl/pgsql/src/backend“\n>> ...\n>> # +++ tap install-check in src/bin/psql +++\n>> t/001_basic.pl ........... ok\n>> t/010_tab_completion.pl .. ok\n>> t/020_cancel.pl .......... ok\n>> All tests successful.\n>> Files=3, Tests=169, 6 wallclock secs ( 0.06 usr 0.02 sys + 2.64 cusr\n>> 0.99 csys = 3.71 CPU)\n>> Result: PASS\n>>\n>> I wonder if you have something inconsistent in your locale\n>> configuration. What do you see from\n>>\n>> $ env | grep '^L[CA]'\n>>\n>\n> [pavel@localhost psql]$ env | grep '^L[CA]'\n> LANG=cs_CZ.UTF-8\n>\n> Regards\n>\n> Pavel\n>\n\nStranger things, but is LANG case sensitive, or formatted differently?\n\ntom> $ LANG=cs_CZ.utf8 make installcheck\n you> LANG=cs_CZ.*UTF-8*\n\n\n\n>\n>\n>> regards, tom lane\n>>\n>\n\nOn Thu, May 11, 2023 at 3:06 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:čt 11. 5. 
2023 v 20:44 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> When I remove this test, then all tests passed\n\nThis works fine for me on Fedora 37:I have Fedora 38 \n\n$ cd src/bin/psql\n$ LANG=cs_CZ.utf8 make installcheck\nmake -C ../../../src/backend generated-headers\nmake[1]: Vstupuje se do adresáře „/home/tgl/pgsql/src/backend“\n...\n# +++ tap install-check in src/bin/psql +++\nt/001_basic.pl ........... ok    \nt/010_tab_completion.pl .. ok    \nt/020_cancel.pl .......... ok   \nAll tests successful.\nFiles=3, Tests=169,  6 wallclock secs ( 0.06 usr  0.02 sys +  2.64 cusr  0.99 csys =  3.71 CPU)\nResult: PASS\n\nI wonder if you have something inconsistent in your locale\nconfiguration.  What do you see from\n\n$ env | grep '^L[CA]' [pavel@localhost psql]$ env | grep '^L[CA]'LANG=cs_CZ.UTF-8RegardsPavelStranger things, but is LANG case sensitive, or formatted differently?tom> $ LANG=cs_CZ.utf8 make installcheck you> LANG=cs_CZ.UTF-8\n\n                        regards, tom lane", "msg_date": "Thu, 11 May 2023 15:15:12 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "Hi\n\n>\n>\n> Stranger things, but is LANG case sensitive, or formatted differently?\n>\n> tom> $ LANG=cs_CZ.utf8 make installcheck\n> you> LANG=cs_CZ.*UTF-8*\n>\n>\nI don't think so encoding is case sensitive - I am not sure, but minimally\nncurses applications works without any problems, and ncurses is very locale\nsensitive\n\n$ LANG=cs_CZ.utf8 make check\n\ndoesn't help\n\nRegards\n\nPavel\n\n\n>\n>\n>>\n>>\n>>> regards, tom lane\n>>>\n>>\n\nHiStranger things, but is LANG case sensitive, or formatted differently?tom> $ LANG=cs_CZ.utf8 make installcheck you> LANG=cs_CZ.UTF-8I don't think so encoding is case sensitive - I am not sure, but minimally ncurses applications works without any problems, and ncurses is very locale sensitive$ LANG=cs_CZ.utf8 make check  
doesn't helpRegardsPavel \n\n                        regards, tom lane", "msg_date": "Thu, 11 May 2023 21:24:36 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> Stranger things, but is LANG case sensitive, or formatted differently?\n\n> I don't think so encoding is case sensitive - I am not sure, but minimally\n> ncurses applications works without any problems, and ncurses is very locale\n> sensitive\n> $ LANG=cs_CZ.utf8 make check\n> doesn't help\n\nRight, glibc is pretty forgiving about the spelling of the encoding\npart of a locale identifier. I did try Pavel's spelling cs_CZ.UTF-8\non my box, and that also works fine here.\n\nIt's hard to believe that any meaningful changes were made in this\narea between F37 and F38, though. I'm now wondering about relevant\npackages being installed on one box and not the other...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 May 2023 15:30:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "On Wed, May 10, 2023 at 12:59 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> When I remove this test, then all tests passed\n>\n> diff --git a/src/bin/psql/t/001_basic.pl b/src/bin/psql/t/001_basic.pl\n> index 596746de17..631a1a7335 100644\n> --- a/src/bin/psql/t/001_basic.pl\n> +++ b/src/bin/psql/t/001_basic.pl\n> @@ -353,11 +353,6 @@ psql_like(\n>\n> # Check \\watch\n> # Note: the interval value is parsed with locale-aware strtod()\n> -psql_like(\n> - $node,\n> - sprintf('SELECT 1 \\watch c=3 i=%g', 0.01),\n> - qr/1\\n1\\n1/,\n> - '\\watch with 3 iterations');\n>\n> # Check \\watch errors\n> psql_fails_like(\n>\n> Can somebody repeat this testing of FC38?\n>\n> Regards\n>\n> Pavel\n>\n> Can you change the 0.01 to just 1 or 0?\n\nI assume it will work then! 
(and better than a full removal)?\n\nOn Wed, May 10, 2023 at 12:59 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:HiWhen I remove this test, then all tests passeddiff --git a/src/bin/psql/t/001_basic.pl b/src/bin/psql/t/001_basic.plindex 596746de17..631a1a7335 100644--- a/src/bin/psql/t/001_basic.pl+++ b/src/bin/psql/t/001_basic.pl@@ -353,11 +353,6 @@ psql_like(  # Check \\watch # Note: the interval value is parsed with locale-aware strtod()-psql_like(-   $node,-   sprintf('SELECT 1 \\watch c=3 i=%g', 0.01),-   qr/1\\n1\\n1/,-   '\\watch with 3 iterations');  # Check \\watch errors psql_fails_like(Can somebody repeat this testing of FC38?RegardsPavelCan you change the 0.01 to just 1 or 0? I assume it will work then! (and better than a full removal)?", "msg_date": "Thu, 11 May 2023 16:37:52 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "Kirk Wolak <wolakk@gmail.com> writes:\n> Can you change the 0.01 to just 1 or 0?\n> I assume it will work then! (and better than a full removal)?\n\nIMO the point of that test is largely to exercise this locale-dependent\nbehavior, so I'm very unwilling to dumb it down to that extent.\n\nWhat seems to be happening is that the spawned psql process is making\na different choice about what the LC_NUMERIC locale is than its parent\nperl process did. That seems like it might be a bug in itself, since\nPOSIX is pretty clear about how you're supposed to derive the locale\nfrom the relevant environment variables. But maybe it's Perl's bug?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 May 2023 20:08:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "On Thu, May 11, 2023 at 8:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Kirk Wolak <wolakk@gmail.com> writes:\n> > Can you change the 0.01 to just 1 or 0?\n> > I assume it will work then! 
(and better than a full removal)?\n>\n> IMO the point of that test is largely to exercise this locale-dependent\n> behavior, so I'm very unwilling to dumb it down to that extent.\n>\n> Sorry, I meant simply as opposed to deleting the test to get it to pass.\n\n\n> What seems to be happening is that the spawned psql process is making\n> a different choice about what the LC_NUMERIC locale is than its parent\n> perl process did. That seems like it might be a bug in itself, since\n> POSIX is pretty clear about how you're supposed to derive the locale\n> from the relevant environment variables. But maybe it's Perl's bug?\n>\n> regards, tom lane\n>\n\nDid you try the print statement that Andrey asked Pavel to try?\nBecause it gave 2 different results for Pavel. And Pavel's system has the\nproblem, but yours does not.\n\ncat test.pl\nuse locale;\nmy $result = sprintf('SELECT 1 \\watch c=3 i=%g', 0.01);\nprint \">>$result<<\\n\";\n\nand when Pavel ran it, he got:\n\n[pavel@localhost psql]$ perl test.pl\n>>SELECT 1 \\watch c=3 i=0,01<<\n[pavel@localhost psql]$ LANG=C perl test.pl\n>>SELECT 1 \\watch c=3 i=0.01<<\n\nNow I am curious what you get?\n\nBecause yours works. This should identify the difference.\n\nKirk...\n\nOn Thu, May 11, 2023 at 8:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Kirk Wolak <wolakk@gmail.com> writes:\n> Can you change the 0.01 to just 1 or 0?\n> I assume it will work then! (and better than a full removal)?\n\nIMO the point of that test is largely to exercise this locale-dependent\nbehavior, so I'm very unwilling to dumb it down to that extent.\nSorry, I meant simply as opposed to deleting the test to get it to pass. \nWhat seems to be happening is that the spawned psql process is making\na different choice about what the LC_NUMERIC locale is than its parent\nperl process did.  That seems like it might be a bug in itself, since\nPOSIX is pretty clear about how you're supposed to derive the locale\nfrom the relevant environment variables.  
But maybe it's Perl's bug?\n\n                        regards, tom laneDid you try the print statement that Andrey asked Pavel to try?Because it gave 2 different results for Pavel.  And Pavel's system has the problem, but yours does not.cat test.pluse locale;my $result = sprintf('SELECT 1 \\watch c=3 i=%g', 0.01);print \">>$result<<\\n\";and when Pavel ran it, he got:[pavel@localhost psql]$ perl test.pl>>SELECT 1 \\watch c=3 i=0,01<<[pavel@localhost psql]$ LANG=C perl test.pl>>SELECT 1 \\watch c=3 i=0.01<<Now I am curious what you get?Because yours works.  This should identify the difference.Kirk...", "msg_date": "Fri, 12 May 2023 00:02:34 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "Kirk Wolak <wolakk@gmail.com> writes:\n> Did you try the print statement that Andrey asked Pavel to try?\n\nYeah, and I get exactly the results I expect:\n\n$ cat test.pl\nuse locale;\nmy $result = sprintf('SELECT 1 \\watch c=3 i=%g', 0.01);\nprint \">>$result<<\\n\";\n$ LANG=cs_CZ.utf8 perl test.pl\n>>SELECT 1 \\watch c=3 i=0,01<<\n$ LANG=C perl test.pl\n>>SELECT 1 \\watch c=3 i=0.01<<\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 May 2023 00:14:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "On Fri, May 12, 2023 at 12:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Kirk Wolak <wolakk@gmail.com> writes:\n> > Did you try the print statement that Andrey asked Pavel to try?\n>\n> Yeah, and I get exactly the results I expect:\n>\n> Your results MATCHED Pavels (Hmm). Piping ONE of those into psql should\nfail, and the other one should work, right?\n\nI know Pavel is Czech... So I have to Wonder...\nAre both of you using the same Collation inside of PG? 
(Or did I miss that\nthe testing forces that setting?)\n\nKirk...\n\nOn Fri, May 12, 2023 at 12:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Kirk Wolak <wolakk@gmail.com> writes:\n> Did you try the print statement that Andrey asked Pavel to try?\n\nYeah, and I get exactly the results I expect:Your results MATCHED Pavels  (Hmm).  Piping ONE of those into psql should fail, and the other one should work, right?I know Pavel is Czech... So I have to Wonder... Are both of you using the same Collation inside of PG? (Or did I miss that the testing forces that setting?)Kirk...", "msg_date": "Fri, 12 May 2023 00:50:20 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "pá 12. 5. 2023 v 6:50 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n\n> On Fri, May 12, 2023 at 12:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Kirk Wolak <wolakk@gmail.com> writes:\n>> > Did you try the print statement that Andrey asked Pavel to try?\n>>\n>> Yeah, and I get exactly the results I expect:\n>>\n>> Your results MATCHED Pavels (Hmm). Piping ONE of those into psql should\n> fail, and the other one should work, right?\n>\n> I know Pavel is Czech... So I have to Wonder...\n> Are both of you using the same Collation inside of PG? (Or did I miss that\n> the testing forces that setting?)\n>\n\nThe strange thing is hanging. Broken tests depending on locale are usual.\nBut I didn't remember hanging.\n\nRegards\n\nPavel\n\n\n\n> Kirk...\n>\n\npá 12. 5. 2023 v 6:50 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:On Fri, May 12, 2023 at 12:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Kirk Wolak <wolakk@gmail.com> writes:\n> Did you try the print statement that Andrey asked Pavel to try?\n\nYeah, and I get exactly the results I expect:Your results MATCHED Pavels  (Hmm).  Piping ONE of those into psql should fail, and the other one should work, right?I know Pavel is Czech... So I have to Wonder... 
Are both of you using the same Collation inside of PG? (Or did I miss that the testing forces that setting?)The strange thing is hanging. Broken tests depending on locale are usual. But I didn't remember hanging. RegardsPavelKirk...", "msg_date": "Fri, 12 May 2023 07:45:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "On Fri, May 12, 2023 at 1:46 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> pá 12. 5. 2023 v 6:50 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n>\n>> On Fri, May 12, 2023 at 12:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>>> Kirk Wolak <wolakk@gmail.com> writes:\n>>> > Did you try the print statement that Andrey asked Pavel to try?\n>>> ...\n>>\n>>\n> The strange thing is hanging. Broken tests depending on locale are usual.\n> But I didn't remember hanging.\n>\n> Regards\n>\n> Pavel\n>\n\nSo, if you do psql -c \"...\"\nwith both of those \\watch instructions, do either one hang? (I am now\nguessing \"no\")\n\nI know that perl is using a special library to \"remote control psql\" (like\na pseudo terminal, I guess).\n[I had to abort some of the perl testing in Windows because that perl\nlibrary didn't work with my psql in Windows]\n\nNext, can you detect which process is hanging? (is it perl, the library,\npsql, ?).\n\nI would be curious now about the details of your perl install, and your\nperl libraries...\n\nOn Fri, May 12, 2023 at 1:46 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:pá 12. 5. 2023 v 6:50 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:On Fri, May 12, 2023 at 12:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Kirk Wolak <wolakk@gmail.com> writes:\n> Did you try the print statement that Andrey asked Pavel to try?...The strange thing is hanging. Broken tests depending on locale are usual. But I didn't remember hanging. RegardsPavelSo, if you do psql -c \"...\"with both of those \\watch  instructions, do either one hang? 
(I am now guessing \"no\")I know that perl is using a special library to \"remote control psql\" (like a pseudo terminal, I guess).[I had to abort some of the perl testing in Windows because that perl library didn't work with my psql in Windows]Next, can you detect which process is hanging? (is it perl, the library, psql, ?).I would be curious now about the details of your perl install, and your perl libraries...", "msg_date": "Fri, 12 May 2023 02:19:54 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "pá 12. 5. 2023 v 8:20 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n\n> On Fri, May 12, 2023 at 1:46 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> pá 12. 5. 2023 v 6:50 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n>>\n>>> On Fri, May 12, 2023 at 12:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>\n>>>> Kirk Wolak <wolakk@gmail.com> writes:\n>>>> > Did you try the print statement that Andrey asked Pavel to try?\n>>>> ...\n>>>\n>>>\n>> The strange thing is hanging. Broken tests depending on locale are usual.\n>> But I didn't remember hanging.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>\n> So, if you do psql -c \"...\"\n> with both of those \\watch instructions, do either one hang? (I am now\n> guessing \"no\")\n>\n> I know that perl is using a special library to \"remote control psql\" (like\n> a pseudo terminal, I guess).\n> [I had to abort some of the perl testing in Windows because that perl\n> library didn't work with my psql in Windows]\n>\n> Next, can you detect which process is hanging? 
(is it perl, the library,\n> psql, ?).\n>\n\nIt hangs in perl\n\nbut now I found there is dependency on PSQL_PAGER setting\n\nit started pager in background, I had lot of zombie pspg processes\n\nUnfortunately, when I unset this variable, the test hangs still\n\nhere is backtrace\n\nMissing separate debuginfos, use: dnf debuginfo-install\nperl-interpreter-5.36.1-496.fc38.x86_64\n(gdb) bt\n#0 0x00007fbbc1129ade in select () from /lib64/libc.so.6\n#1 0x00007fbbc137363b in Perl_pp_sselect () from /lib64/libperl.so.5.36\n#2 0x00007fbbc1317958 in Perl_runops_standard () from\n/lib64/libperl.so.5.36\n#3 0x00007fbbc128259d in perl_run () from /lib64/libperl.so.5.36\n#4 0x000056392bd9034a in main ()\n\nIt is waiting on reading from pipe probably\n\npsql is living too, and it is waiting too\n\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\n0x00007f071740bc37 in wait4 () from /lib64/libc.so.6\nMissing separate debuginfos, use: dnf debuginfo-install\nglibc-2.37-4.fc38.x86_64 ncurses-libs-6.4-3.20230114.fc38.x86_64\nreadline-8.2-3.fc38.x86_64\n(gdb) bt\n#0 0x00007f071740bc37 in wait4 () from /lib64/libc.so.6\n#1 0x00007f07173a9a10 in _IO_proc_close@@GLIBC_2.2.5 () from\n/lib64/libc.so.6\n#2 0x00007f07173b51e9 in __GI__IO_file_close_it () from /lib64/libc.so.6\n#3 0x00007f07173a79fb in fclose@@GLIBC_2.2.5 () from /lib64/libc.so.6\n#4 0x0000000000406be4 in do_watch (query_buf=query_buf@entry=0x5ae540,\nsleep=sleep@entry=0.01, iter=0, iter@entry=3) at command.c:5348\n#5 0x00000000004087a5 in exec_command_watch\n(scan_state=scan_state@entry=0x5ae490,\nactive_branch=active_branch@entry=true, query_buf=query_buf@entry=0x5ae540,\nprevious_buf=previous_buf@entry=0x5ae560) at command.c:2875\n#6 0x000000000040d4ba in exec_command (previous_buf=0x5ae560,\nquery_buf=0x5ae540, cstack=0x5ae520, scan_state=0x5ae490, cmd=0x5ae9a0\n\"watch\") at command.c:413\n#7 HandleSlashCmds (scan_state=scan_state@entry=0x5ae490,\ncstack=cstack@entry=0x5ae520, query_buf=0x5ae540, 
previous_buf=0x5ae560) at\ncommand.c:230\n\nI am not sure, it is still doesn't work but probably there are some\ndependencies on my setting\n\nPSQL_PAGER and PSQL_WATCH_PAGER\n\nso this tests fails due my setting\n\n[pavel@localhost postgresql.master]$ set |grep PSQL\nPSQL_PAGER='pspg -X'\nPSQL_WATCH_PAGER='pspg -X --stream'\n\nRegards\n\nPavel\n\n\n\n\n\n\n\n\n\n\n>\n> I would be curious now about the details of your perl install, and your\n> perl libraries...\n>\n>\n>\n>\n\npá 12. 5. 2023 v 8:20 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:On Fri, May 12, 2023 at 1:46 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:pá 12. 5. 2023 v 6:50 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:On Fri, May 12, 2023 at 12:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Kirk Wolak <wolakk@gmail.com> writes:\n> Did you try the print statement that Andrey asked Pavel to try?...The strange thing is hanging. Broken tests depending on locale are usual. But I didn't remember hanging. RegardsPavelSo, if you do psql -c \"...\"with both of those \\watch  instructions, do either one hang? (I am now guessing \"no\")I know that perl is using a special library to \"remote control psql\" (like a pseudo terminal, I guess).[I had to abort some of the perl testing in Windows because that perl library didn't work with my psql in Windows]Next, can you detect which process is hanging? 
(is it perl, the library, psql, ?).It hangs in perlbut now I found there is dependency on PSQL_PAGER settingit started pager in background, I had lot of zombie pspg processes Unfortunately, when I unset this variable, the test hangs stillhere is backtraceMissing separate debuginfos, use: dnf debuginfo-install perl-interpreter-5.36.1-496.fc38.x86_64(gdb) bt#0  0x00007fbbc1129ade in select () from /lib64/libc.so.6#1  0x00007fbbc137363b in Perl_pp_sselect () from /lib64/libperl.so.5.36#2  0x00007fbbc1317958 in Perl_runops_standard () from /lib64/libperl.so.5.36#3  0x00007fbbc128259d in perl_run () from /lib64/libperl.so.5.36#4  0x000056392bd9034a in main ()It is waiting on reading from pipe probablypsql is living too, and it is waiting tooUsing host libthread_db library \"/lib64/libthread_db.so.1\".0x00007f071740bc37 in wait4 () from /lib64/libc.so.6Missing separate debuginfos, use: dnf debuginfo-install glibc-2.37-4.fc38.x86_64 ncurses-libs-6.4-3.20230114.fc38.x86_64 readline-8.2-3.fc38.x86_64(gdb) bt#0  0x00007f071740bc37 in wait4 () from /lib64/libc.so.6#1  0x00007f07173a9a10 in _IO_proc_close@@GLIBC_2.2.5 () from /lib64/libc.so.6#2  0x00007f07173b51e9 in __GI__IO_file_close_it () from /lib64/libc.so.6#3  0x00007f07173a79fb in fclose@@GLIBC_2.2.5 () from /lib64/libc.so.6#4  0x0000000000406be4 in do_watch (query_buf=query_buf@entry=0x5ae540, sleep=sleep@entry=0.01, iter=0, iter@entry=3) at command.c:5348#5  0x00000000004087a5 in exec_command_watch (scan_state=scan_state@entry=0x5ae490, active_branch=active_branch@entry=true, query_buf=query_buf@entry=0x5ae540, previous_buf=previous_buf@entry=0x5ae560) at command.c:2875#6  0x000000000040d4ba in exec_command (previous_buf=0x5ae560, query_buf=0x5ae540, cstack=0x5ae520, scan_state=0x5ae490, cmd=0x5ae9a0 \"watch\") at command.c:413#7  HandleSlashCmds (scan_state=scan_state@entry=0x5ae490, cstack=cstack@entry=0x5ae520, query_buf=0x5ae540, previous_buf=0x5ae560) at command.c:230I am not sure, it is still doesn't work but 
probably there are some dependencies on my settingPSQL_PAGER and PSQL_WATCH_PAGERso this tests fails due my setting[pavel@localhost postgresql.master]$ set |grep PSQLPSQL_PAGER='pspg -X'PSQL_WATCH_PAGER='pspg -X --stream'RegardsPavel I would be curious now about the details of your perl install, and your perl libraries...", "msg_date": "Fri, 12 May 2023 08:40:19 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "On Fri, May 12, 2023 at 2:40 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> pá 12. 5. 2023 v 8:20 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n>\n>> On Fri, May 12, 2023 at 1:46 AM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>> pá 12. 5. 2023 v 6:50 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n>>>\n>>>> On Fri, May 12, 2023 at 12:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>>\n>>>>> Kirk Wolak <wolakk@gmail.com> writes:\n>>>>> > Did you try the print statement that Andrey asked Pavel to try?\n>>>>> ...\n>>>>\n>>>>\n>>> The strange thing is hanging. Broken tests depending on locale are\n>>> usual. But I didn't remember hanging.\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>\n>> So, if you do psql -c \"...\"\n>> with both of those \\watch instructions, do either one hang? (I am now\n>> guessing \"no\")\n>>\n>> I know that perl is using a special library to \"remote control psql\"\n>> (like a pseudo terminal, I guess).\n>> [I had to abort some of the perl testing in Windows because that perl\n>> library didn't work with my psql in Windows]\n>>\n>> Next, can you detect which process is hanging? 
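Pavel's finding above points at the environment: a pager configured via PSQL_PAGER / PSQL_WATCH_PAGER (here pspg) can sit between the test driver and psql's stdout and wedge the run. One defensive pattern, sketched in Python purely for illustration (the real TAP harness is Perl, and this is not PostgreSQL's actual fix), is to scrub pager variables from the child environment before spawning psql:

```python
import os

def psql_test_env(base=None):
    # Drop pager-related variables so an interactive pager (like pspg,
    # launched via PSQL_WATCH_PAGER) cannot capture the child's stdout
    # during a test run and make it hang.
    drop = {"PAGER", "PSQL_PAGER", "PSQL_WATCH_PAGER"}
    env = dict(os.environ if base is None else base)
    for var in drop:
        env.pop(var, None)
    return env

env = psql_test_env({"PATH": "/usr/bin",
                     "PSQL_PAGER": "pspg -X",
                     "PSQL_WATCH_PAGER": "pspg -X --stream"})
assert "PSQL_PAGER" not in env and "PSQL_WATCH_PAGER" not in env
assert env["PATH"] == "/usr/bin"
```

Passing such a scrubbed dict to the process spawner (e.g. subprocess's env= argument) keeps the developer's interactive pager preferences from leaking into the test.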
(is it perl, the library,\n>> psql, ?).\n>>\n>\n> It hangs in perl\n>\n> but now I found there is dependency on PSQL_PAGER setting\n>\n> it started pager in background, I had lot of zombie pspg processes\n>\n> Unfortunately, when I unset this variable, the test hangs still\n>\n> here is backtrace\n>\n> Missing separate debuginfos, use: dnf debuginfo-install\n> perl-interpreter-5.36.1-496.fc38.x86_64\n> (gdb) bt\n> #0 0x00007fbbc1129ade in select () from /lib64/libc.so.6\n> #1 0x00007fbbc137363b in Perl_pp_sselect () from /lib64/libperl.so.5.36\n> #2 0x00007fbbc1317958 in Perl_runops_standard () from\n> /lib64/libperl.so.5.36\n> #3 0x00007fbbc128259d in perl_run () from /lib64/libperl.so.5.36\n> #4 0x000056392bd9034a in main ()\n>\n> It is waiting on reading from pipe probably\n>\n> psql is living too, and it is waiting too\n>\n> Using host libthread_db library \"/lib64/libthread_db.so.1\".\n> 0x00007f071740bc37 in wait4 () from /lib64/libc.so.6\n> Missing separate debuginfos, use: dnf debuginfo-install\n> glibc-2.37-4.fc38.x86_64 ncurses-libs-6.4-3.20230114.fc38.x86_64\n> readline-8.2-3.fc38.x86_64\n> (gdb) bt\n> #0 0x00007f071740bc37 in wait4 () from /lib64/libc.so.6\n> #1 0x00007f07173a9a10 in _IO_proc_close@@GLIBC_2.2.5 () from\n> /lib64/libc.so.6\n> #2 0x00007f07173b51e9 in __GI__IO_file_close_it () from /lib64/libc.so.6\n> #3 0x00007f07173a79fb in fclose@@GLIBC_2.2.5 () from /lib64/libc.so.6\n> #4 0x0000000000406be4 in do_watch (query_buf=query_buf@entry=0x5ae540,\n> sleep=sleep@entry=0.01, iter=0, iter@entry=3) at command.c:5348\n> #5 0x00000000004087a5 in exec_command_watch (scan_state=scan_state@entry=0x5ae490,\n> active_branch=active_branch@entry=true, query_buf=query_buf@entry=0x5ae540,\n> previous_buf=previous_buf@entry=0x5ae560) at command.c:2875\n> #6 0x000000000040d4ba in exec_command (previous_buf=0x5ae560,\n> query_buf=0x5ae540, cstack=0x5ae520, scan_state=0x5ae490, cmd=0x5ae9a0\n> \"watch\") at command.c:413\n> #7 HandleSlashCmds 
(scan_state=scan_state@entry=0x5ae490,\n> cstack=cstack@entry=0x5ae520, query_buf=0x5ae540, previous_buf=0x5ae560)\n> at command.c:230\n>\n> I am not sure, it is still doesn't work but probably there are some\n> dependencies on my setting\n>\n> PSQL_PAGER and PSQL_WATCH_PAGER\n>\n> so this tests fails due my setting\n>\n> [pavel@localhost postgresql.master]$ set |grep PSQL\n> PSQL_PAGER='pspg -X'\n> PSQL_WATCH_PAGER='pspg -X --stream'\n>\n> Regards\n>\n> Pavel\n>\n>\nUmmm... We are testing PSQL \\watch and you potentially have a\nPSQL_WATCH_PAGER that is kicking in?\nBy chance does that attempt to read/process/understand the \\watch ?\nAlso, if it is interfering with the stream, that would explain it. The\nperl library is trying to \"control\" psql.\nIf it ends up talking to you instead... All bets are off, imo. I don't\nknow enough about PSQL_WATCH_PAGER.\n\nNow I would be curious if you changed the test from\nSELECT 1 \\watch c=3 0.01\n\nto\nSELECT 1 \\watch 0.01\n\nbecause that should work. Then I would test\nSELECT \\watch 0.01 c=3\n\nIf you are trying to parse the watch at all, that could break. Then your\ncode might be trying to \"complain\",\nand then that is screwing up the planned interaction (Just Guessing).\n\nKirk...", "msg_date": "Fri, 12 May 2023 03:00:05 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "On 2023-May-12, Pavel Stehule wrote:\n\n> It hangs in perl\n\nI wonder if \"hanging\" actually means that it interpreted the sleep time\nas a very large integer, so it's just sleeping for a long time.\n\nAbout the server locale, note that the ->new() call explicitly requests\nthe C locale -- it's only psql that is using the Czech locale.\nSupposedly, the Perl code should also be using the Czech locale, so the\nsprintf('%g') should be consistent with what psql \\watch expects.\nHowever, you cannot ask the server to be consistent with that -- say, if\nyou hypothetically tried to use \"to_char(9D99)\" and \\gset that to use as\n\\watch argument, it wouldn't work, because that'd use the server's C\nlocale, not Czech. (I know because I tried.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"", "msg_date": "Fri, 12 May 2023 09:46:51 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "pá 12. 5. 2023 v 9:46 odesílatel Alvaro Herrera <alvherre@alvh.no-ip.org>\nnapsal:\n\n> On 2023-May-12, Pavel Stehule wrote:\n>\n> > It hangs in perl\n>\n> I wonder if \"hanging\" actually means that it interpreted the sleep time\n> as a very large integer, so it's just sleeping for a long time.\n>\n\nThere is some interaction with pspg in stream mode\n\nThe probable scenario\n\nIt is starting pspg due to my setting PSQL_WATCH_PAGER. pspg is waiting on\nquit command, or on pipe ending. 
Quit command cannot to come because it is\nnot on tty, so it is dead lock\n\nI can write to safeguard the fast ending on pspg when it is in stream mode,\nand tty is not available.\n\nAnd generally, the root perl should to reset PSQL_WATCH_PAGER env variable\nbefore executing psql. Probably it does with PSQL_PAGER, and maybe with\nPAGER.\n\nRegards\n\nPavel\n\n\n>\n> About the server locale, note that the ->new() call explicitly requests\n> the C locale -- it's only psql that is using the Czech locale.\n> Supposedly, the Perl code should also be using the Czech locale, so the\n> sprintf('%g') should be consistent with what psql \\watch expects.\n> However, you cannot ask the server to be consistent with that -- say, if\n> you hypothetically tried to use \"to_char(9D99)\" and \\gset that to use as\n> \\watch argument, it wouldn't work, because that'd use the server's C\n> locale, not Czech. (I know because I tried.)\n>\n> --\n> Álvaro Herrera Breisgau, Deutschland —\n> https://www.EnterpriseDB.com/\n> \"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n>\n\npá 12. 5. 2023 v 9:46 odesílatel Alvaro Herrera <alvherre@alvh.no-ip.org> napsal:On 2023-May-12, Pavel Stehule wrote:\n\n> It hangs in perl\n\nI wonder if \"hanging\" actually means that it interpreted the sleep time\nas a very large integer, so it's just sleeping for a long time.There is some interaction with pspg in stream modeThe probable scenarioIt is starting pspg due to my setting PSQL_WATCH_PAGER. pspg is waiting on quit command, or on pipe ending. Quit command cannot to come because it is not on tty, so it is dead lockI can write to safeguard the fast ending on pspg when it is in stream mode, and tty is not available. And generally, the root perl should to reset PSQL_WATCH_PAGER env variable before executing psql. 
Probably it does with PSQL_PAGER, and maybe with PAGER.RegardsPavel \n\nAbout the server locale, note that the ->new() call explicitly requests\nthe C locale -- it's only psql that is using the Czech locale.\nSupposedly, the Perl code should also be using the Czech locale, so the\nsprintf('%g') should be consistent with what psql \\watch expects.\nHowever, you cannot ask the server to be consistent with that -- say, if\nyou hypothetically tried to use \"to_char(9D99)\" and \\gset that to use as\n\\watch argument, it wouldn't work, because that'd use the server's C\nlocale, not Czech.  (I know because I tried.)\n\n-- \nÁlvaro Herrera        Breisgau, Deutschland  —  https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"", "msg_date": "Fri, 12 May 2023 10:31:01 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "pá 12. 5. 2023 v 10:31 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> pá 12. 5. 2023 v 9:46 odesílatel Alvaro Herrera <alvherre@alvh.no-ip.org>\n> napsal:\n>\n>> On 2023-May-12, Pavel Stehule wrote:\n>>\n>> > It hangs in perl\n>>\n>> I wonder if \"hanging\" actually means that it interpreted the sleep time\n>> as a very large integer, so it's just sleeping for a long time.\n>>\n>\n> There is some interaction with pspg in stream mode\n>\n> The probable scenario\n>\n> It is starting pspg due to my setting PSQL_WATCH_PAGER. pspg is waiting on\n> quit command, or on pipe ending. Quit command cannot to come because it is\n> not on tty, so it is dead lock\n>\n> I can write to safeguard the fast ending on pspg when it is in stream\n> mode, and tty is not available.\n>\n> And generally, the root perl should to reset PSQL_WATCH_PAGER env variable\n> before executing psql. 
Probably it does with PSQL_PAGER, and maybe with\n> PAGER.\n>\n\nwith last change in pspg, this tests fails as \"expected\"\n\naster/src/bin/psql/../../../src/test/regress/pg_regress' /usr/bin/prove -I\n../../../src/test/perl/ -I . t/*.pl\n# +++ tap check in src/bin/psql +++\nt/001_basic.pl ........... 59/?\n# Failed test '\\watch with 3 iterations: no stderr'\n# at t/001_basic.pl line 356.\n# got: 'stream mode can be used only in interactive mode (tty is\nnot available)'\n# expected: ''\n\n# Failed test '\\watch with 3 iterations: matches'\n# at t/001_basic.pl line 356.\n# ''\n# doesn't match '(?^l:1\\n1\\n1)'\n# Looks like you failed 2 tests of 80.\nt/001_basic.pl ........... Dubious, test returned 2 (wstat 512, 0x200)\nFailed 2/80 subtests\nt/010_tab_completion.pl .. ok\nt/020_cancel.pl .......... ok\n\nTest Summary Report\n-------------------\nt/001_basic.pl (Wstat: 512 (exited 2) Tests: 80 Failed: 2)\n Failed tests: 69-70\n Non-zero exit status: 2\nFiles=3, Tests=169, 7 wallclock secs ( 0.16 usr 0.03 sys + 3.31 cusr\n 1.31 csys = 4.81 CPU)\nResult: FAIL\nmake: *** [Makefile:87: check] Chyba 1\n\nRegards\n\nPavel\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> About the server locale, note that the ->new() call explicitly requests\n>> the C locale -- it's only psql that is using the Czech locale.\n>> Supposedly, the Perl code should also be using the Czech locale, so the\n>> sprintf('%g') should be consistent with what psql \\watch expects.\n>> However, you cannot ask the server to be consistent with that -- say, if\n>> you hypothetically tried to use \"to_char(9D99)\" and \\gset that to use as\n>> \\watch argument, it wouldn't work, because that'd use the server's C\n>> locale, not Czech. (I know because I tried.)\n>>\n>> --\n>> Álvaro Herrera Breisgau, Deutschland —\n>> https://www.EnterpriseDB.com/\n>> \"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n>>\n>\n\npá 12. 5. 
2023 v 10:31 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:pá 12. 5. 2023 v 9:46 odesílatel Alvaro Herrera <alvherre@alvh.no-ip.org> napsal:On 2023-May-12, Pavel Stehule wrote:\n\n> It hangs in perl\n\nI wonder if \"hanging\" actually means that it interpreted the sleep time\nas a very large integer, so it's just sleeping for a long time.There is some interaction with pspg in stream modeThe probable scenarioIt is starting pspg due to my setting PSQL_WATCH_PAGER. pspg is waiting on quit command, or on pipe ending. Quit command cannot to come because it is not on tty, so it is dead lockI can write to safeguard the fast ending on pspg when it is in stream mode, and tty is not available. And generally, the root perl should to reset PSQL_WATCH_PAGER env variable before executing psql. Probably it does with PSQL_PAGER, and maybe with PAGER.with last change in pspg, this tests fails as \"expected\"aster/src/bin/psql/../../../src/test/regress/pg_regress' /usr/bin/prove -I ../../../src/test/perl/ -I .  t/*.pl# +++ tap check in src/bin/psql +++t/001_basic.pl ........... 59/? #   Failed test '\\watch with 3 iterations: no stderr'#   at t/001_basic.pl line 356.#          got: 'stream mode can be used only in interactive mode (tty is not available)'#     expected: ''#   Failed test '\\watch with 3 iterations: matches'#   at t/001_basic.pl line 356.#                   ''#     doesn't match '(?^l:1\\n1\\n1)'# Looks like you failed 2 tests of 80.t/001_basic.pl ........... Dubious, test returned 2 (wstat 512, 0x200)Failed 2/80 subtests t/010_tab_completion.pl .. ok    t/020_cancel.pl .......... 
ok   Test Summary Report-------------------t/001_basic.pl         (Wstat: 512 (exited 2) Tests: 80 Failed: 2)  Failed tests:  69-70  Non-zero exit status: 2Files=3, Tests=169,  7 wallclock secs ( 0.16 usr  0.03 sys +  3.31 cusr  1.31 csys =  4.81 CPU)Result: FAILmake: *** [Makefile:87: check] Chyba 1RegardsPavelRegardsPavel \n\nAbout the server locale, note that the ->new() call explicitly requests\nthe C locale -- it's only psql that is using the Czech locale.\nSupposedly, the Perl code should also be using the Czech locale, so the\nsprintf('%g') should be consistent with what psql \\watch expects.\nHowever, you cannot ask the server to be consistent with that -- say, if\nyou hypothetically tried to use \"to_char(9D99)\" and \\gset that to use as\n\\watch argument, it wouldn't work, because that'd use the server's C\nlocale, not Czech.  (I know because I tried.)\n\n-- \nÁlvaro Herrera        Breisgau, Deutschland  —  https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"", "msg_date": "Fri, 12 May 2023 14:39:41 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> And generally, the root perl should to reset PSQL_WATCH_PAGER env variable\n> before executing psql. Probably it does with PSQL_PAGER, and maybe with\n> PAGER.\n\nOh! AFAICS, we don't do any of those things, but I agree it seems like\na good idea. Can you confirm that if you unset PSQL_WATCH_PAGER then\nthe test passes for you?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 May 2023 09:26:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "pá 12. 5. 
2023 v 15:26 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > And generally, the root perl should to reset PSQL_WATCH_PAGER env\n> variable\n> > before executing psql. Probably it does with PSQL_PAGER, and maybe with\n> > PAGER.\n>\n> Oh! AFAICS, we don't do any of those things, but I agree it seems like\n> a good idea. Can you confirm that if you unset PSQL_WATCH_PAGER then\n> the test passes for you?\n>\n\nyes, I tested it now, and unset PSQL_WATCH_PAGER fixed this issue.\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n\npá 12. 5. 2023 v 15:26 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> And generally, the root perl should to reset PSQL_WATCH_PAGER env variable\n> before executing psql. Probably it does with PSQL_PAGER, and maybe with\n> PAGER.\n\nOh!  AFAICS, we don't do any of those things, but I agree it seems like\na good idea.  Can you confirm that if you unset PSQL_WATCH_PAGER then\nthe test passes for you?yes, I tested it now, and unset  PSQL_WATCH_PAGER fixed this issue.RegardsPavel\n\n                        regards, tom lane", "msg_date": "Fri, 12 May 2023 17:18:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> pá 12. 5. 2023 v 15:26 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> Oh! AFAICS, we don't do any of those things, but I agree it seems like\n>> a good idea. Can you confirm that if you unset PSQL_WATCH_PAGER then\n>> the test passes for you?\n\n> yes, I tested it now, and unset PSQL_WATCH_PAGER fixed this issue.\n\nOK. 
So after looking at this a bit, the reason PAGER and PSQL_PAGER\ndon't cause us any problems in the test environment is that they are\nnot honored unless isatty(fileno(stdin)) && isatty(fileno(stdout)).\nIt seems to me that it's a bug that there is no such check before\nusing PSQL_WATCH_PAGER. Is there actually any defensible reason\nfor that?\n\nI think we do need to clear out all three variables in\nCluster::interactive_psql. But our regular psql tests shouldn't\nbe at risk here.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 12 May 2023 11:50:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "pá 12. 5. 2023 v 17:50 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > pá 12. 5. 2023 v 15:26 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> Oh! AFAICS, we don't do any of those things, but I agree it seems like\n> >> a good idea. Can you confirm that if you unset PSQL_WATCH_PAGER then\n> >> the test passes for you?\n>\n> > yes, I tested it now, and unset PSQL_WATCH_PAGER fixed this issue.\n>\n> OK. So after looking at this a bit, the reason PAGER and PSQL_PAGER\n> don't cause us any problems in the test environment is that they are\n> not honored unless isatty(fileno(stdin)) && isatty(fileno(stdout)).\n> It seems to me that it's a bug that there is no such check before\n> using PSQL_WATCH_PAGER. Is there actually any defensible reason\n> for that?\n>\n\nTheoretically, we can write tests for these features, and then stdout,\nstdin may not be tty.\n\nExcept for testing, using pager in non-interactive mode makes no sense.\n\nRegards\n\nPavel\n\n\n\n> I think we do need to clear out all three variables in\n> Cluster::interactive_psql. But our regular psql tests shouldn't\n> be at risk here.\n>\n> regards, tom lane\n>", "msg_date": "Fri, 12 May 2023 18:12:56 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> pá 12. 5. 2023 v 17:50 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> OK. So after looking at this a bit, the reason PAGER and PSQL_PAGER\n>> don't cause us any problems in the test environment is that they are\n>> not honored unless isatty(fileno(stdin)) && isatty(fileno(stdout)).\n>> It seems to me that it's a bug that there is no such check before\n>> using PSQL_WATCH_PAGER. 
Is there actually any defensible reason\n>> for that?\n\n> Theoretically, we can write tests for these features, and then stdout,\n> stdin may not be tty.\n\nWell, you'd test using pty's, so that psql thinks it's talking to a\nterminal. That's what we're doing now to test tab completion,\nfor example.\n\n> Except for testing, using pager in non-interactive mode makes no sense.\n\nAgreed. Let's solve this by inserting isatty tests in psql, rather\nthan hacking the test environment.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 12 May 2023 13:28:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "I wrote:\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> Except for testing, using pager in non-interactive mode makes no sense.\n\n> Agreed. Let's solve this by inserting isatty tests in psql, rather\n> than hacking the test environment.\n\nHere's a proposed patch for this. I noticed that another memo the\nPSQL_WATCH_PAGER patch had not gotten was the lesson learned in\ncommit 18f8f784c, namely that it's a good idea to ignore empty\nor all-blank settings.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 12 May 2023 15:08:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql tests hangs" }, { "msg_contents": "pá 12. 5. 2023 v 21:08 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> I wrote:\n> > Pavel Stehule <pavel.stehule@gmail.com> writes:\n> >> Except for testing, using pager in non-interactive mode makes no sense.\n>\n> > Agreed. Let's solve this by inserting isatty tests in psql, rather\n> > than hacking the test environment.\n>\n> Here's a proposed patch for this. I noticed that another memo the\n> PSQL_WATCH_PAGER patch had not gotten was the lesson learned in\n> commit 18f8f784c, namely that it's a good idea to ignore empty\n> or all-blank settings.\n>\n\n+1\n\nPavel\n\n\n> regards, tom lane\n>\n>", "msg_date": "Fri, 12 May 2023 21:17:00 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tests hangs" } ]
[ { "msg_contents": "Where in the code is written the mechanism used for isolation when drop\ntable is executed in a transaction\n\nThanks for your help\n\nFabrice", "msg_date": "Tue, 9 May 2023 12:42:29 +0200", "msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>", "msg_from_op": true, "msg_subject": "drop table in transaction" }, { "msg_contents": "On Tue, May 9, 2023, at 7:42 AM, Fabrice Chapuis wrote:\n> Where in the code is written the mechanism used for isolation when drop table is executed in a transaction \n\nRemoveRelations() in src/backend/commands/tablecmds.c\n\nIf you are looking for a previous layer, check ExecDropStmt().\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Tue, 09 May 2023 09:25:00 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: drop table in transaction" } ]
[ { "msg_contents": "Hi,\r\n\r\nThe release date for PostgreSQL 16 Beta 1 is scheduled for May 25, 2023.\r\n\r\nPlease ensure you have committed any work for Beta 1 released committed \r\nby May 21, 2023 AoE.\r\n\r\nThank you for your efforts with resolving open items[2] as we work to\r\nstabilize PostgreSQL 16 for GA!\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth\r\n[2] https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items", "msg_date": "Tue, 9 May 2023 10:19:46 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 16 Beta 1 release date" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> The release date for PostgreSQL 16 Beta 1 is scheduled for May 25, 2023.\n\n> Please ensure you have committed any work for Beta 1 released committed \n> by May 21, 2023 AoE.\n\nBTW, pursuant to this schedule I intend to run renumber_oids.pl, pgindent,\npgperltidy, etc (cf [1]) this Friday, May 19. Any large patches that\naren't in by then risk merge problems.\n\n\t\t\tregards, tom lane\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/tools/RELEASE_CHANGES;h=73b02fa2a4007c95d598c0d647294d2e0ef7a211;hb=HEAD#l64\n\n\n", "msg_date": "Wed, 17 May 2023 15:50:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 16 Beta 1 release date" } ]
[ { "msg_contents": "Hi,\n\nUnfortunately I have found the following commit to have caused a performance\nregression:\n\ncommit e101dfac3a53c20bfbf1ca85d30a368c2954facf\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2023-04-08 00:24:24 -0700\n\n For cascading replication, wake physical and logical walsenders separately\n\n Physical walsenders can't send data until it's been flushed; logical\n walsenders can't decode and send data until it's been applied. On the\n standby, the WAL is flushed first, which will only wake up physical\n walsenders; and then applied, which will only wake up logical\n walsenders.\n\n Previously, all walsenders were awakened when the WAL was flushed. That\n was fine for logical walsenders on the primary; but on the standby the\n flushed WAL would have been not applied yet, so logical walsenders were\n awakened too early.\n\n Per idea from Jeff Davis and Amit Kapila.\n\n Author: \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>\n Reviewed-By: Jeff Davis <pgsql@j-davis.com>\n Reviewed-By: Robert Haas <robertmhaas@gmail.com>\n Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>\n Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>\n Discussion: https://postgr.es/m/CAA4eK1+zO5LUeisabX10c81LU-fWMKO4M9Wyg1cdkbW7Hqh6vQ@mail.gmail.com\n\nThe problem is that, on a standby, after the change - as needed to for the\napproach to work - the call to WalSndWakeup() in ApplyWalRecord() happens for\nevery record, instead of only happening when the timeline is changed (or WAL\nis flushed or ...).\n\nWalSndWakeup() iterates over all walsender slots, regardless of whether in\nuse. For each of the walsender slots it acquires a spinlock.\n\nWhen replaying a lot of small-ish WAL records I found the startup process to\nspend the majority of the time in WalSndWakeup(). I've not measured it very\nprecisely yet, but the overhead is significant (~35% slowdown), even with the\ndefault max_wal_senders. 
If that's increased substantially, it obviously gets\nworse.\n\nThe only saving grace is that this is not an issue on the primary.\n\n\nI unfortunately spent less time on this commit of the\nlogical-decoding-on-standby series than on the others. There were so many\nother senior contributors discussing it, that I \"relaxed\" a bit too much.\n\n\nI don't think the approach of not having any sort of \"registry\" of whether\nanybody is waiting for the replay position to be updated is\nfeasible. Iterating over all walsenders slots is just too expensive -\nWalSndWakeup() shows up even if I remove the spinlock (which we likely could,\nespecially when just checking if the the walsender is connected).\n\nMy current guess is that mis-using a condition variable is the best bet. I\nthink it should work to use ConditionVariablePrepareToSleep() before a\nWalSndWait(), and then ConditionVariableCancelSleep(). I.e. to never use\nConditionVariableSleep(). The latch set from ConditionVariableBroadcast()\nwould still cause the necessary wakeup.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 May 2023 12:02:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "walsender performance regression due to logical decoding on standby\n changes" }, { "msg_contents": "On Tue, 2023-05-09 at 12:02 -0700, Andres Freund wrote:\n> I don't think the approach of not having any sort of \"registry\" of\n> whether\n> anybody is waiting for the replay position to be updated is\n> feasible. 
Iterating over all walsenders slots is just too expensive -\n\nWould it work to use a shared counter for the waiters (or, two\ncounters, one for physical and one for logical), and just early exit if\nthe count is zero?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 09 May 2023 13:38:24 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn 2023-05-09 13:38:24 -0700, Jeff Davis wrote:\n> On Tue, 2023-05-09 at 12:02 -0700, Andres Freund wrote:\n> > I don't think the approach of not having any sort of \"registry\" of\n> > whether\n> > anybody is waiting for the replay position to be updated is\n> > feasible. Iterating over all walsenders slots is just too expensive -\n> \n> Would it work to use a shared counter for the waiters (or, two\n> counters, one for physical and one for logical), and just early exit if\n> the count is zero?\n\nThat doesn't really fix the problem - once you have a single walsender\nconnected, performance is bad again.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 May 2023 14:00:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Wed, May 10, 2023 at 12:33 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Unfortunately I have found the following commit to have caused a performance\n> regression:\n>\n> commit e101dfac3a53c20bfbf1ca85d30a368c2954facf\n>\n> The problem is that, on a standby, after the change - as needed to for the\n> approach to work - the call to WalSndWakeup() in ApplyWalRecord() happens for\n> every record, instead of only happening when the timeline is changed (or WAL\n> is flushed or ...).\n>\n> WalSndWakeup() iterates over all walsender slots, regardless of whether in\n> use. 
For each of the walsender slots it acquires a spinlock.\n>\n> When replaying a lot of small-ish WAL records I found the startup process to\n> spend the majority of the time in WalSndWakeup(). I've not measured it very\n> precisely yet, but the overhead is significant (~35% slowdown), even with the\n> default max_wal_senders. If that's increased substantially, it obviously gets\n> worse.\n\nI played it with a simple primary -> standby1 -> standby2 setup. I ran\na pgbench script [1] on primary and counted the number of times\nWalSndWakeup() gets called from ApplyWalRecord() and the number of\ntimes spinlock is acquired/released in WalSndWakeup(). It's a whopping\n21 million times spinlock is acquired/released on the standby 1 and\nstandby 2 for just a < 5min of pgbench run on the primary:\n\nstandby 1:\n2023-05-10 05:32:43.249 UTC [1595600] LOG: FOO WalSndWakeup() in\nApplyWalRecord() was called 2176352 times\n2023-05-10 05:32:43.249 UTC [1595600] LOG: FOO spinlock\nacquisition/release count in WalSndWakeup() is 21763530\n\nstandby 2:\n2023-05-10 05:32:43.249 UTC [1595625] LOG: FOO WalSndWakeup() in\nApplyWalRecord() was called 2176352 times\n2023-05-10 05:32:43.249 UTC [1595625] LOG: FOO spinlock\nacquisition/release count in WalSndWakeup() is 21763530\n\nIn this case, there is no timeline switch or no logical decoding on\nthe standby or such, but WalSndWakeup() gets called because the\nstandby can't make out if the slot is for logical or physical\nreplication unless spinlock is acquired. Before e101dfac3a,\nWalSndWakeup() was getting called only when there was a timeline\nswitch.\n\n> The only saving grace is that this is not an issue on the primary.\n\nYeah.\n\n> I don't think the approach of not having any sort of \"registry\" of whether\n> anybody is waiting for the replay position to be updated is\n> feasible. 
Iterating over all walsenders slots is just too expensive -\n> WalSndWakeup() shows up even if I remove the spinlock (which we likely could,\n> especially when just checking if the the walsender is connected).\n\nRight.\n\n> My current guess is that mis-using a condition variable is the best bet. I\n> think it should work to use ConditionVariablePrepareToSleep() before a\n> WalSndWait(), and then ConditionVariableCancelSleep(). I.e. to never use\n> ConditionVariableSleep(). The latch set from ConditionVariableBroadcast()\n> would still cause the necessary wakeup.\n\nHow about something like the attached? Recovery and subscription tests\ndon't complain with the patch.\n\n[1]\n./pg_ctl -D data -l logfile stop\n./pg_ctl -D sbdata1 -l logfile1 stop\n./pg_ctl -D sbdata2 -l logfile2 stop\n\ncd /home/ubuntu/postgres\nrm -rf inst\nmake distclean\n./configure --prefix=$PWD/inst/ CFLAGS=\"-O3\" > install.log && make -j\n8 install > install.log 2>&1 &\n\nprimary -> standby1 -> standby2\n\nfree -m\nsudo su -c 'sync; echo 3 > /proc/sys/vm/drop_caches'\nfree -m\n\ncd inst/bin\n./initdb -D data\nmkdir archived_wal\n\ncat << EOF >> data/postgresql.conf\nshared_buffers = '8GB'\nwal_buffers = '1GB'\nmax_wal_size = '16GB'\nmax_connections = '5000'\narchive_mode = 'on'\narchive_command='cp %p /home/ubuntu/postgres/inst/bin/archived_wal/%f'\nEOF\n\n./pg_ctl -D data -l logfile start\n./pg_basebackup -D sbdata1\n./psql -c \"select pg_create_physical_replication_slot('sb1_slot',\ntrue, false)\" postgres\n\ncat << EOF >> sbdata1/postgresql.conf\nport=5433\nprimary_conninfo='host=localhost port=5432 dbname=postgres user=ubuntu\napplication_name=sb1'\nprimary_slot_name='sb1_slot'\nrestore_command='cp /home/ubuntu/postgres/inst/bin/archived_wal/%f %p'\nEOF\n\ntouch sbdata1/standby.signal\n\n./pg_ctl -D sbdata1 -l logfile1 start\n./pg_basebackup -D sbdata2 -p 5433\n./psql -p 5433 -c \"select\npg_create_physical_replication_slot('sb2_slot', true, false)\" postgres\n\ncat << EOF >> 
sbdata2/postgresql.conf\nport=5434\nprimary_conninfo='host=localhost port=5433 dbname=postgres user=ubuntu\napplication_name=sb2'\nprimary_slot_name='sb2_slot'\nrestore_command='cp /home/ubuntu/postgres/inst/bin/archived_wal/%f %p'\nEOF\n\ntouch sbdata2/standby.signal\n\n./pg_ctl -D sbdata2 -l logfile2 start\n./psql -c \"select pid, backend_type from pg_stat_activity where\nbackend_type in ('walreceiver', 'walsender');\" postgres\n./psql -p 5433 -c \"select pid, backend_type from pg_stat_activity\nwhere backend_type in ('walreceiver', 'walsender');\" postgres\n./psql -p 5434 -c \"select pid, backend_type from pg_stat_activity\nwhere backend_type in ('walreceiver', 'walsender');\" postgres\n\nulimit -S -n 5000\n./pgbench --initialize --scale=300 postgres\nfor c in 1 2 4 8 16 32 64 128 256 512 768 1024 2048 4096; do echo -n\n\"$c \";./pgbench -n -M prepared -U ubuntu postgres -b tpcb-like -c$c\n-j$c -T5 2>&1|grep '^tps'|awk '{print $3}';done\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 10 May 2023 12:06:20 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn 5/9/23 11:00 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2023-05-09 13:38:24 -0700, Jeff Davis wrote:\n>> On Tue, 2023-05-09 at 12:02 -0700, Andres Freund wrote:\n>>> I don't think the approach of not having any sort of \"registry\" of\n>>> whether\n>>> anybody is waiting for the replay position to be updated is\n>>> feasible. 
Iterating over all walsenders slots is just too expensive -\n>>\n>> Would it work to use a shared counter for the waiters (or, two\n>> counters, one for physical and one for logical), and just early exit if\n>> the count is zero?\n> \n> That doesn't really fix the problem - once you have a single walsender\n> connected, performance is bad again.\n> \n\nJust to clarify, do you mean that if there is only one remaining active walsender that, say,\nhas been located at slot n, then we’d still have to loop from 0 to n in WalSndWakeup()?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 May 2023 08:39:08 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn 5/10/23 8:36 AM, Bharath Rupireddy wrote:\n> On Wed, May 10, 2023 at 12:33 AM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> Unfortunately I have found the following commit to have caused a performance\n>> regression:\n>>\n>> commit e101dfac3a53c20bfbf1ca85d30a368c2954facf\n>>\n>> The problem is that, on a standby, after the change - as needed to for the\n>> approach to work - the call to WalSndWakeup() in ApplyWalRecord() happens for\n>> every record, instead of only happening when the timeline is changed (or WAL\n>> is flushed or ...).\n>>\n>> WalSndWakeup() iterates over all walsender slots, regardless of whether in\n>> use. For each of the walsender slots it acquires a spinlock.\n>>\n>> When replaying a lot of small-ish WAL records I found the startup process to\n>> spend the majority of the time in WalSndWakeup(). I've not measured it very\n>> precisely yet, but the overhead is significant (~35% slowdown), even with the\n>> default max_wal_senders. 
If that's increased substantially, it obviously gets\n>> worse.\n> \n\nThanks Andres for the call out! I do agree that this is a concern.\n\n>> The only saving grace is that this is not an issue on the primary.\n> \n> Yeah.\n\n+1\n\n> \n>> My current guess is that mis-using a condition variable is the best bet. I\n>> think it should work to use ConditionVariablePrepareToSleep() before a\n>> WalSndWait(), and then ConditionVariableCancelSleep(). I.e. to never use\n>> ConditionVariableSleep(). The latch set from ConditionVariableBroadcast()\n>> would still cause the necessary wakeup.\n\nYeah, I think that \"mis-using\" a condition variable is a valid option. Unless I'm missing\nsomething, I don't think there is anything wrong with using a CV that way (aka not using\nConditionVariableTimedSleep() or ConditionVariableSleep() in this particular case).\n\n> \n> How about something like the attached? Recovery and subscription tests\n> don't complain with the patch.\n\nThanks Bharath for looking at it!\n\nI launched a full Cirrus CI test with it but it failed on one environment (did not look in details,\njust sharing this here): https://cirrus-ci.com/task/6570140767092736\n\nAlso I have a few comments:\n\n@@ -1958,7 +1959,7 @@ ApplyWalRecord(XLogReaderState *xlogreader, XLogRecord *record, TimeLineID *repl\n * ------\n */\n if (AllowCascadeReplication())\n- WalSndWakeup(switchedTLI, true);\n+ ConditionVariableBroadcast(&WalSndCtl->cv);\n\nI think the comment above this change needs to be updated.\n\n\n@@ -3368,9 +3370,13 @@ WalSndWait(uint32 socket_events, long timeout, uint32 wait_event)\n WaitEvent event;\n\n ModifyWaitEvent(FeBeWaitSet, FeBeWaitSetSocketPos, socket_events, NULL);\n+\n+ ConditionVariablePrepareToSleep(&WalSndCtl->cv);\n if (WaitEventSetWait(FeBeWaitSet, timeout, &event, 1, wait_event) == 1 &&\n (event.events & WL_POSTMASTER_DEATH))\n proc_exit(1);\n+\n+ ConditionVariableCancelSleep();\n\nMay be worth to update the comment above WalSndWait() too? 
(to explain why a CV handling is part of it).\n\n\n@@ -108,6 +109,8 @@ typedef struct\n */\n bool sync_standbys_defined;\n\n+ ConditionVariable cv;\n\nWorth to give it a more meaningful name? (to give a clue as to when it is used)\n\n\nI think we also need to update the comment above WalSndWakeup():\n\n\"\n * For cascading replication we need to wake up physical walsenders separately\n * from logical walsenders (see the comment before calling WalSndWakeup() in\n * ApplyWalRecord() for more details).\n\"\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 May 2023 11:52:35 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "\r\nOn Wednesday, May 10, 2023 2:36 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> \r\n> On Wed, May 10, 2023 at 12:33 AM Andres Freund <andres@anarazel.de>\r\n> wrote:\r\n> >\r\n> > Unfortunately I have found the following commit to have caused a\r\n> performance\r\n> > regression:\r\n> >\r\n> > commit e101dfac3a53c20bfbf1ca85d30a368c2954facf\r\n> >\r\n> > The problem is that, on a standby, after the change - as needed to for the\r\n> > approach to work - the call to WalSndWakeup() in ApplyWalRecord()\r\n> happens for\r\n> > every record, instead of only happening when the timeline is changed (or\r\n> WAL\r\n> > is flushed or ...).\r\n> >\r\n> > WalSndWakeup() iterates over all walsender slots, regardless of whether in\r\n> > use. For each of the walsender slots it acquires a spinlock.\r\n> >\r\n> > When replaying a lot of small-ish WAL records I found the startup process to\r\n> > spend the majority of the time in WalSndWakeup(). 
I've not measured it very\r\n> > precisely yet, but the overhead is significant (~35% slowdown), even with the\r\n> > default max_wal_senders. If that's increased substantially, it obviously gets\r\n> > worse.\r\n> \r\n> I played it with a simple primary -> standby1 -> standby2 setup. I ran\r\n> a pgbench script [1] on primary and counted the number of times\r\n> WalSndWakeup() gets called from ApplyWalRecord() and the number of\r\n> times spinlock is acquired/released in WalSndWakeup(). It's a whopping\r\n> 21 million times spinlock is acquired/released on the standby 1 and\r\n> standby 2 for just a < 5min of pgbench run on the primary:\r\n> \r\n> standby 1:\r\n> 2023-05-10 05:32:43.249 UTC [1595600] LOG: FOO WalSndWakeup() in\r\n> ApplyWalRecord() was called 2176352 times\r\n> 2023-05-10 05:32:43.249 UTC [1595600] LOG: FOO spinlock\r\n> acquisition/release count in WalSndWakeup() is 21763530\r\n> \r\n> standby 2:\r\n> 2023-05-10 05:32:43.249 UTC [1595625] LOG: FOO WalSndWakeup() in\r\n> ApplyWalRecord() was called 2176352 times\r\n> 2023-05-10 05:32:43.249 UTC [1595625] LOG: FOO spinlock\r\n> acquisition/release count in WalSndWakeup() is 21763530\r\n> \r\n> In this case, there is no timeline switch or no logical decoding on\r\n> the standby or such, but WalSndWakeup() gets called because the\r\n> standby can't make out if the slot is for logical or physical\r\n> replication unless spinlock is acquired. Before e101dfac3a,\r\n> WalSndWakeup() was getting called only when there was a timeline\r\n> switch.\r\n> \r\n> > The only saving grace is that this is not an issue on the primary.\r\n> \r\n> Yeah.\r\n> \r\n> > I don't think the approach of not having any sort of \"registry\" of whether\r\n> > anybody is waiting for the replay position to be updated is\r\n> > feasible. 
Iterating over all walsenders slots is just too expensive -\r\n> > WalSndWakeup() shows up even if I remove the spinlock (which we likely\r\n> could,\r\n> > especially when just checking if the the walsender is connected).\r\n> \r\n> Right.\r\n> \r\n> > My current guess is that mis-using a condition variable is the best bet. I\r\n> > think it should work to use ConditionVariablePrepareToSleep() before a\r\n> > WalSndWait(), and then ConditionVariableCancelSleep(). I.e. to never use\r\n> > ConditionVariableSleep(). The latch set from ConditionVariableBroadcast()\r\n> > would still cause the necessary wakeup.\r\n> \r\n> How about something like the attached? Recovery and subscription tests\r\n> don't complain with the patch.\r\n\r\nThanks for the patch. I noticed one place where the logic is different from before and\r\njust to confirm:\r\n\r\n\tif (AllowCascadeReplication())\r\n-\t\tWalSndWakeup(switchedTLI, true);\r\n+\t\tConditionVariableBroadcast(&WalSndCtl->cv);\r\n\r\nAfter the change, we wakeup physical walsender regardless of switchedTLI flag.\r\nIs this intentional ? if so, I think It would be better to update the comments above this.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Wed, 10 May 2023 10:11:26 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Wed, May 10, 2023 at 3:41 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, May 10, 2023 2:36 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> >\n> > > My current guess is that mis-using a condition variable is the best bet. I\n> > > think it should work to use ConditionVariablePrepareToSleep() before a\n> > > WalSndWait(), and then ConditionVariableCancelSleep(). I.e. to never use\n> > > ConditionVariableSleep(). 
The latch set from ConditionVariableBroadcast()\n> > > would still cause the necessary wakeup.\n> >\n> > How about something like the attached? Recovery and subscription tests\n> > don't complain with the patch.\n>\n> Thanks for the patch. I noticed one place where the logic is different from before and\n> just to confirm:\n>\n> if (AllowCascadeReplication())\n> - WalSndWakeup(switchedTLI, true);\n> + ConditionVariableBroadcast(&WalSndCtl->cv);\n>\n> After the change, we wakeup physical walsender regardless of switchedTLI flag.\n> Is this intentional ? if so, I think It would be better to update the comments above this.\n>\n\nThis raises the question of whether we need this condition variable\nlogic for physical walsenders?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 10 May 2023 16:03:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Wed, May 10, 2023 at 7:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 10, 2023 at 3:41 PM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Wednesday, May 10, 2023 2:36 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > >\n> > > > My current guess is that mis-using a condition variable is the best bet. I\n> > > > think it should work to use ConditionVariablePrepareToSleep() before a\n> > > > WalSndWait(), and then ConditionVariableCancelSleep(). I.e. to never use\n> > > > ConditionVariableSleep(). The latch set from ConditionVariableBroadcast()\n> > > > would still cause the necessary wakeup.\n> > >\n> > > How about something like the attached? Recovery and subscription tests\n> > > don't complain with the patch.\n> >\n> > Thanks for the patch. 
I noticed one place where the logic is different from before and\n> > just to confirm:\n> >\n> > if (AllowCascadeReplication())\n> > - WalSndWakeup(switchedTLI, true);\n> > + ConditionVariableBroadcast(&WalSndCtl->cv);\n> >\n> > After the change, we wakeup physical walsender regardless of switchedTLI flag.\n> > Is this intentional ? if so, I think It would be better to update the comments above this.\n> >\n>\n> This raises the question of whether we need this condition variable\n> logic for physical walsenders?\n\nIt sounds like a good idea. We can have two condition variables for\nlogical and physical walsenders, and selectively wake up walsenders\nsleeping on the condition variables. It should work, it seems like\nmuch of a hack, though.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/2d314c22b9e03415aa1c7d8fd1f698dae60effa7.camel%40j-davis.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 11 May 2023 13:56:52 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Wednesday, May 10, 2023 3:03 AM Andres Freund <andres@anarazel.de> wrote:\r\n> Hi,\r\n> \r\n> Unfortunately I have found the following commit to have caused a\r\n> performance\r\n> regression:\r\n> \r\n> commit e101dfac3a53c20bfbf1ca85d30a368c2954facf\r\n> Author: Andres Freund <andres@anarazel.de>\r\n> Date: 2023-04-08 00:24:24 -0700\r\n> \r\n> For cascading replication, wake physical and logical walsenders separately\r\n> \r\n> Physical walsenders can't send data until it's been flushed; logical\r\n> walsenders can't decode and send data until it's been applied. 
On the\r\n> standby, the WAL is flushed first, which will only wake up physical\r\n> walsenders; and then applied, which will only wake up logical\r\n> walsenders.\r\n> \r\n> Previously, all walsenders were awakened when the WAL was flushed. That\r\n> was fine for logical walsenders on the primary; but on the standby the\r\n> flushed WAL would have been not applied yet, so logical walsenders were\r\n> awakened too early.\r\n> \r\n> Per idea from Jeff Davis and Amit Kapila.\r\n> \r\n> Author: \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>\r\n> Reviewed-By: Jeff Davis <pgsql@j-davis.com>\r\n> Reviewed-By: Robert Haas <robertmhaas@gmail.com>\r\n> Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>\r\n> Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> Discussion:\r\n> https://postgr.es/m/CAA4eK1+zO5LUeisabX10c81LU-fWMKO4M9Wyg1cdkb\r\n> W7Hqh6vQ@mail.gmail.com\r\n> \r\n> The problem is that, on a standby, after the change - as needed to for the\r\n> approach to work - the call to WalSndWakeup() in ApplyWalRecord() happens\r\n> for\r\n> every record, instead of only happening when the timeline is changed (or WAL\r\n> is flushed or ...).\r\n> \r\n> WalSndWakeup() iterates over all walsender slots, regardless of whether in\r\n> use. For each of the walsender slots it acquires a spinlock.\r\n> \r\n> When replaying a lot of small-ish WAL records I found the startup process to\r\n> spend the majority of the time in WalSndWakeup(). I've not measured it very\r\n> precisely yet, but the overhead is significant (~35% slowdown), even with the\r\n> default max_wal_senders. If that's increased substantially, it obviously gets\r\n> worse.\r\n\r\nI did some simple tests for this to see the performance impact on\r\nthe streaming replication, just share it here for reference.\r\n\r\n1) sync primary-standby setup, load data on primary and count the time spent on\r\n replication. 
the degradation will be more obvious as the value of max_wal_senders\r\n increases.\r\n\r\nmax_wal_senders before(ms) after(ms) degradation\r\n100 13394.4013 14141.2615 5.58%\r\n200 13280.8507 14597.1173 9.91%\r\n300 13535.0232 16735.7379 23.65%\r\n\r\n2) Similar as 1) but count the time that the standby startup process spent on\r\n replaying WAL(via gprof).\r\n\r\n10 senders\r\n===========\r\nbefore\r\n % cumulative self self total \r\n time seconds seconds calls s/call s/call name \r\n 4.12 0.45 0.11 1 0.11 2.46 PerformWalRecovery\r\n\r\nafter\r\n % cumulative self self total \r\n time seconds seconds calls s/call s/call name \r\n 17.99 0.59 0.59 4027383 0.00 0.00 WalSndWakeup\r\n 8.23 0.86 0.27 1 0.27 3.11 PerformWalRecovery\r\n\r\n100 senders\r\n===========\r\nbefore\r\n % cumulative self self total \r\n time seconds seconds calls s/call s/call name \r\n 5.56 0.36 0.18 1 0.18 2.91 PerformWalRecovery\r\n\r\nafter\r\n % cumulative self self total \r\n time seconds seconds calls s/call s/call name \r\n 64.65 4.39 4.39 4027383 0.00 0.00 WalSndWakeup\r\n 2.95 4.59 0.20 1 0.20 6.62 PerformWalRecovery\r\n\r\nWill test after applying the latest patch in this thread later.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Thu, 11 May 2023 09:42:39 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Wed, May 10, 2023 at 3:23 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> >> My current guess is that mis-using a condition variable is the best bet. I\n> >> think it should work to use ConditionVariablePrepareToSleep() before a\n> >> WalSndWait(), and then ConditionVariableCancelSleep(). I.e. to never use\n> >> ConditionVariableSleep(). The latch set from ConditionVariableBroadcast()\n> >> would still cause the necessary wakeup.\n> >\n> > How about something like the attached? 
Recovery and subscription tests\n> > don't complain with the patch.\n>\n> I launched a full Cirrus CI test with it but it failed on one environment (did not look in details,\n> just sharing this here): https://cirrus-ci.com/task/6570140767092736\n\nYeah, v1 had ConditionVariableInit() such that the CV was getting\ninitialized for every backend as opposed to just once after the WAL\nsender shmem was created.\n\n> Also I have a few comments:\n\nIndeed, v1 was a WIP patch. Please have a look at the attached v2\npatch, which has comments and passing CI runs on all platforms -\nhttps://github.com/BRupireddy/postgres/tree/optimize_walsender_wakeup_logic_v2.\n\nOn Wed, May 10, 2023 at 3:41 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> if (AllowCascadeReplication())\n> - WalSndWakeup(switchedTLI, true);\n> + ConditionVariableBroadcast(&WalSndCtl->cv);\n>\n> After the change, we wakeup physical walsender regardless of switchedTLI flag.\n> Is this intentional ? if so, I think It would be better to update the comments above this.\n\nThat's not the case with the attached v2 patch. Please have a look.\n\nOn Thu, May 11, 2023 at 10:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> We can have two condition variables for\n> logical and physical walsenders, and selectively wake up walsenders\n> sleeping on the condition variables. It should work, it seems like\n> much of a hack, though.\n\nAndres, rightly put it - 'mis-using' CV infrastructure. 
It is simple,\nworks, and makes the WalSndWakeup() easy solving the performance\nregression.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 12 May 2023 17:28:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Friday, May 12, 2023 7:58 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> \r\n> On Wed, May 10, 2023 at 3:23 PM Drouvot, Bertrand\r\n> <bertranddrouvot.pg@gmail.com> wrote:\r\n> >\r\n> > >> My current guess is that mis-using a condition variable is the best\r\n> > >> bet. I think it should work to use\r\n> > >> ConditionVariablePrepareToSleep() before a WalSndWait(), and then\r\n> > >> ConditionVariableCancelSleep(). I.e. to never use\r\n> > >> ConditionVariableSleep(). The latch set from\r\n> ConditionVariableBroadcast() would still cause the necessary wakeup.\r\n> > >\r\n> > > How about something like the attached? Recovery and subscription\r\n> > > tests don't complain with the patch.\r\n> >\r\n> > I launched a full Cirrus CI test with it but it failed on one\r\n> > environment (did not look in details, just sharing this here):\r\n> > https://cirrus-ci.com/task/6570140767092736\r\n> \r\n> Yeah, v1 had ConditionVariableInit() such that the CV was getting initialized for\r\n> every backend as opposed to just once after the WAL sender shmem was\r\n> created.\r\n> \r\n> > Also I have a few comments:\r\n> \r\n> Indeed, v1 was a WIP patch. 
Please have a look at the attached v2 patch, which\r\n> has comments and passing CI runs on all platforms -\r\n> https://github.com/BRupireddy/postgres/tree/optimize_walsender_wakeup_\r\n> logic_v2.\r\n> \r\n> On Wed, May 10, 2023 at 3:41 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > if (AllowCascadeReplication())\r\n> > - WalSndWakeup(switchedTLI, true);\r\n> > + ConditionVariableBroadcast(&WalSndCtl->cv);\r\n> >\r\n> > After the change, we wakeup physical walsender regardless of switchedTLI\r\n> flag.\r\n> > Is this intentional ? if so, I think It would be better to update the comments\r\n> above this.\r\n> \r\n> That's not the case with the attached v2 patch. Please have a look.\r\n\r\nThanks for updating the patch. I did some simple primary->standby replication test for the\r\npatch and can see the degradation doesn't happen in the replication after applying it[1].\r\n\r\nOne nitpick in the comment:\r\n\r\n+\t * walsenders. It makes WalSndWakeup() callers life easy.\r\n\r\ncallers life easy => callers' life easy.\r\n\r\n\r\n[1]\r\nmax_wal_senders = 100\r\nbefore regression(ms) after regression(ms) v2 patch(ms)\r\n13394.4013 14141.2615 13455.2543\r\nCompared with before regression 5.58% 0.45%\r\n\r\nmax_wal_senders = 200\r\nbefore regression(ms) after regression(ms) v2 patch(ms)\r\n13280.8507 14597.1173 13632.0606\r\nCompared with before regression 9.91% 1.64%\r\n\r\nmax_wal_senders = 300\r\nbefore regression(ms) after regression(ms) v2 patch(ms)\r\n13535.0232 16735.7379 13705.7135\r\nCompared with before regression 23.65% 1.26%\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Mon, 15 May 2023 02:19:53 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Fri, May 12, 2023 at 11:58 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Andres, rightly 
put it - 'mis-using' CV infrastructure. It is simple,\n> works, and makes the WalSndWakeup() easy solving the performance\n> regression.\n\nYeah, this seems OK, and better than the complicated alternatives. If\none day we want to implement CVs some other way so that this\nI-know-that-CVs-are-really-made-out-of-latches abstraction leak\nbecomes a problem, and we still need this, well then we can make a\nseparate latch-wait-list thing.\n\n\n", "msg_date": "Mon, 15 May 2023 16:48:51 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn 5/15/23 4:19 AM, Zhijie Hou (Fujitsu) wrote:\n> On Friday, May 12, 2023 7:58 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Wed, May 10, 2023 at 3:23 PM Drouvot, Bertrand\n>>\n>> That's not the case with the attached v2 patch. Please have a look.\n> \n\nThanks for V2! It does look good to me and I like the fact that\nWalSndWakeup() does not need to loop on all the Walsenders slot\nanymore (for both the physical and logical cases).\n\n> Thanks for updating the patch. 
I did some simple primary->standby replication test for the\n> patch and can see the degradation doesn't happen in the replication after applying it[1].\n> \n\nThanks for the performance regression measurement!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 15 May 2023 10:41:12 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Mon, May 15, 2023 at 1:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, May 12, 2023 at 11:58 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Andres, rightly put it - 'mis-using' CV infrastructure. It is simple,\n> > works, and makes the WalSndWakeup() easy solving the performance\n> > regression.\n>\n> Yeah, this seems OK, and better than the complicated alternatives. If\n> one day we want to implement CVs some other way so that this\n> I-know-that-CVs-are-really-made-out-of-latches abstraction leak\n> becomes a problem, and we still need this, well then we can make a\n> separate latch-wait-list thing.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 15 May 2023 21:43:49 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Mon, May 15, 2023 at 6:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, May 15, 2023 at 1:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Fri, May 12, 2023 at 11:58 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > Andres, rightly put it - 'mis-using' CV infrastructure. 
It is simple,\n> > > works, and makes the WalSndWakeup() easy solving the performance\n> > > regression.\n> >\n> > Yeah, this seems OK, and better than the complicated alternatives. If\n> > one day we want to implement CVs some other way so that this\n> > I-know-that-CVs-are-really-made-out-of-latches abstraction leak\n> > becomes a problem, and we still need this, well then we can make a\n> > separate latch-wait-list thing.\n>\n> +1\n\nThanks for acknowledging the approach.\n\nOn Mon, May 15, 2023 at 2:11 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> Thanks for V2! It does look good to me and I like the fact that\n> WalSndWakeup() does not need to loop on all the Walsenders slot\n> anymore (for both the physical and logical cases).\n\nIndeed, it doesn't have to.\n\nOn Mon, May 15, 2023 at 7:50 AM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Thanks for updating the patch. I did some simple primary->standby replication test for the\n> patch and can see the degradation doesn't happen in the replication after applying it[1].\n>\n> One nitpick in the comment:\n>\n> + * walsenders. It makes WalSndWakeup() callers life easy.\n>\n> callers life easy => callers' life easy.\n\nChanged.\n\n> [1]\n> max_wal_senders = 100\n> before regression(ms) after regression(ms) v2 patch(ms)\n> 13394.4013 14141.2615 13455.2543\n> Compared with before regression 5.58% 0.45%\n>\n> max_wal_senders = 200\n> before regression(ms) after regression(ms) v2 patch(ms)\n> 13280.8507 14597.1173 13632.0606\n> Compared with before regression 9.91% 1.64%\n>\n> max_wal_senders = 300\n> before regression(ms) after regression(ms) v2 patch(ms)\n> 13535.0232 16735.7379 13705.7135\n> Compared with before regression 23.65% 1.26%\n\nYes, the numbers with v2 patch look close to where we were before.\nThanks for confirming. 
Just wondering, where is this extra\n0.45%/1.64%/1.26% coming from?\n\nPlease find the attached v3 with the review comment addressed.\n\nDo we need to add an open item for this issue in\nhttps://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items? If yes, can\nanyone in this loop add one?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 15 May 2023 20:09:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn 5/15/23 4:39 PM, Bharath Rupireddy wrote:\n> On Mon, May 15, 2023 at 6:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Mon, May 15, 2023 at 1:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> Do we need to add an open item for this issue in\n> https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items? If yes, can\n> anyone in this loop add one?\n\nI do think we need one for this issue, so I have just added it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 May 2023 08:18:04 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn 2023-05-10 08:39:08 +0200, Drouvot, Bertrand wrote:\n> On 5/9/23 11:00 PM, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-05-09 13:38:24 -0700, Jeff Davis wrote:\n> > > On Tue, 2023-05-09 at 12:02 -0700, Andres Freund wrote:\n> > > > I don't think the approach of not having any sort of \"registry\" of\n> > > > whether\n> > > > anybody is waiting for the replay position to be updated is\n> > > > feasible. 
Iterating over all walsenders slots is just too expensive -\n> > > \n> > > Would it work to use a shared counter for the waiters (or, two\n> > > counters, one for physical and one for logical), and just early exit if\n> > > the count is zero?\n> > \n> > That doesn't really fix the problem - once you have a single walsender\n> > connected, performance is bad again.\n> > \n> \n> Just to clarify, do you mean that if there is only one remaining active walsender that, say,\n> has been located at slot n, then we’d still have to loop from 0 to n in WalSndWakeup()?\n\nI understood Jeff's proposal to just have an early exit if there are no\nwalsenders connected at all. But yes, even if we stopped iterating after\nfinding the number of slots we needed to, having to iterate over empty slots\nwould be an issue.\n\nBut TBH, even if we only did work for connected walsenders, I think this would\nstill be a performance issue. Acquiring O(#connected-walsenders) spinlocks for\nevery record is just too expensive.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 May 2023 12:34:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn 2023-05-11 09:42:39 +0000, Zhijie Hou (Fujitsu) wrote:\n> I did some simple tests for this to see the performance impact on\n> the streaming replication, just share it here for reference.\n> \n> 1) sync primary-standby setup, load data on primary and count the time spent on\n> replication. the degradation will be more obvious as the value of max_wal_senders\n> increases.\n\nFWIW, using syncrep likely under-estimates the overhead substantially, because\nthat includes a lot overhead on the WAL generating side. 
I saw well over 20%\noverhead for the default max_wal_senders=10.\n\nI just created a standby, shut it down, then ran a deterministically-sized\nworkload on the primary, started the standby, and measured how long it took to\ncatch up. I just used the log messages to measure the time.\n\n\n> 2) Similar as 1) but count the time that the standby startup process spent on\n> replaying WAL(via gprof).\n\nI don't think that's the case here, but IME gprof's overhead is so high, that\nit can move bottlenecks quite drastically. The problem is that it adds code to\nevery function enter/exit - for simple functions, that overhead is much higher\nthan the \"original\" cost of the function.\n\ngprof style instrumentation is good for things like code coverage, but for\nperformance evaluation it's normally better to use a sampling profiler like\nperf. That also causes slowdowns, but largely only in places that already take\nup substantial execution time.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 May 2023 12:43:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Wed, 2023-05-17 at 12:34 -0700, Andres Freund wrote:\n> I understood Jeff's proposal to just have an early exit if there are\n> no\n> walsenders connected at all.\n\nMy suggestion was we early exit unless there is at least one *waiting*\nwalsender of the appropriate type. In other words, increment the\ncounter on entry to WalSndWait() and decrement on exit. 
I didn't look\nin detail yet, so perhaps it's not easy/cheap to do that safely.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 17 May 2023 12:46:33 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nThanks for working on the patch!\n\nOn 2023-05-15 20:09:00 +0530, Bharath Rupireddy wrote:\n> > [1]\n> > max_wal_senders = 100\n> > before regression(ms) after regression(ms) v2 patch(ms)\n> > 13394.4013 14141.2615 13455.2543\n> > Compared with before regression 5.58% 0.45%\n> >\n> > max_wal_senders = 200\n> > before regression(ms) after regression(ms) v2 patch(ms)\n> > 13280.8507 14597.1173 13632.0606\n> > Compared with before regression 9.91% 1.64%\n> >\n> > max_wal_senders = 300\n> > before regression(ms) after regression(ms) v2 patch(ms)\n> > 13535.0232 16735.7379 13705.7135\n> > Compared with before regression 23.65% 1.26%\n> \n> Yes, the numbers with v2 patch look close to where we were before.\n> Thanks for confirming. Just wondering, where does this extra\n> 0.45%/1.64%/1.26% coming from?\n\nWe still do more work for each WAL record than before, so I'd expect something\nsmall. 
I'd say right now the main overhead with the patch comes from the\nspinlock acquisitions in ConditionVariableBroadcast(), which happen even when\nnobody is waiting.\n\nI'll try to come up with a benchmark without the issues I pointed out in\nhttps://postgr.es/m/20230517194331.ficfy5brpfq5lrmz%40awork3.anarazel.de\n\n\n> +\t\tConditionVariableInit(&WalSndCtl->physicalWALSndCV);\n> +\t\tConditionVariableInit(&WalSndCtl->logicalWALSndCV);\n\nIt's not obvious to me that it's worth having two CVs, because it's more\nexpensive to find no waiters in two CVs than to find no waiters in one CV.\n\n\n> +\t/*\n> +\t * We use condition variable (CV) to efficiently wake up walsenders in\n> +\t * WalSndWakeup().\n> +\t *\n> +\t * Every walsender prepares to sleep on a shared memory CV. Note that it\n> +\t * just prepares to sleep on the CV (i.e., adds itself to the CV's\n> +\t * waitlist), but not actually waits on the CV (IOW, it never calls\n> +\t * ConditionVariableSleep()). It still uses WaitEventSetWait() for waiting,\n> +\t * because CV infrastructure doesn't handle FeBe socket events currently.\n> +\t * The processes (startup process, walreceiver etc.) wanting to wake up\n> +\t * walsenders use ConditionVariableBroadcast(), which in turn calls\n> +\t * SetLatch(), helping walsenders come out of WaitEventSetWait().\n> +\t *\n> +\t * This approach is simple and efficient because, one doesn't have to loop\n> +\t * through all the walsenders slots, with a spinlock acquisition and\n> +\t * release for every iteration, just to wake up only the waiting\n> +\t * walsenders. 
It makes WalSndWakeup() callers' life easy.\n> +\t *\n> +\t * XXX: When available, WaitEventSetWait() can be replaced with its CV's\n> +\t * counterpart.\n\nI don't really understand that XXX - the potential bright future would be to\nadd support for CVs into wait event sets, not to replace WES with a CV?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 May 2023 12:53:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn 2023-05-17 12:53:15 -0700, Andres Freund wrote:\n> I'll try to come up with a benchmark without the issues I pointed out in\n> https://postgr.es/m/20230517194331.ficfy5brpfq5lrmz%40awork3.anarazel.de\n\nHere we go:\n\nsetup:\n\ncreate primary\nSELECT pg_create_physical_replication_slot('reserve', true);\ncreate standby using pg_basebackup\n\ncreate WAL:\npsql -c CREATE TABLE testtable_logged(other_data int default 1);' && \\\n c=16; PGOPTIONS='-c synchronous_commit=off' /path-to-pgbench --random-seed=0 -n -c$c -j$c -t1000 -P1 -f <( echo \"INSERT INTO testtable_logged SELECT generate_series(1, 1000)\" ) && \\\n psql -c \"SELECT pg_create_restore_point('end');\"\n\nbenchmark:\nrm -rf /tmp/test && \\\n cp -ar /srv/dev/pgdev-dev-standby /tmp/test && \\\n cp -ar /srv/dev/pgdev-dev/pg_wal/* /tmp/test/pg_wal/ && \\\n sync && \\\n /usr/bin/time -f '%es' /path-to-postgres -D /tmp/test -c recovery_target_action=shutdown -c recovery_target_name=end -c shared_buffers=1GB -c fsync=off\n\nThat way I can measure how long it takes to replay exactly the same WAL, and\nalso take profiles of exactly the same work, without influencing time results.\n\nI copy the WAL files to the primary to ensure that walreceiver (standby) /\nwalsender (primary) performance doesn't make the result variability higher.\n\n\n max_walsenders=10 max_walsenders=100\ne101dfac3a5 reverted 7.01s 7.02s\n093e5c57d50 / HEAD 8.25s 
19.91s\nbharat-v3 7.14s\t\t 7.13s\n\nSo indeed, bharat-v3 largely fixes the issue.\n\nThe regression of v3 compared to e101dfac3a5 reverted seems pretty constant at\n~0.982x, independent of the concrete max_walsenders value. Which makes sense,\nthe work is constant.\n\n\nTo make it more extreme, I also tested a workload that is basically free to replay:\n\nc=16; /srv/dev/build/m-opt/src/bin/pgbench/pgbench --random-seed=0 -n -c$c -j$c -t5000 -P1 -f <( echo \"SELECT pg_logical_emit_message(false, 'c', 'a') FROM generate_series(1, 1000)\" ) && psql -c \"SELECT pg_create_restore_point('end');\"\n\n\n max_walsenders=10 max_walsenders=100\ne101dfac3a5 reverted 1.70s 1.70s\n093e5c57d50 / HEAD 3.00s 14.56s\nbharat-v3 1.88s 1.88s\n\nIn this extreme workload we still regress by ~0.904x.\n\nI'm not sure how much it's worth worrying about that - this is a quite\nunrealistic testcase.\n\nFWIW, if I just make WalSndWakeup() do nothing, I still see a very small, but\nreproducible, overhead: 1.72s - that's just the cost of the additional\nexternal function call.\n\nIf I add a no-waiters fastpath using proclist_is_empty() to\nConditionVariableBroadcast(), I get 1.77s. 
So the majority of the remaining\nslowdown indeed comes from the spinlock acquisition in\nConditionVariableBroadcast().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 May 2023 13:55:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Thu, May 18, 2023 at 1:23 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-05-15 20:09:00 +0530, Bharath Rupireddy wrote:\n> > > [1]\n> > > max_wal_senders = 100\n> > > before regression(ms) after regression(ms) v2 patch(ms)\n> > > 13394.4013 14141.2615 13455.2543\n> > > Compared with before regression 5.58% 0.45%\n> > >\n> > > max_wal_senders = 200\n> > > before regression(ms) after regression(ms) v2 patch(ms)\n> > > 13280.8507 14597.1173 13632.0606\n> > > Compared with before regression 9.91% 1.64%\n> > >\n> > > max_wal_senders = 300\n> > > before regression(ms) after regression(ms) v2 patch(ms)\n> > > 13535.0232 16735.7379 13705.7135\n> > > Compared with before regression 23.65% 1.26%\n> >\n> > Yes, the numbers with v2 patch look close to where we were before.\n> > Thanks for confirming. Just wondering, where does this extra\n> > 0.45%/1.64%/1.26% coming from?\n>\n> We still do more work for each WAL record than before, so I'd expect something\n> small. I'd say right now the main overhead with the patch comes from the\n> spinlock acquisitions in ConditionVariableBroadcast(), which happen even when\n> nobody is waiting.\n\n\n> > + ConditionVariableInit(&WalSndCtl->physicalWALSndCV);\n> > + ConditionVariableInit(&WalSndCtl->logicalWALSndCV);\n>\n> It's not obvious to me that it's worth having two CVs, because it's more\n> expensive to find no waiters in two CVs than to find no waiters in one CV.\n\nI disagree. 
In the tight per-WAL record recovery loop, WalSndWakeup\nwakes up logical walsenders for every WAL record, but it wakes up\nphysical walsenders only if the applied WAL record causes a TLI\nswitch. Therefore, the extra cost of spinlock acquire-release for per\nWAL record applies only for logical walsenders. On the other hand, if\nwe were to use a single CV, we would be unnecessarily waking up (if at\nall they are sleeping) physical walsenders for every WAL record -\nwhich is costly IMO.\n\nI still think separate CVs are good for selective wake ups given there\ncan be not so many TLI switch WAL records.\n\n> > + *\n> > + * XXX: When available, WaitEventSetWait() can be replaced with its CV's\n> > + * counterpart.\n>\n> I don't really understand that XXX - the potential bright future would be to\n> add support for CVs into wait event sets, not to replace WES with a CV?\n\nYes, I meant it and modified that part to:\n\n * XXX: A desirable future improvement would be to add support for CVs into\n * WaitEventSetWait().\n\n> FWIW, if I just make WalSndWakeup() do nothing, I still see a very small, but\n> reproducible, overhead: 1.72s - that's just the cost of the additional\n> external function call.\n\nHm, this is unavoidable.\n\n> If I add a no-waiters fastpath using proclist_is_empty() to\n> ConditionVariableBroadcast(), I get 1.77s. So the majority of the remaining\n> slowdown indeed comes from the spinlock acquisition in\n> ConditionVariableBroadcast().\n\nAcquiring spinlock for every replayed WAL record seems not great. In\ngeneral, this problem exists for CV infrastructure as a whole if\nConditionVariableBroadcast()/CVB() is called in a loop/hot path. 
I\nthink this can be proven easily by doing something like - select\npg_replication_origin_session_setup()/pg_wal_replay_resume()/pg_create_physical_replication_slot()\nfrom generate_series(1, 1000000...);, all of these functions end up in\nCVB().\n\nDo you think adding a fastpath exit when no waiters to CVB() is worth\nimplementing? Something like - an atomic state variable to each CV,\nsimilar to LWLock's state variable, setting it when adding waiters and\nresetting it when removing waiters, CVB() atomically reading the state\nfor fastpath (of course, we need memory barriers here). This might\ncomplicate things as each CV structure gets an extra state variable (4\nbytes), memory barriers, and atomic writes and reads.\n\nOr given the current uses of CVB() with no callers except recovery\ncalling it in a hot path with patch, maybe we can add an atomic waiter\ncount to WalSndCtl, incrementing it atomically in WalSndWait() before\nthe wait, decrementing it after the wait, and a fastpath in\nWalSndWakeup() reading it atomically to avoid CVB() calls. 
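To make the shape of that idea concrete, here is a minimal, self-contained sketch using plain C11 atomics and pthreads - deliberately not PostgreSQL's ConditionVariable or WalSndCtl code, with all names invented for illustration. The broadcast side performs a single atomic load and returns immediately when nobody has registered as a waiter:

```c
/* Hypothetical sketch of a no-waiter fastpath; not PostgreSQL source code. */
#include <pthread.h>
#include <stdatomic.h>

typedef struct WakeupState
{
	pthread_mutex_t lock;
	pthread_cond_t	cv;
	atomic_long	wal_pos;			/* stands in for the replayed-up-to LSN */
	atomic_int	nwaiters;			/* bumped before sleeping, dropped after */
	atomic_long	slow_broadcasts;	/* counts broadcasts that took the slow path */
} WakeupState;

void
wakeup_init(WakeupState *ws)
{
	pthread_mutex_init(&ws->lock, NULL);
	pthread_cond_init(&ws->cv, NULL);
	atomic_init(&ws->wal_pos, 0);
	atomic_init(&ws->nwaiters, 0);
	atomic_init(&ws->slow_broadcasts, 0);
}

/* Replay-side analog of "apply one record, then wake walsenders". */
void
advance_and_broadcast(WakeupState *ws)
{
	atomic_fetch_add(&ws->wal_pos, 1);	/* shared state advances unconditionally */

	if (atomic_load(&ws->nwaiters) == 0)
		return;							/* fastpath: no lock touched at all */

	atomic_fetch_add(&ws->slow_broadcasts, 1);
	pthread_mutex_lock(&ws->lock);
	pthread_cond_broadcast(&ws->cv);
	pthread_mutex_unlock(&ws->lock);
}

/* Waiter-side analog of WalSndWait(): register, re-check, then sleep. */
void
wait_for_advance(WakeupState *ws, long seen_pos)
{
	atomic_fetch_add(&ws->nwaiters, 1);	/* seq_cst: acts as the needed barrier */
	pthread_mutex_lock(&ws->lock);

	/*
	 * Re-check the condition *after* registering: a broadcast racing with
	 * the registration may have taken the fastpath, so sleeping without
	 * this re-check could lose a wakeup.  This is the memory-barrier care
	 * mentioned above.
	 */
	while (atomic_load(&ws->wal_pos) == seen_pos)
		pthread_cond_wait(&ws->cv, &ws->lock);

	pthread_mutex_unlock(&ws->lock);
	atomic_fetch_sub(&ws->nwaiters, 1);
}
```

With no waiter registered, a tight replay loop pays one atomic load per record instead of a lock acquire/release; the cost only returns while someone is actually waiting. The extra per-structure field and the sequentially consistent ordering are exactly the trade-offs being weighed here.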
IIUC, this\nis something Jeff proposed upthread.\n\nThoughts?\n\nPlease find the attached v4 patch addressing the review comment (not\nthe fastpath one).\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 18 May 2023 20:11:11 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "At Thu, 18 May 2023 20:11:11 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> > > > + ConditionVariableInit(&WalSndCtl->physicalWALSndCV);\n> > > > + ConditionVariableInit(&WalSndCtl->logicalWALSndCV);\n> > >\n> > > It's not obvious to me that it's worth having two CVs, because it's more\n> > > expensive to find no waiters in two CVs than to find no waiters in one CV.\n> > \n> > I disagree. In the tight per-WAL record recovery loop, WalSndWakeup\n> > wakes up logical walsenders for every WAL record, but it wakes up\n> > physical walsenders only if the applied WAL record causes a TLI\n> > switch. Therefore, the extra cost of spinlock acquire-release for per\n> > WAL record applies only for logical walsenders. On the other hand, if\n> > we were to use a single CV, we would be unnecessarily waking up (if at\n> > all they are sleeping) physical walsenders for every WAL record -\n> > which is costly IMO.\n\nAs I was reading this, I started thinking that one reason for the\nregression could be the excessive frequency of wakeups during logical\nreplication. In physical replication, we make sure to avoid excessive\nwakeups when the stream is tightly packed. 
I'm just wondering why\nlogical replication doesn't (or can't) do the same thing, IMHO.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n \n\n\n", "msg_date": "Fri, 19 May 2023 12:07:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Friday, May 19, 2023 11:08 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Thu, 18 May 2023 20:11:11 +0530, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > > > + ConditionVariableInit(&WalSndCtl->physicalWALSndCV);\n> > > > + ConditionVariableInit(&WalSndCtl->logicalWALSndCV);\n> > >\n> > > It's not obvious to me that it's worth having two CVs, because it's more\n> > > expensive to find no waiters in two CVs than to find no waiters in one CV.\n> >\n> > I disagree. In the tight per-WAL record recovery loop, WalSndWakeup\n> > wakes up logical walsenders for every WAL record, but it wakes up\n> > physical walsenders only if the applied WAL record causes a TLI\n> > switch. Therefore, the extra cost of spinlock acquire-release for per\n> > WAL record applies only for logical walsenders. On the other hand, if\n> > we were to use a single CV, we would be unnecessarily waking up (if at\n> > all they are sleeping) physical walsenders for every WAL record -\n> > which is costly IMO.\n> \n> As I was reading this, I start thinking that one reason for the\n> regression could be to exccessive frequency of wakeups during logical\n> replication. In physical replication, we make sure to avoid exccessive\n> wakeups when the stream is tightly packed. I'm just wondering why\n> logical replication doesn't (or can't) do the same thing, IMHO.\n\nI thought(from the e101dfa's commit message) physical walsenders can't send\ndata until it's been flushed, so it only gets wakeup at the time of flush or TLI\nswitch. 
For logical walsender, it can start to decode changes after the change\nis applied, and to avoid the case if the walsender is asleep, and there's work\nto be done, it wakeup logical walsender when applying each record.\n\nOr maybe you mean to wakeup(ConditionVariableBroadcast) walsender after\napplying some wal records like[1], But it seems it may delay the wakeup of\nwalsender a bit(the walsender may be asleep before reaching the threshold).\n\n\n\n\n[1] \n--- a/src/backend/access/transam/xlogrecovery.c\n+++ b/src/backend/access/transam/xlogrecovery.c\n@@ -1833,6 +1833,7 @@ ApplyWalRecord(XLogReaderState *xlogreader, XLogRecord *record, TimeLineID *repl\n {\n \tErrorContextCallback errcallback;\n \tbool\t\tswitchedTLI = false;\n+\tstatic int\tnreplay = 0;\n \n \t/* Setup error traceback support for ereport() */\n \terrcallback.callback = rm_redo_error_callback;\n@@ -1957,8 +1958,12 @@ ApplyWalRecord(XLogReaderState *xlogreader, XLogRecord *record, TimeLineID *repl\n \t * be created otherwise)\n \t * ------\n \t */\n-\tif (AllowCascadeReplication())\n+\tif (AllowCascadeReplication() &&\n+\t\tnreplay++ == 100)\n+\t{\n \t\tWalSndWakeup(switchedTLI, true);\n+\t\tnreplay = 0;\n+\t}\n\nBest Regards,\nHou zj\n\n\n", "msg_date": "Fri, 19 May 2023 07:36:21 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn 2023-05-19 12:07:56 +0900, Kyotaro Horiguchi wrote:\n> At Thu, 18 May 2023 20:11:11 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> > > > + ConditionVariableInit(&WalSndCtl->physicalWALSndCV);\n> > > > + ConditionVariableInit(&WalSndCtl->logicalWALSndCV);\n> > >\n> > > It's not obvious to me that it's worth having two CVs, because it's more\n> > > expensive to find no waiters in two CVs than to find no waiters in one CV.\n> > \n> > I disagree. 
In the tight per-WAL record recovery loop, WalSndWakeup\n> > wakes up logical walsenders for every WAL record, but it wakes up\n> > physical walsenders only if the applied WAL record causes a TLI\n> > switch. Therefore, the extra cost of spinlock acquire-release for per\n> > WAL record applies only for logical walsenders. On the other hand, if\n> > we were to use a single CV, we would be unnecessarily waking up (if at\n> > all they are sleeping) physical walsenders for every WAL record -\n> > which is costly IMO.\n> \n> As I was reading this, I start thinking that one reason for the\n> regression could be to exccessive frequency of wakeups during logical\n> replication. In physical replication, we make sure to avoid exccessive\n> wakeups when the stream is tightly packed. I'm just wondering why\n> logical replication doesn't (or can't) do the same thing, IMHO.\n\nIt's possible we could try to reduce the frequency by issuing wakeups only at\nspecific points. The most obvious thing to do would be to wake only when\nwaiting for more WAL or when crossing a page boundary, or such. Unfortunately\nthat could easily lead to deadlocks, because the startup process might be\nblocked waiting for a lock, held by a backend doing logical decoding - which\ncan't progress until the startup process wakes the backend up.\n\nSo I don't think this is a promising avenue in the near term.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 May 2023 09:10:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn 2023-05-18 20:11:11 +0530, Bharath Rupireddy wrote:\n> Please find the attached v4 patch addressing the review comment (not\n> the fastpath one).\n\nI pushed a mildly edited version of this. I didn't like the name of the CVs\nmuch, so I renamed them to wal_flush_cv/wal_replay_cv. 
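As a side note for readers following the two-CV design, its effect on the hot path can be illustrated with a small standalone sketch (plain pthreads with invented Toy* names - this only mirrors the WalSndWakeup(physical, logical) call shape discussed in this thread, it is not the committed code, and it assumes physical senders sleep on the flush CV and logical ones on the replay CV, as the renaming suggests). Per-record replay only touches the logical side; the physical side is hit only on a timeline switch:

```c
/* Toy model of selective walsender wakeup; not PostgreSQL source code. */
#include <pthread.h>
#include <stdbool.h>

typedef struct ToyCV
{
	pthread_mutex_t lock;
	pthread_cond_t	cv;
	long		nbroadcasts;	/* for illustration: how often this CV was hit */
} ToyCV;

void
toycv_init(ToyCV *c)
{
	pthread_mutex_init(&c->lock, NULL);
	pthread_cond_init(&c->cv, NULL);
	c->nbroadcasts = 0;
}

void
toycv_broadcast(ToyCV *c)
{
	pthread_mutex_lock(&c->lock);	/* stands in for the CV's spinlock cost */
	c->nbroadcasts++;
	pthread_cond_broadcast(&c->cv);
	pthread_mutex_unlock(&c->lock);
}

typedef struct ToyWalSndCtl
{
	ToyCV		wal_flush_cv;	/* physical walsenders sleep here */
	ToyCV		wal_replay_cv;	/* logical walsenders sleep here */
} ToyWalSndCtl;

/* Same call shape as WalSndWakeup(bool physical, bool logical). */
void
toy_walsnd_wakeup(ToyWalSndCtl *ctl, bool physical, bool logical)
{
	if (physical)
		toycv_broadcast(&ctl->wal_flush_cv);
	if (logical)
		toycv_broadcast(&ctl->wal_replay_cv);
}

/* Replay-loop analog: every record wakes logical senders, a TLI switch both. */
void
toy_apply_records(ToyWalSndCtl *ctl, int nrecords, int tli_switch_at)
{
	for (int i = 0; i < nrecords; i++)
		toy_walsnd_wakeup(ctl, i == tli_switch_at, true);
}
```

Folding both classes into one CV would drive the flush-side counter just as high as the replay-side one, i.e. physical walsenders would pay the wakeup cost for every replayed record even though, per the discussion above, they only need to hear about timeline switches here.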
I also did some minor\ncomment polishing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 May 2023 09:46:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn Monday, May 22, 2023 12:11 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-05-19 12:07:56 +0900, Kyotaro Horiguchi wrote:\n> > At Thu, 18 May 2023 20:11:11 +0530, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > > > > +\n> ConditionVariableInit(&WalSndCtl->physicalWALSndCV);\n> > > > > + ConditionVariableInit(&WalSndCtl->logicalWALSndCV);\n> > > >\n> > > > It's not obvious to me that it's worth having two CVs, because it's more\n> > > > expensive to find no waiters in two CVs than to find no waiters in one CV.\n> > >\n> > > I disagree. In the tight per-WAL record recovery loop, WalSndWakeup\n> > > wakes up logical walsenders for every WAL record, but it wakes up\n> > > physical walsenders only if the applied WAL record causes a TLI\n> > > switch. Therefore, the extra cost of spinlock acquire-release for per\n> > > WAL record applies only for logical walsenders. On the other hand, if\n> > > we were to use a single CV, we would be unnecessarily waking up (if at\n> > > all they are sleeping) physical walsenders for every WAL record -\n> > > which is costly IMO.\n> >\n> > As I was reading this, I start thinking that one reason for the\n> > regression could be to exccessive frequency of wakeups during logical\n> > replication. In physical replication, we make sure to avoid exccessive\n> > wakeups when the stream is tightly packed. I'm just wondering why\n> > logical replication doesn't (or can't) do the same thing, IMHO.\n> \n> It's possible we could try to reduce the frequency by issuing wakeups only at\n> specific points. 
The most obvious thing to do would be to wake only when\n> waiting for more WAL or when crossing a page boundary, or such.\n> Unfortunately\n> that could easily lead to deadlocks, because the startup process might be\n> blocked waiting for a lock, held by a backend doing logical decoding - which\n> can't progress until the startup process wakes the backend up.\n\nJust out of curiosity about the mentioned deadlock scenario - this is not a comment on the\ncommitted patch.\n\nAbout \"a backend doing logical decoding\", do you mean the case when a user\nstarts a backend and invokes pg_logical_slot_get_changes() to do the logical\ndecoding? If so, it seems the logical decoding in a backend won't be woken up\nby the startup process because the backend won't be registered as a walsender, so\nthe backend won't be found in WalSndWakeup().\n\nOr do you mean the deadlock between the real logical walsender and the startup\nprocess? (I might be missing something.) I think the logical decoding doesn't lock\nthe target user relation when decoding because it normally can get the needed\ninformation from WAL. Besides, the walsender sometimes will lock a system\ntable (e.g. using RelidByRelfilenumber() to get the relid), but it will unlock it\nafter finishing the systable scan.\n\nSo, if possible, would you be able to share some details about the deadlock\ncase you mentioned earlier? It would be helpful, as it can prevent similar problems in\nfuture development.\n\nBest Regards,\nHou zj\n\n\n", "msg_date": "Mon, 22 May 2023 12:15:07 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn 2023-05-22 12:15:07 +0000, Zhijie Hou (Fujitsu) wrote:\n> About \"a backend doing logical decoding\", do you mean the case when a user\n> start a backend and invoke pg_logical_slot_get_changes() to do the logical\n> decoding ? 
If so, it seems the logical decoding in a backend won't be waked up\n> by startup process because the backend won't be registered as a walsender so\n> the backend won't be found in WalSndWakeup().\n\nI meant logical decoding happening inside a walsender instance.\n\n\n> Or do you mean the deadlock between the real logical walsender and startup\n> process ? (I might miss something) I think the logical decoding doesn't lock\n> the target user relation when decoding because it normally can get the needed\n> information from WAL.\n\nIt does lock catalog tables briefly. There's no guarantee that such locks are\nreleased immediately. I forgot the details, but IIRC there's some outfuncs\n(enum?) that intentionally delay releasing locks till transaction commit.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 May 2023 10:52:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "On Tuesday, May 23, 2023 1:53 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-05-22 12:15:07 +0000, Zhijie Hou (Fujitsu) wrote:\n> > About \"a backend doing logical decoding\", do you mean the case when a\n> user\n> > start a backend and invoke pg_logical_slot_get_changes() to do the logical\n> > decoding ? If so, it seems the logical decoding in a backend won't be waked\n> up\n> > by startup process because the backend won't be registered as a walsender\n> so\n> > the backend won't be found in WalSndWakeup().\n> \n> I meant logical decoding happening inside a walsender instance.\n> \n> \n> > Or do you mean the deadlock between the real logical walsender and startup\n> > process ? (I might miss something) I think the logical decoding doesn't lock\n> > the target user relation when decoding because it normally can get the\n> needed\n> > information from WAL.\n> \n> It does lock catalog tables briefly. 
There's no guarantee that such locks are\n> released immediately. I forgot the details, but IIRC there's some outfuncs\n> (enum?) that intentionally delay releasing locks till transaction commit.\n\nThanks for the explanation !\n\nI understand that the startup process can take lock on the catalog(when\nreplaying record) which may conflict with the lock in walsender.\n\nBut in walsender, I think we only start transaction after entering\nReorderBufferProcessTXN(), and the transaction started here will be released\nsoon after processing and outputting the decoded transaction's data(as the\ncomment in ReorderBufferProcessTXN() says:\" all locks acquired in here to be\nreleased, not reassigned to the parent and we do not want any database access\nhave persistent effects.\").\n\nBesides, during the process and output of the decoded transaction, the\nwalsender won't wait for the wakeup of startup process(e.g.\nWalSndWaitForWal()), it only waits if the data is being sent to subscriber. So\nit seems the lock conflict here won't cause the deadlock for now, although it\nmay have a risk if we change this logic later. Sorry if I missed something, and\nthanks again for your patience in explaining.\n\nBest Regards,\nHou zj\n\n\n\n\n", "msg_date": "Wed, 24 May 2023 05:53:51 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: walsender performance regression due to logical decoding on\n standby changes" }, { "msg_contents": "Hi,\n\nOn 2023-05-24 05:53:51 +0000, Zhijie Hou (Fujitsu) wrote:\n> On Tuesday, May 23, 2023 1:53 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-05-22 12:15:07 +0000, Zhijie Hou (Fujitsu) wrote:\n> > > About \"a backend doing logical decoding\", do you mean the case when a\n> > user\n> > > start a backend and invoke pg_logical_slot_get_changes() to do the logical\n> > > decoding ? 
If so, it seems the logical decoding in a backend won't be waked\n> > up\n> > > by startup process because the backend won't be registered as a walsender\n> > so\n> > > the backend won't be found in WalSndWakeup().\n> > \n> > I meant logical decoding happening inside a walsender instance.\n> > \n> > \n> > > Or do you mean the deadlock between the real logical walsender and startup\n> > > process ? (I might miss something) I think the logical decoding doesn't lock\n> > > the target user relation when decoding because it normally can get the\n> > needed\n> > > information from WAL.\n> > \n> > It does lock catalog tables briefly. There's no guarantee that such locks are\n> > released immediately. I forgot the details, but IIRC there's some outfuncs\n> > (enum?) that intentionally delay releasing locks till transaction commit.\n> \n> Thanks for the explanation !\n> \n> I understand that the startup process can take lock on the catalog(when\n> replaying record) which may conflict with the lock in walsender.\n> \n> But in walsender, I think we only start transaction after entering\n> ReorderBufferProcessTXN(), and the transaction started here will be released\n> soon after processing and outputting the decoded transaction's data(as the\n> comment in ReorderBufferProcessTXN() says:\" all locks acquired in here to be\n> released, not reassigned to the parent and we do not want any database access\n> have persistent effects.\").\n\nIt's possible that there's no immediate danger - but I wouldn't want to bet on\nit staying that way. Think about things like streaming out large transactions\netc. 
There also are potential dangers outside of plain things like locks -\nthere could be recovery conflicts on the snapshot or such (although that\nnormally should be prevented via hot_standby_feedback or such).\n\n\nEven if there's no hard deadlock issue - not waking up a\nwalsender-in-logical-decoding that's currently waiting because (replay_count %\nN) != 0 means that the walsender might be delayed for quite a while - there\nmight not be any further records from the primary in the near future.\n\n\nI don't think any approach purely based on record counts has any chance to\nwork well. You could combine it with something else, e.g. with always waking\nup in XLogPageRead() and WaitForWALToBecomeAvailable(). But then you end up\nwith something relatively complicated again.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 May 2023 13:30:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: walsender performance regression due to logical decoding on\n standby changes" } ]
[ { "msg_contents": "Hi,\n\nOne complaint about PHJ is that it can, in rare cases, use a\nsurprising amount of temporary disk space where non-parallel HJ would\nnot. When it decides that it needs to double the number of batches to\ntry to fit each inner batch into memory, and then again and again\ndepending on your level of bad luck, it leaves behind all the earlier\ngenerations of inner batch files to be cleaned up at the end of the\nquery. That's stupid. Here's a patch to unlink them sooner, as a\nsmall improvement.\n\nThe reason I didn't do this earlier is that sharedtuplestore.c\ncontinues the pre-existing tradition where each parallel process\ncounts what it writes against its own temp_file_limit. At the time I\nthought I'd need to have one process unlink all the files, but if a\nprocess were to unlink files that it didn't create, that accounting\nsystem would break. Without some new kind of shared temp_file_limit\nmechanism that doesn't currently exist, per-process counters could go\nnegative, creating free money. In the attached patch, I realised\nsomething that I'd missed before: there is a safe point for each\nbackend to unlink just the files that it created, and there is no way\nfor a process that created files not to reach that point.\n\nHere's an example query that tries 8, 16 and then 32 batches on my\nmachine, because reltuples is clobbered with a bogus value.\nPathological cases can try many more rounds than that, but 3 is enough\nto demonstrate. Using truss and shell tricks I spat out the list of\ncreate and unlink operations from master and the attached draft/POC\npatch. 
See below.\n\n set work_mem = '1MB';\n CREATE TABLE t (i int);\n INSERT INTO t SELECT generate_series(1, 1000000);\n ANALYZE t;\n UPDATE pg_class SET reltuples = reltuples / 4 WHERE relname = 't';\n EXPLAIN ANALYZE SELECT COUNT(*) FROM t t1 JOIN t t2 USING (i);\n\nThis code is also exercised by the existing \"bad\" case in join_hash.sql.\n\nThis is the second of two experimental patches investigating increased\nresource usage in PHJ compared to HJ based on user complaints, this\none being per-batch temp files, and the other[1] being per-batch\nbuffer memory.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGKCnU9NjFfzO219V-YeyWr8mZe4JRrf%3Dx_uv6qsePBcOw%40mail.gmail.com\n\n=====\n\nmaster:\n\n99861: create i3of8.p0.0\n99861: create i6of8.p0.0\n99861: create i4of8.p0.0\n99861: create i5of8.p0.0\n99861: create i7of8.p0.0\n99861: create i2of8.p0.0\n99861: create i1of8.p0.0\n99863: create i2of8.p1.0\n99862: create i7of8.p2.0\n99863: create i5of8.p1.0\n99862: create i1of8.p2.0\n99863: create i6of8.p1.0\n99862: create i5of8.p2.0\n99863: create i1of8.p1.0\n99862: create i3of8.p2.0\n99863: create i7of8.p1.0\n99862: create i4of8.p2.0\n99863: create i3of8.p1.0\n99862: create i2of8.p2.0\n99863: create i4of8.p1.0\n99862: create i6of8.p2.0\n99863: create i8of16.p1.0\n99861: create i8of16.p0.0\n99862: create i8of16.p2.0\n99863: create i9of16.p1.0\n99862: create i1of16.p2.0\n99863: create i1of16.p1.0\n99862: create i9of16.p2.0\n99861: create i9of16.p0.0\n99861: create i1of16.p0.0\n99862: create i10of16.p2.0\n99863: create i2of16.p1.0\n99862: create i2of16.p2.0\n99863: create i10of16.p1.0\n99861: create i2of16.p0.0\n99861: create i10of16.p0.0\n99862: create i11of16.p2.0\n99863: create i3of16.p1.0\n99862: create i3of16.p2.0\n99861: create i3of16.p0.0\n99863: create i11of16.p1.0\n99861: create i11of16.p0.0\n99863: create i4of16.p1.0\n99863: create i12of16.p1.0\n99862: create i12of16.p2.0\n99862: create i4of16.p2.0\n99861: create i12of16.p0.0\n99861: create 
i4of16.p0.0\n99863: create i13of16.p1.0\n99863: create i5of16.p1.0\n99862: create i5of16.p2.0\n99862: create i13of16.p2.0\n99861: create i5of16.p0.0\n99861: create i13of16.p0.0\n99862: create i6of16.p2.0\n99863: create i6of16.p1.0\n99861: create i14of16.p0.0\n99862: create i14of16.p2.0\n99863: create i14of16.p1.0\n99861: create i6of16.p0.0\n99863: create i15of16.p1.0\n99861: create i7of16.p0.0\n99863: create i7of16.p1.0\n99862: create i15of16.p2.0\n99861: create i15of16.p0.0\n99862: create i7of16.p2.0\n99863: create i16of32.p1.0\n99862: create i16of32.p2.0\n99861: create i16of32.p0.0\n99863: create i17of32.p1.0\n99863: create i1of32.p1.0\n99861: create i1of32.p0.0\n99862: create i17of32.p2.0\n99862: create i1of32.p2.0\n99861: create i17of32.p0.0\n99863: create i18of32.p1.0\n99863: create i2of32.p1.0\n99862: create i2of32.p2.0\n99862: create i18of32.p2.0\n99861: create i2of32.p0.0\n99861: create i18of32.p0.0\n99862: create i3of32.p2.0\n99862: create i19of32.p2.0\n99861: create i19of32.p0.0\n99861: create i3of32.p0.0\n99863: create i19of32.p1.0\n99863: create i3of32.p1.0\n99863: create i20of32.p1.0\n99863: create i4of32.p1.0\n99861: create i20of32.p0.0\n99861: create i4of32.p0.0\n99862: create i20of32.p2.0\n99862: create i4of32.p2.0\n99861: create i21of32.p0.0\n99863: create i21of32.p1.0\n99861: create i5of32.p0.0\n99863: create i5of32.p1.0\n99862: create i5of32.p2.0\n99862: create i21of32.p2.0\n99863: create i22of32.p1.0\n99863: create i6of32.p1.0\n99861: create i22of32.p0.0\n99862: create i22of32.p2.0\n99861: create i6of32.p0.0\n99862: create i6of32.p2.0\n99863: create i7of32.p1.0\n99863: create i23of32.p1.0\n99861: create i7of32.p0.0\n99862: create i23of32.p2.0\n99862: create i7of32.p2.0\n99861: create i23of32.p0.0\n99862: create i24of32.p2.0\n99862: create i8of32.p2.0\n99863: create i24of32.p1.0\n99863: create i8of32.p1.0\n99861: create i24of32.p0.0\n99861: create i8of32.p0.0\n99863: create i9of32.p1.0\n99863: create i25of32.p1.0\n99862: create 
i9of32.p2.0\n99862: create i25of32.p2.0\n99861: create i9of32.p0.0\n99861: create i25of32.p0.0\n99861: create i26of32.p0.0\n99862: create i26of32.p2.0\n99863: create i10of32.p1.0\n99862: create i10of32.p2.0\n99861: create i10of32.p0.0\n99863: create i26of32.p1.0\n99862: create i11of32.p2.0\n99861: create i11of32.p0.0\n99862: create i27of32.p2.0\n99861: create i27of32.p0.0\n99863: create i27of32.p1.0\n99863: create i11of32.p1.0\n99862: create i12of32.p2.0\n99861: create i28of32.p0.0\n99862: create i28of32.p2.0\n99861: create i12of32.p0.0\n99863: create i12of32.p1.0\n99863: create i28of32.p1.0\n99863: create i29of32.p1.0\n99863: create i13of32.p1.0\n99862: create i29of32.p2.0\n99862: create i13of32.p2.0\n99861: create i13of32.p0.0\n99861: create i29of32.p0.0\n99863: create i14of32.p1.0\n99862: create i14of32.p2.0\n99863: create i30of32.p1.0\n99861: create i30of32.p0.0\n99862: create i30of32.p2.0\n99861: create i14of32.p0.0\n99863: create i15of32.p1.0\n99863: create i31of32.p1.0\n99862: create i15of32.p2.0\n99861: create i31of32.p0.0\n99862: create i31of32.p2.0\n99861: create i15of32.p0.0\n99863: create o19of32.p1.0\n99861: create o19of32.p0.0\n99863: create o30of32.p1.0\n99861: create o20of32.p0.0\n99863: create o28of32.p1.0\n99862: create o23of32.p2.0\n99861: create o25of32.p0.0\n99863: create o21of32.p1.0\n99862: create o26of32.p2.0\n99861: create o15of32.p0.0\n99863: create o8of32.p1.0\n99862: create o7of32.p2.0\n99863: create o20of32.p1.0\n99861: create o2of32.p0.0\n99863: create o14of32.p1.0\n99862: create o3of32.p2.0\n99863: create o12of32.p1.0\n99861: create o8of32.p0.0\n99862: create o18of32.p2.0\n99863: create o7of32.p1.0\n99862: create o24of32.p2.0\n99861: create o24of32.p0.0\n99863: create o24of32.p1.0\n99862: create o10of32.p2.0\n99863: create o11of32.p1.0\n99861: create o23of32.p0.0\n99862: create o22of32.p2.0\n99863: create o31of32.p1.0\n99862: create o12of32.p2.0\n99861: create o10of32.p0.0\n99863: create o2of32.p1.0\n99862: create o30of32.p2.0\n99861: 
create o6of32.p0.0\n99863: create o22of32.p1.0\n99862: create o14of32.p2.0\n99861: create o17of32.p0.0\n99863: create o0of32.p1.0\n99862: create o29of32.p2.0\n99861: create o4of32.p0.0\n99863: create o6of32.p1.0\n99862: create o8of32.p2.0\n99861: create o11of32.p0.0\n99863: create o18of32.p1.0\n99862: create o15of32.p2.0\n99861: create o1of32.p0.0\n99863: create o5of32.p1.0\n99862: create o2of32.p2.0\n99861: create o12of32.p0.0\n99863: create o4of32.p1.0\n99862: create o28of32.p2.0\n99861: create o13of32.p0.0\n99863: create o9of32.p1.0\n99862: create o31of32.p2.0\n99861: create o21of32.p0.0\n99863: create o27of32.p1.0\n99862: create o0of32.p2.0\n99861: create o16of32.p0.0\n99863: create o26of32.p1.0\n99862: create o13of32.p2.0\n99861: create o29of32.p0.0\n99863: create o3of32.p1.0\n99862: create o5of32.p2.0\n99861: create o3of32.p0.0\n99863: create o25of32.p1.0\n99862: create o21of32.p2.0\n99861: create o5of32.p0.0\n99863: create o1of32.p1.0\n99862: create o20of32.p2.0\n99861: create o30of32.p0.0\n99863: create o17of32.p1.0\n99862: create o1of32.p2.0\n99861: create o14of32.p0.0\n99863: create o23of32.p1.0\n99862: create o16of32.p2.0\n99861: create o0of32.p0.0\n99863: create o13of32.p1.0\n99862: create o19of32.p2.0\n99861: create o28of32.p0.0\n99863: create o16of32.p1.0\n99862: create o6of32.p2.0\n99861: create o26of32.p0.0\n99863: create o15of32.p1.0\n99862: create o9of32.p2.0\n99861: create o18of32.p0.0\n99863: create o29of32.p1.0\n99862: create o11of32.p2.0\n99861: create o31of32.p0.0\n99862: create o4of32.p2.0\n99863: create o10of32.p1.0\n99861: create o27of32.p0.0\n99862: create o27of32.p2.0\n99861: create o7of32.p0.0\n99862: create o17of32.p2.0\n99861: create o22of32.p0.0\n99862: create o25of32.p2.0\n99861: create o9of32.p0.0\n99861: unlink i20of32.p0.0\n99861: unlink o24of32.p0.0\n99861: unlink i29of32.p2.0\n99861: unlink i7of8.p1.0\n99861: unlink i26of32.p2.0\n99861: unlink i3of8.p1.0\n99861: unlink o7of32.p1.0\n99861: unlink o22of32.p2.0\n99861: unlink 
o8of32.p1.0\n99861: unlink i30of32.p0.0\n99861: unlink i1of32.p2.0\n99861: unlink i7of32.p0.0\n99861: unlink i18of32.p1.0\n99861: unlink i8of32.p0.0\n99861: unlink i17of32.p1.0\n99861: unlink i6of16.p0.0\n99861: unlink o13of32.p1.0\n99861: unlink i9of16.p0.0\n99861: unlink o0of32.p0.0\n99861: unlink o6of32.p2.0\n99861: unlink o9of32.p2.0\n99861: unlink o23of32.p1.0\n99861: unlink i28of32.p1.0\n99861: unlink i27of32.p1.0\n99861: unlink i1of16.p1.0\n99861: unlink i11of16.p0.0\n99861: unlink o14of32.p0.0\n99861: unlink i10of32.p0.0\n99861: unlink o12of32.p2.0\n99861: unlink i19of32.p2.0\n99861: unlink i16of32.p2.0\n99861: unlink i6of8.p2.0\n99861: unlink i9of32.p2.0\n99861: unlink i6of32.p2.0\n99861: unlink i8of16.p2.0\n99861: unlink i2of8.p2.0\n99861: unlink i7of16.p2.0\n99861: unlink i10of32.p1.0\n99861: unlink i31of32.p2.0\n99861: unlink i11of16.p1.0\n99861: unlink o14of32.p1.0\n99861: unlink i1of16.p0.0\n99861: unlink i27of32.p0.0\n99861: unlink i28of32.p0.0\n99861: unlink o23of32.p0.0\n99861: unlink i21of32.p2.0\n99861: unlink o25of32.p2.0\n99861: unlink o0of32.p1.0\n99861: unlink o13of32.p0.0\n99861: unlink i9of16.p1.0\n99861: unlink i6of16.p1.0\n99861: unlink i8of32.p1.0\n99861: unlink i17of32.p0.0\n99861: unlink i7of32.p1.0\n99861: unlink i18of32.p0.0\n99861: unlink o15of32.p2.0\n99861: unlink i10of16.p2.0\n99861: unlink i11of32.p2.0\n99861: unlink i30of32.p1.0\n99861: unlink o8of32.p0.0\n99861: unlink i3of8.p0.0\n99861: unlink o7of32.p0.0\n99861: unlink i7of8.p0.0\n99861: unlink o24of32.p1.0\n99861: unlink o1of32.p2.0\n99861: unlink i20of32.p1.0\n99861: unlink o1of32.p0.0\n99861: unlink i7of8.p2.0\n99861: unlink i26of32.p1.0\n99861: unlink i29of32.p1.0\n99861: unlink o22of32.p1.0\n99861: unlink o8of32.p2.0\n99861: unlink i3of8.p2.0\n99861: unlink o7of32.p2.0\n99861: unlink i11of32.p0.0\n99861: unlink i1of32.p1.0\n99861: unlink o15of32.p0.0\n99861: unlink i10of16.p0.0\n99861: unlink i17of32.p2.0\n99861: unlink i18of32.p2.0\n99861: unlink o13of32.p2.0\n99861: 
unlink o25of32.p0.0\n99861: unlink i21of32.p0.0\n99861: unlink o9of32.p1.0\n99861: unlink o23of32.p2.0\n99861: unlink o6of32.p1.0\n99861: unlink i27of32.p2.0\n99861: unlink i28of32.p2.0\n99861: unlink i1of16.p2.0\n99861: unlink i31of32.p0.0\n99861: unlink i8of16.p0.0\n99861: unlink o12of32.p1.0\n99861: unlink i2of8.p0.0\n99861: unlink i7of16.p0.0\n99861: unlink i6of8.p0.0\n99861: unlink i16of32.p1.0\n99861: unlink i9of32.p0.0\n99861: unlink i19of32.p1.0\n99861: unlink i6of32.p0.0\n99861: unlink i19of32.p0.0\n99861: unlink i6of32.p1.0\n99861: unlink i6of8.p1.0\n99861: unlink i16of32.p0.0\n99861: unlink i9of32.p1.0\n99861: unlink i7of16.p1.0\n99861: unlink i2of8.p1.0\n99861: unlink i8of16.p1.0\n99861: unlink o12of32.p0.0\n99861: unlink i31of32.p1.0\n99861: unlink i10of32.p2.0\n99861: unlink i11of16.p2.0\n99861: unlink o14of32.p2.0\n99861: unlink o6of32.p0.0\n99861: unlink o9of32.p0.0\n99861: unlink i21of32.p1.0\n99861: unlink o0of32.p2.0\n99861: unlink o25of32.p1.0\n99861: unlink i6of16.p2.0\n99861: unlink i9of16.p2.0\n99861: unlink i7of32.p2.0\n99861: unlink i8of32.p2.0\n99861: unlink o15of32.p1.0\n99861: unlink i10of16.p1.0\n99861: unlink i30of32.p2.0\n99861: unlink i1of32.p0.0\n99861: unlink i11of32.p1.0\n99861: unlink o22of32.p0.0\n99861: unlink i29of32.p0.0\n99861: unlink i26of32.p0.0\n99861: unlink o1of32.p1.0\n99861: unlink o24of32.p2.0\n99861: unlink i20of32.p2.0\n99861: unlink i3of16.p0.0\n99861: unlink o19of32.p1.0\n99861: unlink o16of32.p1.0\n99861: unlink i13of16.p1.0\n99861: unlink i2of32.p0.0\n99861: unlink i12of32.p1.0\n99861: unlink i5of16.p2.0\n99861: unlink o31of32.p0.0\n99861: unlink i4of32.p2.0\n99861: unlink o2of32.p1.0\n99861: unlink o28of32.p2.0\n99861: unlink o27of32.p2.0\n99861: unlink i23of32.p2.0\n99861: unlink i5of8.p2.0\n99861: unlink o21of32.p0.0\n99861: unlink i25of32.p0.0\n99861: unlink i1of8.p2.0\n99861: unlink i13of32.p2.0\n99861: unlink o18of32.p2.0\n99861: unlink i12of16.p2.0\n99861: unlink o17of32.p2.0\n99861: unlink 
i5of32.p1.0\n99861: unlink i15of32.p0.0\n99861: unlink i4of16.p1.0\n99861: unlink i4of8.p0.0\n99861: unlink i14of16.p0.0\n99861: unlink o11of32.p0.0\n99861: unlink i22of32.p1.0\n99861: unlink o29of32.p1.0\n99861: unlink o3of32.p2.0\n99861: unlink o26of32.p1.0\n99861: unlink o5of32.p0.0\n99861: unlink o20of32.p2.0\n99861: unlink o5of32.p1.0\n99861: unlink i24of32.p2.0\n99861: unlink o26of32.p0.0\n99861: unlink o29of32.p0.0\n99861: unlink i22of32.p0.0\n99861: unlink i14of16.p1.0\n99861: unlink o11of32.p1.0\n99861: unlink o30of32.p2.0\n99861: unlink i4of16.p0.0\n99861: unlink i4of8.p1.0\n99861: unlink i15of32.p1.0\n99861: unlink i5of32.p0.0\n99861: unlink i2of16.p2.0\n99861: unlink i3of32.p2.0\n99861: unlink i25of32.p1.0\n99861: unlink o21of32.p1.0\n99861: unlink o4of32.p2.0\n99861: unlink o2of32.p0.0\n99861: unlink i14of32.p2.0\n99861: unlink o10of32.p2.0\n99861: unlink i15of16.p2.0\n99861: unlink o31of32.p1.0\n99861: unlink i12of32.p0.0\n99861: unlink i2of32.p1.0\n99861: unlink o16of32.p0.0\n99861: unlink i13of16.p0.0\n99861: unlink i3of16.p1.0\n99861: unlink o19of32.p0.0\n99861: unlink o16of32.p2.0\n99861: unlink i13of16.p2.0\n99861: unlink o19of32.p2.0\n99861: unlink i12of32.p2.0\n99861: unlink o10of32.p0.0\n99861: unlink i15of16.p0.0\n99861: unlink i5of16.p1.0\n99861: unlink i14of32.p0.0\n99861: unlink i4of32.p1.0\n99861: unlink o27of32.p1.0\n99861: unlink o2of32.p2.0\n99861: unlink o28of32.p1.0\n99861: unlink i23of32.p1.0\n99861: unlink o4of32.p0.0\n99861: unlink i5of8.p1.0\n99861: unlink i1of8.p1.0\n99861: unlink i13of32.p1.0\n99861: unlink i3of32.p0.0\n99861: unlink i12of16.p1.0\n99861: unlink o17of32.p1.0\n99861: unlink o18of32.p1.0\n99861: unlink i2of16.p0.0\n99861: unlink i5of32.p2.0\n99861: unlink o30of32.p0.0\n99861: unlink i4of16.p2.0\n99861: unlink i22of32.p2.0\n99861: unlink o26of32.p2.0\n99861: unlink o29of32.p2.0\n99861: unlink o3of32.p1.0\n99861: unlink i24of32.p0.0\n99861: unlink o20of32.p0.0\n99861: unlink o5of32.p2.0\n99861: unlink 
o20of32.p1.0\n99861: unlink i24of32.p1.0\n99861: unlink o3of32.p0.0\n99861: unlink o30of32.p1.0\n99861: unlink i4of8.p2.0\n99861: unlink i14of16.p2.0\n99861: unlink o11of32.p2.0\n99861: unlink i15of32.p2.0\n99861: unlink o18of32.p0.0\n99861: unlink i2of16.p1.0\n99861: unlink i12of16.p0.0\n99861: unlink o17of32.p0.0\n99861: unlink i3of32.p1.0\n99861: unlink i13of32.p0.0\n99861: unlink i25of32.p2.0\n99861: unlink i1of8.p0.0\n99861: unlink o4of32.p1.0\n99861: unlink i5of8.p0.0\n99861: unlink o21of32.p2.0\n99861: unlink i23of32.p0.0\n99861: unlink o28of32.p0.0\n99861: unlink o27of32.p0.0\n99861: unlink i4of32.p0.0\n99861: unlink i14of32.p1.0\n99861: unlink i5of16.p0.0\n99861: unlink o31of32.p2.0\n99861: unlink o10of32.p1.0\n99861: unlink i15of16.p1.0\n99861: unlink i2of32.p2.0\n99861: unlink i3of16.p2.0\n\nPatched:\n\n93662: create i3of8.p0.0\n93662: create i6of8.p0.0\n93662: create i4of8.p0.0\n93662: create i5of8.p0.0\n93662: create i7of8.p0.0\n93662: create i2of8.p0.0\n93662: create i1of8.p0.0\n93664: create i4of8.p1.0\n93663: create i2of8.p2.0\n93664: create i6of8.p1.0\n93663: create i7of8.p2.0\n93664: create i1of8.p1.0\n93663: create i3of8.p2.0\n93664: create i2of8.p1.0\n93663: create i4of8.p2.0\n93664: create i7of8.p1.0\n93664: create i5of8.p1.0\n93663: create i5of8.p2.0\n93664: create i3of8.p1.0\n93663: create i6of8.p2.0\n93663: create i1of8.p2.0\n93664: create i8of16.p1.0\n93662: create i8of16.p0.0\n93663: create i8of16.p2.0\n93662: create i9of16.p0.0\n93664: create i1of16.p1.0\n93663: create i1of16.p2.0\n93664: create i9of16.p1.0\n93662: create i1of16.p0.0\n93663: create i9of16.p2.0\n93663: create i10of16.p2.0\n93664: create i10of16.p1.0\n93663: create i2of16.p2.0\n93664: create i2of16.p1.0\n93662: create i2of16.p0.0\n93662: create i10of16.p0.0\n93663: create i11of16.p2.0\n93663: create i3of16.p2.0\n93664: create i3of16.p1.0\n93664: create i11of16.p1.0\n93662: create i3of16.p0.0\n93662: create i11of16.p0.0\n93662: create i12of16.p0.0\n93664: create 
i12of16.p1.0\n93663: create i12of16.p2.0\n93664: create i4of16.p1.0\n93662: create i4of16.p0.0\n93663: create i4of16.p2.0\n93664: create i5of16.p1.0\n93663: create i5of16.p2.0\n93662: create i5of16.p0.0\n93664: create i13of16.p1.0\n93662: create i13of16.p0.0\n93663: create i13of16.p2.0\n93664: create i6of16.p1.0\n93664: create i14of16.p1.0\n93663: create i6of16.p2.0\n93663: create i14of16.p2.0\n93662: create i14of16.p0.0\n93662: create i6of16.p0.0\n93662: create i7of16.p0.0\n93663: create i15of16.p2.0\n93662: create i15of16.p0.0\n93663: create i7of16.p2.0\n93664: create i15of16.p1.0\n93664: create i7of16.p1.0\n93664: unlink i1of8.p1.0\n93663: unlink i1of8.p2.0\n93662: unlink i1of8.p0.0\n93664: unlink i2of8.p1.0\n93663: unlink i2of8.p2.0\n93662: unlink i2of8.p0.0\n93664: unlink i3of8.p1.0\n93663: unlink i3of8.p2.0\n93664: unlink i4of8.p1.0\n93662: unlink i3of8.p0.0\n93664: unlink i5of8.p1.0\n93663: unlink i4of8.p2.0\n93662: unlink i4of8.p0.0\n93664: unlink i6of8.p1.0\n93663: unlink i5of8.p2.0\n93664: unlink i7of8.p1.0\n93662: unlink i5of8.p0.0\n93663: unlink i6of8.p2.0\n93662: unlink i6of8.p0.0\n93663: unlink i7of8.p2.0\n93662: unlink i7of8.p0.0\n93664: create i16of32.p1.0\n93663: create i16of32.p2.0\n93662: create i16of32.p0.0\n93663: create i1of32.p2.0\n93664: create i17of32.p1.0\n93663: create i17of32.p2.0\n93664: create i1of32.p1.0\n93662: create i1of32.p0.0\n93662: create i17of32.p0.0\n93663: create i18of32.p2.0\n93663: create i2of32.p2.0\n93664: create i18of32.p1.0\n93664: create i2of32.p1.0\n93662: create i2of32.p0.0\n93662: create i18of32.p0.0\n93663: create i3of32.p2.0\n93663: create i19of32.p2.0\n93662: create i19of32.p0.0\n93664: create i19of32.p1.0\n93662: create i3of32.p0.0\n93664: create i3of32.p1.0\n93663: create i4of32.p2.0\n93663: create i20of32.p2.0\n93664: create i4of32.p1.0\n93662: create i20of32.p0.0\n93664: create i20of32.p1.0\n93662: create i4of32.p0.0\n93664: create i5of32.p1.0\n93664: create i21of32.p1.0\n93662: create i21of32.p0.0\n93663: 
create i21of32.p2.0\n93663: create i5of32.p2.0\n93662: create i5of32.p0.0\n93664: create i22of32.p1.0\n93663: create i6of32.p2.0\n93664: create i6of32.p1.0\n93663: create i22of32.p2.0\n93662: create i22of32.p0.0\n93662: create i6of32.p0.0\n93664: create i7of32.p1.0\n93664: create i23of32.p1.0\n93663: create i7of32.p2.0\n93662: create i7of32.p0.0\n93663: create i23of32.p2.0\n93662: create i23of32.p0.0\n93664: create i24of32.p1.0\n93662: create i24of32.p0.0\n93664: create i8of32.p1.0\n93663: create i8of32.p2.0\n93662: create i8of32.p0.0\n93663: create i24of32.p2.0\n93663: create i9of32.p2.0\n93664: create i25of32.p1.0\n93663: create i25of32.p2.0\n93662: create i9of32.p0.0\n93664: create i9of32.p1.0\n93662: create i25of32.p0.0\n93663: create i26of32.p2.0\n93663: create i10of32.p2.0\n93664: create i26of32.p1.0\n93664: create i10of32.p1.0\n93662: create i26of32.p0.0\n93662: create i10of32.p0.0\n93662: create i11of32.p0.0\n93664: create i11of32.p1.0\n93662: create i27of32.p0.0\n93663: create i27of32.p2.0\n93664: create i27of32.p1.0\n93663: create i11of32.p2.0\n93663: create i28of32.p2.0\n93664: create i28of32.p1.0\n93663: create i12of32.p2.0\n93664: create i12of32.p1.0\n93662: create i28of32.p0.0\n93662: create i12of32.p0.0\n93664: create i29of32.p1.0\n93664: create i13of32.p1.0\n93663: create i13of32.p2.0\n93663: create i29of32.p2.0\n93662: create i13of32.p0.0\n93662: create i29of32.p0.0\n93663: create i30of32.p2.0\n93664: create i30of32.p1.0\n93663: create i14of32.p2.0\n93664: create i14of32.p1.0\n93662: create i30of32.p0.0\n93662: create i14of32.p0.0\n93664: create i31of32.p1.0\n93663: create i31of32.p2.0\n93664: create i15of32.p1.0\n93663: create i15of32.p2.0\n93662: create i31of32.p0.0\n93662: create i15of32.p0.0\n93664: unlink i1of16.p1.0\n93663: unlink i1of16.p2.0\n93662: unlink i1of16.p0.0\n93664: unlink i2of16.p1.0\n93663: unlink i2of16.p2.0\n93664: unlink i3of16.p1.0\n93662: unlink i2of16.p0.0\n93663: unlink i3of16.p2.0\n93664: unlink i4of16.p1.0\n93662: unlink 
i3of16.p0.0\n93663: unlink i4of16.p2.0\n93664: unlink i5of16.p1.0\n93663: unlink i5of16.p2.0\n93664: unlink i6of16.p1.0\n93663: unlink i6of16.p2.0\n93662: unlink i4of16.p0.0\n93664: unlink i7of16.p1.0\n93663: unlink i7of16.p2.0\n93662: unlink i5of16.p0.0\n93664: unlink i8of16.p1.0\n93663: unlink i8of16.p2.0\n93664: unlink i9of16.p1.0\n93662: unlink i6of16.p0.0\n93663: unlink i9of16.p2.0\n93664: unlink i10of16.p1.0\n93662: unlink i7of16.p0.0\n93663: unlink i10of16.p2.0\n93664: unlink i11of16.p1.0\n93663: unlink i11of16.p2.0\n93664: unlink i12of16.p1.0\n93662: unlink i8of16.p0.0\n93664: unlink i13of16.p1.0\n93663: unlink i12of16.p2.0\n93662: unlink i9of16.p0.0\n93664: unlink i14of16.p1.0\n93663: unlink i13of16.p2.0\n93662: unlink i10of16.p0.0\n93664: unlink i15of16.p1.0\n93663: unlink i14of16.p2.0\n93662: unlink i11of16.p0.0\n93663: unlink i15of16.p2.0\n93662: unlink i12of16.p0.0\n93662: unlink i13of16.p0.0\n93662: unlink i14of16.p0.0\n93662: unlink i15of16.p0.0\n93664: create o19of32.p1.0\n93663: create o19of32.p2.0\n93662: create o23of32.p0.0\n93664: create o30of32.p1.0\n93663: create o20of32.p2.0\n93662: create o26of32.p0.0\n93664: create o28of32.p1.0\n93663: create o25of32.p2.0\n93664: create o21of32.p1.0\n93662: create o7of32.p0.0\n93663: create o15of32.p2.0\n93664: create o8of32.p1.0\n93663: create o2of32.p2.0\n93662: create o3of32.p0.0\n93664: create o20of32.p1.0\n93663: create o8of32.p2.0\n93662: create o18of32.p0.0\n93664: create o14of32.p1.0\n93663: create o24of32.p2.0\n93662: create o24of32.p0.0\n93664: create o12of32.p1.0\n93663: create o23of32.p2.0\n93664: create o7of32.p1.0\n93662: create o10of32.p0.0\n93663: create o10of32.p2.0\n93664: create o24of32.p1.0\n93662: create o22of32.p0.0\n93663: create o6of32.p2.0\n93664: create o11of32.p1.0\n93663: create o17of32.p2.0\n93662: create o12of32.p0.0\n93664: create o31of32.p1.0\n93663: create o4of32.p2.0\n93664: create o2of32.p1.0\n93662: create o30of32.p0.0\n93663: create o11of32.p2.0\n93664: create 
o22of32.p1.0\n93663: create o1of32.p2.0\n93662: create o14of32.p0.0\n93664: create o0of32.p1.0\n93663: create o12of32.p2.0\n93662: create o29of32.p0.0\n93664: create o6of32.p1.0\n93663: create o13of32.p2.0\n93664: create o18of32.p1.0\n93663: create o21of32.p2.0\n93662: create o8of32.p0.0\n93664: create o5of32.p1.0\n93663: create o16of32.p2.0\n93662: create o15of32.p0.0\n93664: create o4of32.p1.0\n93663: create o29of32.p2.0\n93662: create o2of32.p0.0\n93664: create o9of32.p1.0\n93663: create o3of32.p2.0\n93662: create o28of32.p0.0\n93664: create o27of32.p1.0\n93663: create o5of32.p2.0\n93662: create o31of32.p0.0\n93664: create o26of32.p1.0\n93663: create o30of32.p2.0\n93662: create o0of32.p0.0\n93664: create o3of32.p1.0\n93663: create o14of32.p2.0\n93664: create o25of32.p1.0\n93663: create o0of32.p2.0\n93662: create o13of32.p0.0\n93664: create o1of32.p1.0\n93663: create o28of32.p2.0\n93662: create o5of32.p0.0\n93664: create o17of32.p1.0\n93663: create o26of32.p2.0\n93664: create o23of32.p1.0\n93662: create o21of32.p0.0\n93663: create o18of32.p2.0\n93664: create o13of32.p1.0\n93662: create o20of32.p0.0\n93663: create o31of32.p2.0\n93664: create o16of32.p1.0\n93662: create o1of32.p0.0\n93663: create o27of32.p2.0\n93664: create o15of32.p1.0\n93662: create o16of32.p0.0\n93663: create o7of32.p2.0\n93664: create o29of32.p1.0\n93662: create o19of32.p0.0\n93663: create o22of32.p2.0\n93664: create o10of32.p1.0\n93662: create o6of32.p0.0\n93663: create o9of32.p2.0\n93662: create o9of32.p0.0\n93662: create o11of32.p0.0\n93662: create o4of32.p0.0\n93662: create o27of32.p0.0\n93662: create o17of32.p0.0\n93662: create o25of32.p0.0\n93662: unlink i9of32.p0.0\n93662: unlink i6of32.p0.0\n93662: unlink o23of32.p0.0\n93662: unlink i28of32.p0.0\n93662: unlink i27of32.p0.0\n93662: unlink o25of32.p2.0\n93662: unlink i21of32.p2.0\n93662: unlink o9of32.p1.0\n93662: unlink o6of32.p1.0\n93662: unlink o14of32.p1.0\n93662: unlink i31of32.p2.0\n93662: unlink i10of32.p1.0\n93662: unlink 
i20of32.p1.0\n93662: unlink i1of32.p1.0\n93662: unlink o24of32.p1.0\n93662: unlink o8of32.p2.0\n93662: unlink i18of32.p0.0\n93662: unlink o7of32.p2.0\n93662: unlink i17of32.p0.0\n93662: unlink o13of32.p0.0\n93662: unlink i30of32.p1.0\n93662: unlink i11of32.p2.0\n93662: unlink o1of32.p0.0\n93662: unlink o15of32.p2.0\n93662: unlink o1of32.p1.0\n93662: unlink i30of32.p0.0\n93662: unlink o13of32.p1.0\n93662: unlink i17of32.p1.0\n93662: unlink i18of32.p1.0\n93662: unlink i1of32.p0.0\n93662: unlink o24of32.p0.0\n93662: unlink i20of32.p0.0\n93662: unlink o22of32.p2.0\n93662: unlink i7of32.p2.0\n93662: unlink i8of32.p2.0\n93662: unlink i26of32.p2.0\n93662: unlink i29of32.p2.0\n93662: unlink o0of32.p2.0\n93662: unlink i10of32.p0.0\n93662: unlink o14of32.p0.0\n93662: unlink i16of32.p2.0\n93662: unlink o6of32.p0.0\n93662: unlink i19of32.p2.0\n93662: unlink o9of32.p0.0\n93662: unlink o12of32.p2.0\n93662: unlink i27of32.p1.0\n93662: unlink i28of32.p1.0\n93662: unlink i6of32.p1.0\n93662: unlink o23of32.p1.0\n93662: unlink i9of32.p1.0\n93662: unlink o25of32.p1.0\n93662: unlink i21of32.p1.0\n93662: unlink o12of32.p0.0\n93662: unlink i16of32.p0.0\n93662: unlink o6of32.p2.0\n93662: unlink i19of32.p0.0\n93662: unlink o9of32.p2.0\n93662: unlink o14of32.p2.0\n93662: unlink o0of32.p0.0\n93662: unlink i10of32.p2.0\n93662: unlink i31of32.p1.0\n93662: unlink i26of32.p0.0\n93662: unlink i29of32.p0.0\n93662: unlink o22of32.p0.0\n93662: unlink i7of32.p0.0\n93662: unlink i8of32.p0.0\n93662: unlink i20of32.p2.0\n93662: unlink i1of32.p2.0\n93662: unlink o24of32.p2.0\n93662: unlink o7of32.p1.0\n93662: unlink o8of32.p1.0\n93662: unlink i11of32.p1.0\n93662: unlink i30of32.p2.0\n93662: unlink o15of32.p1.0\n93662: unlink o15of32.p0.0\n93662: unlink i11of32.p0.0\n93662: unlink o1of32.p2.0\n93662: unlink o13of32.p2.0\n93662: unlink o8of32.p0.0\n93662: unlink i18of32.p2.0\n93662: unlink o7of32.p0.0\n93662: unlink i17of32.p2.0\n93662: unlink i8of32.p1.0\n93662: unlink o22of32.p1.0\n93662: unlink 
i7of32.p1.0\n93662: unlink i29of32.p1.0\n93662: unlink i26of32.p1.0\n93662: unlink i31of32.p0.0\n93662: unlink o0of32.p1.0\n93662: unlink i19of32.p1.0\n93662: unlink i16of32.p1.0\n93662: unlink o12of32.p1.0\n93662: unlink i21of32.p0.0\n93662: unlink o25of32.p0.0\n93662: unlink i28of32.p2.0\n93662: unlink i27of32.p2.0\n93662: unlink i9of32.p2.0\n93662: unlink i6of32.p2.0\n93662: unlink o23of32.p2.0\n93662: unlink i15of32.p1.0\n93662: unlink o30of32.p2.0\n93662: unlink o11of32.p1.0\n93662: unlink o3of32.p1.0\n93662: unlink i24of32.p2.0\n93662: unlink i5of32.p2.0\n93662: unlink o20of32.p2.0\n93662: unlink i22of32.p0.0\n93662: unlink o29of32.p0.0\n93662: unlink o26of32.p0.0\n93662: unlink i3of32.p0.0\n93662: unlink o31of32.p1.0\n93662: unlink o10of32.p2.0\n93662: unlink o4of32.p0.0\n93662: unlink i14of32.p2.0\n93662: unlink o19of32.p0.0\n93662: unlink o16of32.p0.0\n93662: unlink i12of32.p0.0\n93662: unlink o2of32.p2.0\n93662: unlink o21of32.p1.0\n93662: unlink i4of32.p1.0\n93662: unlink i25of32.p1.0\n93662: unlink i23of32.p2.0\n93662: unlink i2of32.p2.0\n93662: unlink o27of32.p2.0\n93662: unlink o28of32.p2.0\n93662: unlink i25of32.p0.0\n93662: unlink o21of32.p0.0\n93662: unlink i4of32.p0.0\n93662: unlink i12of32.p1.0\n93662: unlink o16of32.p1.0\n93662: unlink o19of32.p1.0\n93662: unlink o4of32.p1.0\n93662: unlink o31of32.p0.0\n93662: unlink o26of32.p1.0\n93662: unlink i3of32.p1.0\n93662: unlink o29of32.p1.0\n93662: unlink i22of32.p1.0\n93662: unlink o17of32.p2.0\n93662: unlink o18of32.p2.0\n93662: unlink o3of32.p0.0\n93662: unlink i13of32.p2.0\n93662: unlink o11of32.p0.0\n93662: unlink i15of32.p0.0\n93662: unlink o5of32.p2.0\n93662: unlink i15of32.p2.0\n93662: unlink o5of32.p0.0\n93662: unlink o11of32.p2.0\n93662: unlink o30of32.p1.0\n93662: unlink o3of32.p2.0\n93662: unlink i13of32.p0.0\n93662: unlink o17of32.p0.0\n93662: unlink o18of32.p0.0\n93662: unlink i24of32.p1.0\n93662: unlink i5of32.p1.0\n93662: unlink o20of32.p1.0\n93662: unlink o10of32.p1.0\n93662: unlink 
o31of32.p2.0\n93662: unlink i14of32.p1.0\n93662: unlink o2of32.p1.0\n93662: unlink o21of32.p2.0\n93662: unlink i4of32.p2.0\n93662: unlink i25of32.p2.0\n93662: unlink i2of32.p0.0\n93662: unlink o27of32.p0.0\n93662: unlink o28of32.p0.0\n93662: unlink i23of32.p0.0\n93662: unlink i23of32.p1.0\n93662: unlink o28of32.p1.0\n93662: unlink i2of32.p1.0\n93662: unlink o27of32.p1.0\n93662: unlink i12of32.p2.0\n93662: unlink o2of32.p0.0\n93662: unlink o19of32.p2.0\n93662: unlink o16of32.p2.0\n93662: unlink o4of32.p2.0\n93662: unlink i14of32.p0.0\n93662: unlink o10of32.p0.0\n93662: unlink o29of32.p2.0\n93662: unlink o26of32.p2.0\n93662: unlink i3of32.p2.0\n93662: unlink i22of32.p2.0\n93662: unlink i5of32.p0.0\n93662: unlink o20of32.p0.0\n93662: unlink i24of32.p0.0\n93662: unlink o18of32.p1.0\n93662: unlink o17of32.p1.0\n93662: unlink i13of32.p1.0\n93662: unlink o30of32.p0.0\n93662: unlink o5of32.p1.0", "msg_date": "Wed, 10 May 2023 15:11:20 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Unlinking Parallel Hash Join inner batch files sooner" }, { "msg_contents": "Hi,\n\nThanks for working on this!\n\nOn Wed, 10 May 2023 15:11:20 +1200\nThomas Munro <thomas.munro@gmail.com> wrote:\n\n> One complaint about PHJ is that it can, in rare cases, use a\n> surprising amount of temporary disk space where non-parallel HJ would\n> not. When it decides that it needs to double the number of batches to\n> try to fit each inner batch into memory, and then again and again\n> depending on your level of bad luck, it leaves behind all the earlier\n> generations of inner batch files to be cleaned up at the end of the\n> query. That's stupid. Here's a patch to unlink them sooner, as a\n> small improvement.\n\nThis patch can indeed save a decent amount of temporary disk space.\n\nConsidering its complexity is (currently?) 
quite low, it's worth it.\n\n> The reason I didn't do this earlier is that sharedtuplestore.c\n> continues the pre-existing tradition where each parallel process\n> counts what it writes against its own temp_file_limit. At the time I\n> thought I'd need to have one process unlink all the files, but if a\n> process were to unlink files that it didn't create, that accounting\n> system would break. Without some new kind of shared temp_file_limit\n> mechanism that doesn't currently exist, per-process counters could go\n> negative, creating free money. In the attached patch, I realised\n> something that I'd missed before: there is a safe point for each\n> backend to unlink just the files that it created, and there is no way\n> for a process that created files not to reach that point.\n\nIndeed.\n\nFor what it's worth, from my new and non-experienced understanding of the\nparallel mechanism, waiting for all workers to reach\nWAIT_EVENT_HASH_GROW_BATCHES_REPARTITION, after re-dispatching old batches in\nnew ones, seems like a safe place to instruct each worker to clean their old\ntemp files.\n\n> Here's an example query that tries 8, 16 and then 32 batches on my\n> machine, because reltuples is clobbered with a bogus value.\n\nNice!\n\nRegards,\n\n\n", "msg_date": "Wed, 10 May 2023 23:00:31 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Unlinking Parallel Hash Join inner batch files sooner" }, { "msg_contents": "On 11/05/2023 00:00, Jehan-Guillaume de Rorthais wrote:\n> On Wed, 10 May 2023 15:11:20 +1200\n> Thomas Munro <thomas.munro@gmail.com> wrote:\n>> The reason I didn't do this earlier is that sharedtuplestore.c\n>> continues the pre-existing tradition where each parallel process\n>> counts what it writes against its own temp_file_limit. 
At the time I\n>> thought I'd need to have one process unlink all the files, but if a\n>> process were to unlink files that it didn't create, that accounting\n>> system would break. Without some new kind of shared temp_file_limit\n>> mechanism that doesn't currently exist, per-process counters could go\n>> negative, creating free money. In the attached patch, I realised\n>> something that I'd missed before: there is a safe point for each\n>> backend to unlink just the files that it created, and there is no way\n>> for a process that created files not to reach that point.\n> \n> Indeed.\n> \n> For what it worth, from my new and non-experienced understanding of the\n> parallel mechanism, waiting for all workers to reach\n> WAIT_EVENT_HASH_GROW_BATCHES_REPARTITION, after re-dispatching old batches in\n> new ones, seems like a safe place to instruct each workers to clean their old\n> temp files.\n\nLooks good to me too at a quick glance. There's this one \"XXX free\" \ncomment though:\n\n> \tfor (int i = 1; i < old_nbatch; ++i)\n> \t{\n> \t\tParallelHashJoinBatch *shared =\n> \t\tNthParallelHashJoinBatch(old_batches, i);\n> \t\tSharedTuplestoreAccessor *accessor;\n> \n> \t\taccessor = sts_attach(ParallelHashJoinBatchInner(shared),\n> \t\t\t\t\t\t\t ParallelWorkerNumber + 1,\n> \t\t\t\t\t\t\t &pstate->fileset);\n> \t\tsts_dispose(accessor);\n> \t\t/* XXX free */\n> \t}\n\nI think that's referring to the fact that sts_dispose() doesn't free the \n'accessor', or any of the buffers etc. that it contains. That's a \npre-existing problem, though: ExecParallelHashRepartitionRest() already \nleaks the SharedTuplestoreAccessor structs and their buffers etc. of the \nold batches. I'm a little surprised there isn't already an sts_free() \nfunction.\n\nAnother thought is that it's a bit silly to have to call sts_attach() \njust to delete the files. 
Maybe sts_dispose() should take the same three \narguments that sts_attach() does, instead.\n\nSo that freeing would be nice to tidy up, although the amount of memory \nleaked is tiny so might not be worth it, and it's a pre-existing issue. \nI'm marking this as Ready for Committer.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 27 Sep 2023 19:42:03 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Unlinking Parallel Hash Join inner batch files sooner" }, { "msg_contents": "On Wed, Sep 27, 2023 at 11:42 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> Looks good to me too at a quick glance. There's this one \"XXX free\"\n> comment though:\n>\n> > for (int i = 1; i < old_nbatch; ++i)\n> > {\n> > ParallelHashJoinBatch *shared =\n> > NthParallelHashJoinBatch(old_batches, i);\n> > SharedTuplestoreAccessor *accessor;\n> >\n> > accessor = sts_attach(ParallelHashJoinBatchInner(shared),\n> > ParallelWorkerNumber + 1,\n> > &pstate->fileset);\n> > sts_dispose(accessor);\n> > /* XXX free */\n> > }\n>\n> I think that's referring to the fact that sts_dispose() doesn't free the\n> 'accessor', or any of the buffers etc. that it contains. That's a\n> pre-existing problem, though: ExecParallelHashRepartitionRest() already\n> leaks the SharedTuplestoreAccessor structs and their buffers etc. of the\n> old batches. I'm a little surprised there isn't aready an sts_free()\n> function.\n>\n> Another thought is that it's a bit silly to have to call sts_attach()\n> just to delete the files. 
Maybe sts_dispose() should take the same three\n> arguments that sts_attach() does, instead.\n>\n> So that freeing would be nice to tidy up, although the amount of memory\n> leaked is tiny so might not be worth it, and it's a pre-existing issue.\n> I'm marking this as Ready for Committer.\n\n(I thought I'd go around and nudge CF entries where both author and\nreviewer are committers.)\n\nHi Thomas, do you have any additional thoughts on the above?\n\n\n", "msg_date": "Wed, 22 Nov 2023 15:34:16 +0700", "msg_from": "John Naylor <johncnaylorls@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unlinking Parallel Hash Join inner batch files sooner" }, { "msg_contents": "Hi,\n\nI see in [1] that the reporter mentioned a delay between the error \nmessage in parallel HashJoin and the return of control back from PSQL. Your \npatch might reduce this delay.\nAlso, I have the same complaint from users who processed gigabytes of \ndata in parallel HashJoin. Presumably, they also got stuck in the unlink \nof tons of temporary files. So, are you going to do something with this \ncode?\n\n[1] \nhttps://www.postgresql.org/message-id/18349-83d33dd3d0c855c3%40postgresql.org\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Wed, 21 Feb 2024 13:34:37 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Unlinking Parallel Hash Join inner batch files sooner" }, { "msg_contents": "On Wed, Feb 21, 2024 at 7:34 PM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> I see in [1] that the reporter mentioned a delay between the error\n> message in parallel HashJoin and the return control back from PSQL. Your\n> patch might reduce this delay.\n> Also, I have the same complaint from users who processed gigabytes of\n> data in parallel HashJoin. Presumably, they also stuck into the unlink\n> of tons of temporary files. So, are you going to do something with this\n> code?\n\nYeah, right. 
I will aim to get this into the tree next week. First,\nthere are a couple of minor issues to resolve around freeing that\nHeikki mentioned. Then there is the question of whether we think this\nmight be a candidate for back-patching, given the complaints you\nmention. Opinions?\n\nI would add that the problems you reach when you get to very large\nnumber of partitions are hard (see several very long threads about\nextreme skew for one version of the problem, but even with zero/normal\nskewness and perfect estimation of the number of partitions, if you\nask a computer to partition 42TB of data into partitions that fit in a\nwork_mem suitable for a Commodore 64, it's gonna hurt on several\nlevels) and this would only slightly improve one symptom. One idea\nthat might improve just the directory entry and file descriptor\naspect, would be to scatter the partitions into (say) 1MB chunks\nwithin the file, and hope that the file system supports holes (a bit\nlike logtape.c's multiplexing but I wouldn't do it quite like that).\n\n\n", "msg_date": "Thu, 22 Feb 2024 12:42:09 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Unlinking Parallel Hash Join inner batch files sooner" }, { "msg_contents": "On 22/2/2024 06:42, Thomas Munro wrote:\n> On Wed, Feb 21, 2024 at 7:34 PM Andrei Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> I see in [1] that the reporter mentioned a delay between the error\n>> message in parallel HashJoin and the return control back from PSQL. Your\n>> patch might reduce this delay.\n>> Also, I have the same complaint from users who processed gigabytes of\n>> data in parallel HashJoin. Presumably, they also stuck into the unlink\n>> of tons of temporary files. So, are you going to do something with this\n>> code?\n> \n> Yeah, right. I will aim to get this into the tree next week. First,\n> there are a couple of minor issues to resolve around freeing that\n> Heikki mentioned. 
Then there is the question of whether we think this\n> might be a candidate for back-patching, given the complaints you\n> mention. Opinions?\nThe code is related to performance, not a bug. Also, it adds one \nexternal function into 'sharedtuplestore.h'. IMO, it isn't worth it \nto back-patch.\n> \n> I would add that the problems you reach when you get to very large\n> number of partitions are hard (see several very long threads about\n> extreme skew for one version of the problem, but even with zero/normal\n> skewness and perfect estimation of the number of partitions, if you\n> ask a computer to partition 42TB of data into partitions that fit in a\n> work_mem suitable for a Commodore 64, it's gonna hurt on several\n> levels) and this would only slightly improve one symptom. One idea\n> that might improve just the directory entry and file descriptor\n> aspect, would be to scatter the partitions into (say) 1MB chunks\n> within the file, and hope that the file system supports holes (a bit\n> like logtape.c's multiplexing but I wouldn't do it quite like that).\nThanks, I found in [1] a good entry point to dive into this issue.\n\n[1] \nhttps://www.postgresql.org/message-id/CA+hUKGKDbv+5uiJZDdB1wttkMPFs9CDb6=02Qxitq4am-KBM_A@mail.gmail.com\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Thu, 22 Feb 2024 11:37:42 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Unlinking Parallel Hash Join inner batch files sooner" }, { "msg_contents": "On Thu, Feb 22, 2024 at 5:37 PM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 22/2/2024 06:42, Thomas Munro wrote:\n> > extreme skew for one version of the problem, but even with zero/normal\n> > skewness and perfect estimation of the number of partitions, if you\n\nSorry, I meant to write \"but even with no duplicates\" there (mention\nof \"normal\" was brain fade).\n\n\n", "msg_date": "Fri, 23 Feb 2024 08:42:56 +1300", 
"msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Unlinking Parallel Hash Join inner batch files sooner" }, { "msg_contents": "On Wed, Feb 21, 2024 at 6:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Yeah, right. I will aim to get this into the tree next week. First,\n> there are a couple of minor issues to resolve around freeing that\n> Heikki mentioned. Then there is the question of whether we think this\n> might be a candidate for back-patching, given the complaints you\n> mention. Opinions?\n\nIt doesn't appear to me that this got committed. On the procedural\nquestion, I would personally treat it as a non-back-patchable bug fix\ni.e. master-only but without regard to feature freeze. However, I can\nsee arguments for either treating it as a back-patchable fix or for\nwaiting until v18 development opens. What would you like to do?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 May 2024 14:56:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unlinking Parallel Hash Join inner batch files sooner" } ]
[ { "msg_contents": "I was looking at the new smgrzeroextend() function in the smgr API. The \ndocumentation isn't very extensive:\n\n/*\n * smgrzeroextend() -- Add new zeroed out blocks to a file.\n *\n * Similar to smgrextend(), except the relation can be extended by\n * multiple blocks at once and the added blocks will be filled with\n * zeroes.\n */\n\nThe documentation of smgrextend() is:\n\n/*\n * smgrextend() -- Add a new block to a file.\n *\n * The semantics are nearly the same as smgrwrite(): write at the\n * specified position. However, this is to be used for the case of\n * extending a relation (i.e., blocknum is at or beyond the current\n * EOF). Note that we assume writing a block beyond current EOF\n * causes intervening file space to become filled with zeroes.\n */\n\nSo if you want to understand what smgrzeroextend() does, you need to \nmentally combine the documentation of three different functions. Could \nwe write documentation for each function that stands on its own? And \ndocument the function arguments, like what does blocknum and nblocks mean?\n\nMoreover, the text \"except the relation can be extended by multiple \nblocks at once and the added blocks will be filled with zeroes\" doesn't \nmake much sense as a differentiation, because smgrextend() does that as \nwell.\n\nAFAICT, the differences between smgrextend() and smgrzeroextend() are:\n\n1. smgrextend() writes a payload block in addition to extending the \nfile, smgrzeroextend() just extends the file without writing a payload.\n\n2. smgrzeroextend() uses various techniques (posix_fallocate() etc.) to \nmake sure the extended space is actually reserved on disk, smgrextend() \ndoes not.\n\n#1 seems fine, but the naming of the APIs does not reflect that at all.\n\nIf we think that #2 is important, maybe smgrextend() should do that as \nwell? 
Or at least explain why it's not needed?\n\n\n", "msg_date": "Wed, 10 May 2023 11:50:14 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "smgrzeroextend clarification" }, { "msg_contents": "On Wed, May 10, 2023 at 3:20 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> I was looking at the new smgrzeroextend() function in the smgr API. The\n> documentation isn't very extensive:\n>\n> /*\n> * smgrzeroextend() -- Add new zeroed out blocks to a file.\n> *\n> * Similar to smgrextend(), except the relation can be extended by\n> * multiple blocks at once and the added blocks will be filled with\n> * zeroes.\n> */\n>\n> The documentation of smgrextend() is:\n>\n> /*\n> * smgrextend() -- Add a new block to a file.\n> *\n> * The semantics are nearly the same as smgrwrite(): write at the\n> * specified position. However, this is to be used for the case of\n> * extending a relation (i.e., blocknum is at or beyond the current\n> * EOF). Note that we assume writing a block beyond current EOF\n> * causes intervening file space to become filled with zeroes.\n> */\n>\n> So if you want to understand what smgrzeroextend() does, you need to\n> mentally combine the documentation of three different functions. Could\n> we write documentation for each function that stands on its own? And\n> document the function arguments, like what does blocknum and nblocks mean?\n\nWhy not?\n\n> Moreover, the text \"except the relation can be extended by multiple\n> blocks at once and the added blocks will be filled with zeroes\" doesn't\n> make much sense as a differentiation, because smgrextend() does that as\n> well.\n\nNot exactly. smgrextend() doesn't write a zero-ed block on its own, it\nwrites the content that's passed to it via 'buffer'. It's just that\nsome of smgrextend() callers pass in a zero buffer. 
Whereas,\nsmgrzeroextend() writes zero-ed blocks on its own, something like\nsmgrextend() called in a loop with a zero-ed 'buffer' would do. Therefore, the\nexisting wording seems fine to me.\n\n> AFAICT, the differences between smgrextend() and smgrzeroextend() are:\n>\n> 1. smgrextend() writes a payload block in addition to extending the\n> file, smgrzeroextend() just extends the file without writing a payload.\n\nI think how smgrzeroextend() extends the zeroed blocks is internal to\nit, and mdzeroextend() happens to use fallocate (if available). I\nthink the existing wording around smgrzeroextend() seems fine to me.\n\n> 2. smgrzeroextend() uses various techniques (posix_fallocate() etc.) to\n> make sure the extended space is actually reserved on disk, smgrextend()\n> does not.\n\nIt's not smgrzeroextend() per se, it is mdzeroextend() that uses\nfallocate() if available.\n\nOverall, +0.5 from me if we want to avoid comment traversals to\nunderstand what these functions do and be more descriptive; we might\nend up duplicating comments. But, I'm fine with \"except the relation\ncan be extended by multiple....\" before smgrzeroextend().\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 May 2023 18:28:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: smgrzeroextend clarification" }, { "msg_contents": "Hi,\n\nOn 2023-05-10 11:50:14 +0200, Peter Eisentraut wrote:\n> I was looking at the new smgrzeroextend() function in the smgr API. 
The\n> documentation isn't very extensive:\n> \n> /*\n> * smgrzeroextend() -- Add new zeroed out blocks to a file.\n> *\n> * Similar to smgrextend(), except the relation can be extended by\n> * multiple blocks at once and the added blocks will be filled with\n> * zeroes.\n> */\n> \n> The documentation of smgrextend() is:\n> \n> /*\n> * smgrextend() -- Add a new block to a file.\n> *\n> * The semantics are nearly the same as smgrwrite(): write at the\n> * specified position. However, this is to be used for the case of\n> * extending a relation (i.e., blocknum is at or beyond the current\n> * EOF). Note that we assume writing a block beyond current EOF\n> * causes intervening file space to become filled with zeroes.\n> */\n> \n> So if you want to understand what smgrzeroextend() does, you need to\n> mentally combine the documentation of three different functions. Could we\n> write documentation for each function that stands on its own? And document\n> the function arguments, like what does blocknum and nblocks mean?\n\nI guess it couldn't hurt. But if we go down that route, we basically need to\nrewrite all the function headers in smgr.c, I think.\n\n\n> Moreover, the text \"except the relation can be extended by multiple blocks\n> at once and the added blocks will be filled with zeroes\" doesn't make much\n> sense as a differentiation, because smgrextend() does that as well.\n\nHm? smgrextend() writes a single block, and it's filled with the caller\nprovided buffer.\n\n\n> AFAICT, the differences between smgrextend() and smgrzeroextend() are:\n> \n> 1. smgrextend() writes a payload block in addition to extending the file,\n> smgrzeroextend() just extends the file without writing a payload.\n> \n> 2. smgrzeroextend() uses various techniques (posix_fallocate() etc.) 
to make\n> sure the extended space is actually reserved on disk, smgrextend() does not.\n> \n> #1 seems fine, but the naming of the APIs does not reflect that at all.\n> \n> If we think that #2 is important, maybe smgrextend() should do that as well?\n> Or at least explain why it's not needed?\n\nsmgrextend() does #2 - it just does it by writing data.\n\nThe FileFallocate() path in smgrzeroextend() tries to avoid writing data if\nextending by sufficient blocks - not having dirty data in the kernel page\ncache can substantially reduce the IO usage.\n\nWhereas the FileZero() path just optimizes the number of syscalls (and cache\nmisses etc).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 May 2023 11:10:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: smgrzeroextend clarification" }, { "msg_contents": "On 10.05.23 20:10, Andres Freund wrote:\n>> Moreover, the text \"except the relation can be extended by multiple blocks\n>> at once and the added blocks will be filled with zeroes\" doesn't make much\n>> sense as a differentiation, because smgrextend() does that as well.\n> \n> Hm? smgrextend() writes a single block, and it's filled with the caller\n> provided buffer.\n\nBut there is nothing that says that the block written by smgrextend() \nhas to be the one right after the last existing block. You can give it \nany block number, and it will write there, and the blocks in between \nthat are skipped over will effectively be filled with zeros. 
This is \nbecause of the way the POSIX file system APIs work.\n\nYou can observe this by hacking it up like this:\n\n smgrextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,\n char *buffer, bool skipFsync)\n {\n+ if (blocknum > smgrnblocks(reln, forknum) + 1)\n+ elog(INFO, \"XXX\");\n+\n smgrsw[reln->smgr_which].smgr_extend(reln, forknum, blocknum,\n buffer, skipFsync);\n\nThen you will get various test \"failures\" for hash indexes.\n\nIf you hack it up even further and actively fill the skipped-over blocks \nwith something other than zeros, you will get even more dramatic failures.\n\nSo apparently, this behavior is actively being used.\n\nMaybe it was never meant that way and only works accidentally? Maybe \nhash indexes are broken?\n\n\n\n", "msg_date": "Thu, 11 May 2023 11:37:00 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: smgrzeroextend clarification" }, { "msg_contents": "On Thu, 11 May 2023 at 05:37, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Maybe it was never meant that way and only works accidentally? Maybe\n> hash indexes are broken?\n\nIt's explicitly documented to be this way. And I think it has to work\nthis way for recovery to work.\n\nI think the reason you and Bharath and Andres are talking past each\nother is that they're thinking about how the implementation works and\nyou're talking about the API definition.\n\nIf you read the API definition and treat the functions as a black box\nI think you're right -- those two definitions sound pretty much\nequivalent to me. They both extend the file, possibly multiple blocks,\nand zero fill. 
The only difference is that smgrextend() additionally\nallows you to provide data.\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 12 May 2023 14:06:38 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: smgrzeroextend clarification" }, { "msg_contents": "On Sat, May 13, 2023 at 6:07 AM Greg Stark <stark@mit.edu> wrote:\n> On Thu, 11 May 2023 at 05:37, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> > Maybe it was never meant that way and only works accidentally? Maybe\n> > hash indexes are broken?\n>\n> It's explicitly documented to be this way. And I think it has to work\n> this way for recovery to work.\n>\n> I think the reason you and Bharath and Andres are talking past each\n> other is that they're thinking about how the implementation works and\n> you're talking about the API definition.\n>\n> If you read the API definition and treat the functions as a black box\n> I think you're right -- those two definitions sound pretty much\n> equivalent to me. They both extend the file, possibly multiple blocks,\n> and zero fill. The only difference is that smgrextend() additionally\n> allows you to provide data.\n\nJust a thought: should RelationCopyStorageUsingBuffer(), the new code\nused by CREATE DATABASE with the default strategy WAL_LOG, use the\nnewer interface so that it creates fully allocated files instead of\nsparse ones?\n\n\n", "msg_date": "Sat, 13 May 2023 06:36:23 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: smgrzeroextend clarification" }, { "msg_contents": "Hi, \n\nOn May 11, 2023 2:37:00 AM PDT, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>On 10.05.23 20:10, Andres Freund wrote:\n>>> Moreover, the text \"except the relation can be extended by multiple blocks\n>>> at once and the added blocks will be filled with zeroes\" doesn't make much\n>>> sense as a differentiation, because smgrextend() does that as well.\n>> \n>> Hm? 
smgrextend() writes a single block, and it's filled with the caller\n>> provided buffer.\n>\n>But there is nothing that says that the block written by smgrextend() has to be the one right after the last existing block. You can give it any block number, and it will write there, and the blocks in between that are skipped over will effectively be filled with zeros. This is because of the way the POSIX file system APIs work.\n\nSure, but that's pretty much independent of my changes. With the exception of, I believe, hash indexes we are quite careful to never leave holes in files. And not just for performance reasons - it'd make it much more likely to encounter ENOSPC while writing back blocks. Being unable to checkpoint (because they fail due to ENOSPC) is quite nasty. \n\n\n>Maybe it was never meant that way and only works accidentally? Maybe hash indexes are broken?\n\nIt's known behavior I think - but also quite bad. I think it got a good bit worse after WAL support for hash indexes went in. I think during replay we sometimes end up actually allocating the blocks one by one.\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 12 May 2023 15:12:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: smgrzeroextend clarification" }, { "msg_contents": "Hi, \n\nOn May 12, 2023 11:36:23 AM PDT, Thomas Munro <thomas.munro@gmail.com> wrote:\n>Just a thought: should RelationCopyStorageUsingBuffer(), the new code\n>used by CREATE DATABASE with the default strategy WAL_LOG, use the\n>newer interface so that it creates fully allocated files instead of\n>sparse ones?\n\nI played with that, but at least on Linux with ext4 and xfs it was hard to find cases where it really was beneficial. 
That's actually how I ended up finding the issues I'd fixed recently-ish.\n\nI think it might be different if we had an option to not use a strategy for the target database - right now we IIRC write back due to ring replacement. Entirely or largely in order, which I think removes most of the issues you could have.\n\nOne issue is that it'd be worse on platforms / filesystems without fallocate support, because we would write the data back twice (once with zeros, once the real data). Perhaps we should add a separate parameter controlling the fallback behaviour.\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 12 May 2023 16:24:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: smgrzeroextend clarification" }, { "msg_contents": "On 10.05.23 20:10, Andres Freund wrote:\n>> So if you want to understand what smgrzeroextend() does, you need to\n>> mentally combine the documentation of three different functions. Could we\n>> write documentation for each function that stands on its own? And document\n>> the function arguments, like what does blocknum and nblocks mean?\n> I guess it couldn't hurt. But if we go down that route, we basically need to\n> rewrite all the function headers in smgr.c, I think.\n\nI took a stab at this, going through the function comments in smgr.c and \nmd.c and tried to make some things easier to follow.\n\n- Took out the weird leading tabs in the older comments.\n\n- Rephrased some comments so that smgr.c is more like an API \ndocumentation and md.c documents what that particular instance of that \nAPI does.\n\n- Moved the *extend and *zeroextend functions to a more sensible place \namong all the functions. Especially since *write and *extend are very \nsimilar, it makes sense to have them close together. 
This way, it's \nalso easier to follow \"this function is like that function\" comments.\n\n- Also moved mdwriteback(), which was in some completely odd place.\n\n- Added comments for function arguments that were previously not documented.\n\n- Reworded the comments for *extend and *zeroextend to make more sense \nrelative to each other.\n\n- I left this for smgrzeroextend():\n\nFIXME: why both blocknum and nblocks\n\nLike, what happens when blocknum is not the block right after the last \nexisting block? Do you get an implicit POSIX hole between the end of \nthe file and the specified block and then an explicit hole for the next \nnblocks? We should be clear here what the designer of this API intended.\n\n\nThe name smgrzeroextend is not great. The \"zero\" isn't what \ndistinguishes it from smgrextend.", "msg_date": "Tue, 16 May 2023 20:40:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: smgrzeroextend clarification" }, { "msg_contents": "Hi,\n\nOn 2023-05-16 20:40:01 +0200, Peter Eisentraut wrote:\n> On 10.05.23 20:10, Andres Freund wrote:\n> > > So if you want to understand what smgrzeroextend() does, you need to\n> > > mentally combine the documentation of three different functions. Could we\n> > > write documentation for each function that stands on its own? And document\n> > > the function arguments, like what does blocknum and nblocks mean?\n> > I guess it couldn't hurt. But if we go down that route, we basically need to\n> > rewrite all the function headers in smgr.c, I think.\n> \n> I took a stab at this, going through the function comments in smgr.c and\n> md.c and try to make some things easier to follow.\n> \n> - Took at out the weird leading tabs in the older comments.\n\nI hate those. I wonder if we could devise a regex that'd remove them\ntree-wide, instead of ending up doing it very slowly one by one - I've\ne.g. removed them from a bunch of pgstat files. 
Reflowing the comments after\nis probably the most painful part.\n\nFWIW, I'd do that in a separate commit, and then add that commit to\n.git-blame-ignore-revs. Otherwise blaming gets unnecessarily painful. Also\nmakes it easier to see the actual content changes...\n\n\n> - Rephrased some comments so that smgr.c is more like an API documentation\n> and md.c documents what that particular instance of that API does.\n> \n> - Move the *extend and *zeroextend functions to a more sensible place among\n> all the functions. Especially since *write and *extend are very similar, it\n> makes sense to have them close together. This way, it's also easier to\n> follow \"this function is like that function\" comments.\n\nFor me the prior location made a bit more sense - we should always extend the\nfile before writing or reading the relevant blocks. But I don't really care.\n\n\n> - Also moved mdwriteback(), which was in some completely odd place.\n> \n> - Added comments for function arguments that were previously not documented.\n> \n> - Reworded the comments for *extend and *zeroextend to make more sense\n> relative to each other.\n> \n> - I left this for smgrzeroextend():\n> \n> FIXME: why both blocknum and nblocks\n\nI guess you're suggesting that we would do an lstat() to figure out the size\ninstead? Or use some cached size? That'd not be trivial to add - and it just\nseems unrelated to smgrzeroextend(), it's just as true for smgrextend().\n\n\n> Like, what happens when blocknum is not the block right after the last\n> existing block?\n\nThe same as with smgrextend().\n\n\n> Do you get an implicit POSIX hole between the end of the file and the\n> specified block and then an explicit hole for the next nblocks?\n\nI don't know what you mean with an explicit hole? The whole point of\nposix_fallocate() is that it actually allocates blocks - I don't really see\nhow such a space could be described as a hole? 
My understanding of the\n\"implicit POSIX hole\" semantics is that the point precisely is that there is\n*no* space allocated for the region.\n\n\n> We should be clear here what the designer of this API intended.\n> \n> \n> The name smgrzeroextend is not great. The \"zero\" isn't what distinguishes\n> it from smgrextend.\n\nI think it's a quite distinguishing feature - you can't provide initial block\ncontents, which you can with smgrextend() (and we largely do). And using\nfallocate() is only possible because we know that we're intending to read\nzeros.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 16 May 2023 16:38:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: smgrzeroextend clarification" }, { "msg_contents": "I have committed some of the unrelated formatting changes separately, so \nwhat's left now is attached.\n\nOn 17.05.23 01:38, Andres Freund wrote:\n>> - I left this for smgrzeroextend():\n>>\n>> FIXME: why both blocknum and nblocks\n> \n> I guess you're suggesting that we would do an lstat() to figure out the size\n> instead? Or use some cached size? That'd not be trivial to add - and it just\n> seems unrelated to smgrzeroextend(), it's just as true for smgrextend().\n\nWhat smgrzeroextend() does now seems sensible to me. I'd just like one \nor two sentences that document its API. Right now, blocknum and nblocks \nare not documented at all. Of course, we can guess, but I'm also \ninterested in edge cases. What are you supposed to pass for blocknum? \nWhat happens if it's not right after the current last block? You \nanswered that, but is that just what happens to happen, or do we \nactually want to support that? What happens if blocknum is *before* the \ncurrently last block? Would that overwrite the last existing blocks \nwith zero? What if nblocks is negative? 
Does that zero out blocks \nbackwards?\n\nObviously, the answer for most of these right now is that you're not \nsupposed to do that. But as the smgrextend() + hash index case shows, \nthese things tend to grow in unexpected directions.\n\nAlso, slightly unrelated to the API, did we consider COW file systems? \nLike, is explicitly allocating blocks of zeroes sensible on btrfs?", "msg_date": "Fri, 19 May 2023 17:03:21 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: smgrzeroextend clarification" } ]
[ { "msg_contents": "We already have\n\\ef\n\\ev\n\nThe use case here is simply that it saves me from:\n\\d <table>\n[scroll through all the fields]\n[often scroll right]\nselect function name\n\\ef [paste function name]\n\nand tab completion is much narrower\n\nWhen doing conversions and reviews all of this stuff has to be reviewed.\nOftentimes, renamed, touched.\n\nI am 100% willing to write the code, docs, etc. but would appreciate\nfeedback.\n\nKirk...", "msg_date": "Wed, 10 May 2023 11:32:44 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "Discussion: psql \\et <trigger_name> -> edit the trigger function" }, { "msg_contents": "Hi\n\nst 10. 5. 2023 v 17:33 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n\n> We already have\n> \\ef\n> \\ev\n>\n> The use case here is simply that it saves me from:\n> \\d <table>\n> [scroll through all the fields]\n> [often scroll right]\n> select function name\n> \\ef [paste function name]\n>\n> and tab completion is much narrower\n>\n> When doing conversions and reviews all of this stuff has to be reviewed.\n> Oftentimes, renamed, touched.\n>\n> I am 100% willing to write the code, docs, etc. but would appreciate\n> feedback.\n>\n\n\\et can be a little bit confusing, because it looks like editing the trigger, not\nthe trigger function\n\nwhat \\eft triggername\n\n?\n\nregards\n\nPavel\n\n\n\n>\n> Kirk...\n>\n
", "msg_date": "Wed, 10 May 2023 18:19:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Discussion: psql \\et <trigger_name> -> edit the trigger function" }, { "msg_contents": "Kirk Wolak <wolakk@gmail.com> writes:\n\n> We already have\n> \\ef\n> \\ev\n>\n> The use case here is simply that it saves me from:\n> \\d <table>\n> [scroll through all the fields]\n> [often scroll right]\n> select function name\n> \\ef [paste function name]\n>\n> and tab completion is much narrower\n\nI think it would make more sense to model it on the filtering letters\navailable for \\df:\n\n \\df[anptw][S+] [FUNCPTRN [TYPEPTRN ...]]\n list [only agg/normal/procedure/trigger/window] functions\n\n\nI just noticed that tab completion after e.g. \\dft does not take the\nfunction type restriction into account, so the solution for \\ef<letters>\nshould be made to work for both. I wonder if it would even be possible\nto share the tab completion filtering conditions with the actual\nimplementation of \\df.\n\nAlso, I notice that \\df only tab completes functions (i.e. not\nprocedures), although it actually returns all routines.\n\n> When doing conversions and reviews all of this stuff has to be reviewed.\n> Oftentimes, renamed, touched.\n>\n> I am 100% willing to write the code, docs, etc. 
but would appreciate\n> feedback.\n\nI'm happy to assist with and review at least the tab completion parts of\nthis effort.\n\n> Kirk...\n\n- ilmari\n\n\n", "msg_date": "Wed, 10 May 2023 17:24:54 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Discussion: psql \\et <trigger_name> -> edit the trigger function" }, { "msg_contents": "On Wed, May 10, 2023 at 12:20 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> st 10. 5. 2023 v 17:33 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n>\n>> We already have\n>> \\ef\n>> \\ev\n>>\n>> The use case here is simply that it saves me from:\n>> \\d <table>\n>> [scroll through all the fields]\n>> [often scroll right]\n>> select function name\n>> \\ef [paste function name]\n>>\n>> and tab completion is much narrower\n>>\n>> When doing conversions and reviews all of this stuff has to be reviewed.\n>> Oftentimes, renamed, touched.\n>>\n>> I am 100% willing to write the code, docs, etc. but would appreciate\n>> feedback.\n>>\n>\n> \\et can be little bit confusing, because looks like editing trigger, not\n> trigger function\n>\n> what \\eft triggername\n>\n> ?\n>\n> Pavel, I am \"torn\" because of my OCD, I would expect\n\\eft <TAB>\nto list functions that RETURN TRIGGER as opposed to the level of\nindirection I was aiming for.\n\nwhere\n\\et <TAB>\n Would specifically let me complete the Trigger_Name, but find the function\n\nIt makes me wonder, now if:\n\\etf\n\nIs better for this (edit trigger function... given the trigger name).\nAnd as another poster suggested. As we do the AUTOCOMPLETE for that, we\ncould address it for:\n\\ef?\n\nbecause:\n\\eft <TAB>\nis valuable as well, and deserves to work just like all \\ef? items\n\nIt seems like a logical way to break it down.\n\n> regards\n>\n> Pavel\n>\n>\n>\n>>\n>> Kirk...\n>>\n
", "msg_date": "Wed, 10 May 2023 13:08:23 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Discussion: psql \\et <trigger_name> -> edit the trigger function" }, { "msg_contents": "Kirk Wolak <wolakk@gmail.com> writes:\n> \\et <TAB>\n> Would specifically let me complete the Trigger_Name, but find the function\n\nHmm, I wonder how useful that's really going to be, considering\nthat trigger names aren't unique across tables. Wouldn't it\nneed to be more like \"\\et table-name trigger-name\"?\n\nAlso, in a typical database I bet a large fraction of pg_trigger.tgname\nentries are going to be \"RI_ConstraintTrigger_something\". 
Are we going\nto suppress those?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 May 2023 13:18:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Discussion: psql \\et <trigger_name> -> edit the trigger function" }, { "msg_contents": "I wrote:\n> Hmm, I wonder how useful that's really going to be, considering\n> that trigger names aren't unique across tables. Wouldn't it\n> need to be more like \"\\et table-name trigger-name\"?\n\nDifferent line of thought: \\et seems awfully single-purpose.\nPerhaps we should think more of \"\\st table-name trigger-name\"\n(show trigger), which perhaps could print something along the\nlines of\n\nCREATE TRIGGER after_ins_stmt_trig AFTER INSERT ON main_table\nFOR EACH STATEMENT EXECUTE FUNCTION trigger_func('after_ins_stmt');\n\nCREATE FUNCTION public.trigger_func()\n RETURNS trigger\n... the rest like \\sf for the trigger function\n\nIf you indeed want to edit the function, it's a quick copy-and-paste\nfrom here. But if you just want to see the trigger definition,\nthis is more wieldy than looking at the whole \"\\d table-name\"\noutput. Also we have less of an overloading problem with the\ncommand name.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 May 2023 13:33:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Discussion: psql \\et <trigger_name> -> edit the trigger function" }, { "msg_contents": "st 10. 5. 2023 v 19:08 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n\n> On Wed, May 10, 2023 at 12:20 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> st 10. 5. 
2023 v 17:33 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n>>\n>>> We already have\n>>> \\ef\n>>> \\ev\n>>>\n>>> The use case here is simply that it saves me from:\n>>> \\d <table>\n>>> [scroll through all the fields]\n>>> [often scroll right]\n>>> select function name\n>>> \\ef [paste function name]\n>>>\n>>> and tab completion is much narrower\n>>>\n>>> When doing conversions and reviews all of this stuff has to be reviewed.\n>>> Oftentimes, renamed, touched.\n>>>\n>>> I am 100% willing to write the code, docs, etc. but would appreciate\n>>> feedback.\n>>>\n>>\n>> \\et can be little bit confusing, because looks like editing trigger, not\n>> trigger function\n>>\n>> what \\eft triggername\n>>\n>> ?\n>>\n>> Pavel, I am \"torn\" because of my OCD, I would expect\n> \\eft <TAB>\n> to list functions that RETURN TRIGGER as opposed to the level of\n> indirection I was aiming for.\n>\n> where\n> \\et <TAB>\n> Would specifically let me complete the Trigger_Name, but find the\n> function\n>\n> It makes me wonder, now if:\n> \\etf\n>\n> Is better for this (edit trigger function... given the trigger name).\n> And as another poster suggested. As we do the AUTOCOMPLETE for that, we\n> could address it for:\n> \\ef?\n>\n> because:\n> \\eft <TAB>\n> is valuable as well, and deserves to work just like all \\ef? items\n>\n> It seems like a logical way to break it down.\n>\n\nThis is a problem, and it isn't easy to find a design that is consistent\nand useful. Maybe Tom's proposal \"\\st\" is best, although the \"t\" can be\nmessy - it can be \"t\" like table or \"t\" like trigger or \"t\" like type.\n\nPersonally, I don't like editing DDL in psql or pgAdmin. In all my training\nI say \"don't do it\". But on second hand, I agree so it can be handy for\nprototyping or for some playing.\n\nI think implementing \"\\st triggername\" can be a good start, and then we can\ncontinue in design later.\n\nMy comments:\n\n* Maybe \"\\str\" can be better than only \"\\st\". 
Only \"\\st\" can be confusing -\nminimally we use \"t\" as the symbol for tables\n\n* I think the arguments can be - tablename, triggername or [tablename\ntriggername]\n\nIt can display more triggers than just one when specification is general or\nresult is not unique\n\nRegards\n\nPavel\n\n\n\n\n\n\n\n\n\n> regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>>\n>>> Kirk...\n>>>\n
", "msg_date": "Thu, 11 May 2023 07:26:50 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Discussion: psql \\et <trigger_name> -> edit the trigger function" }, { "msg_contents": "On Wed, May 10, 2023 at 1:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > Hmm, I wonder how useful that's really going to be, considering\n> > that trigger names aren't unique across tables. Wouldn't it\n> > need to be more like \"\\et table-name trigger-name\"?\n>\n> Different line of thought: \\et seems awfully single-purpose.\n> Perhaps we should think more of \"\\st table-name trigger-name\"\n> (show trigger), which perhaps could print something along the\n> lines of\n>\n> CREATE TRIGGER after_ins_stmt_trig AFTER INSERT ON main_table\n> FOR EACH STATEMENT EXECUTE FUNCTION trigger_func('after_ins_stmt');\n>\n> CREATE FUNCTION public.trigger_func()\n> RETURNS trigger\n> ... the rest like \\sf for the trigger function\n>\n> If you indeed want to edit the function, it's a quick copy-and-paste\n> from here. But if you just want to see the trigger definition,\n> this is more wieldy than looking at the whole \"\\d table-name\"\n> output. Also we have less of an overloading problem with the\n> command name.\n>\n\nI agree that the argument for \\et or \\etf fails. 
Simply on the one to many\nissues.\nAnd I agree that a more consistent approach is best.\n\nHaving just cleaned up 158 Triggers/Trigger Functions... Just having \\eft\n<TAB> work would be nice.\n\nWhich would solve my problem of quickly getting the tables triggers and\nreviewing the code.\n\nI like the idea of adding to the \\s* options. As in \"show\".\nbut the \"t\" is very common (table, trigger, type). I think \\st \\str \\sty\ncould work, but this is the first place where we would be doing this?\n\nHonestly I think \\st is \"missing\", especially to throw something in\ndbfiddle or another tool.\n\nAnd if we drop \"trigger\" from this, then \\st and \\sT where T would be for\nTypes as elsewhere.\n\nNow that feels more consistent?\n\nSo, currently thinking:\n1) let's get \\ef? <TAB> working\n2) Discuss: \\st \\sT for outputting Table and Type Creation DDL...\n\nSomething is telling me that #2 (\\st) might be a can of worms, since it\nseems so obviously \"missing\"?\n\n\n>\n> regards, tom lane\n>\n\nI appreciate the feedback!
", "msg_date": "Thu, 11 May 2023 15:57:17 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Discussion: psql \\et <trigger_name> -> edit the trigger function" } ]
[ { "msg_contents": "In writing the PG 16 release notes, I came upon an oddity in our new\ncreateuser syntax, specifically --role and --member. It turns out that\n--role matches CREATE ROLE ... IN ROLE (and has prior to PG 16) while\nthe new --member option matches CREATE ROLE ... ROLE. The PG 16 feature\ndiscussion thread is here:\n\n\thttps://www.postgresql.org/message-id/flat/69a9851035cf0f0477bcc5d742b031a3%40oss.nttdata.com\n\nThis seems like it will be forever confusing to people. I frankly don't\nknow why --role matching CREATE ROLE ... IN ROLE was not already\nconfusing in pre-PG 16. Any new ideas for improvement?\n\nAt a minimum I would like to apply the attached doc patch to PG 16 to\nimprove awkward wording in CREATE ROLE and createuser.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.", "msg_date": "Wed, 10 May 2023 13:33:26 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "createuser --memeber and PG 16" }, { "msg_contents": "> On 10 May 2023, at 19:33, Bruce Momjian <bruce@momjian.us> wrote:\n\n> I frankly don't\n> know why --role matching CREATE ROLE ... IN ROLE was not already\n> confusing in pre-PG 16. Any new ideas for improvement?\n\nIIRC there were a number of ideas presented in that thread but backwards\ncompatibility with --role already \"taken\" made it complicated, so --role and\n--member were the least bad options.\n\n> At a minimum I would like to apply the attached doc patch to PG 16 to\n> improve awkward wording in CREATE ROLE and createuser.\n\nNo objection.\n\n+ 
(This in effect makes the new role a <quote>group</quote>.)\nWhile not introduced here, isn't the latter part interesting enough to warrant\nnot being inside parenthesis?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 11 May 2023 14:21:22 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Thu, May 11, 2023 at 02:21:22PM +0200, Daniel Gustafsson wrote:\n> > On 10 May 2023, at 19:33, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> > I frankly don't\n> > know why --role matching CREATE ROLE ... ROLE IN was not already\n> > confusing in pre-PG 16. Any new ideas for improvement?\n> \n> IIRC there were a number of ideas presented in that thread but backwards\n> compatibility with --role already \"taken\" made it complicated, so --role and\n> --member were the least bad options.\n> \n> > At a minimum I would like to apply the attached doc patch to PG 16 to\n> > improve awkward wording in CREATE ROLE and createuser.\n> \n> No objection.\n> \n> + role. (This in effect makes the new role a <quote>group</quote>.)\n> While not introduced here, isn't the latter part interesting enough to warrant\n> not being inside parenthesis?\n\nThe concept of group itself is deprecated, which I think is why the\nparenthesis are used.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n", "msg_date": "Thu, 11 May 2023 09:34:42 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Wed, May 10, 2023 at 1:33 PM Bruce Momjian <bruce@momjian.us> wrote:\n> This seems like it will be forever confusing to people. I frankly don't\n> know why --role matching CREATE ROLE ... ROLE IN was not already\n> confusing in pre-PG 16. 
Any new ideas for improvement?\n\nYeah, it's a bad situation. I think --role is basically misnamed.\nSomething like --add-to-group would have been clearer, but that also\nhas the problem of being inconsistent with the SQL command. The whole\nROLE vs. IN ROLE thing is inherently quite confusing, I think. It's\nvery easy to get confused about which direction the membership arrows\nare pointing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 May 2023 10:07:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On 11.05.23 16:07, Robert Haas wrote:\n> On Wed, May 10, 2023 at 1:33 PM Bruce Momjian <bruce@momjian.us> wrote:\n>> This seems like it will be forever confusing to people. I frankly don't\n>> know why --role matching CREATE ROLE ... ROLE IN was not already\n>> confusing in pre-PG 16. Any new ideas for improvement?\n> \n> Yeah, it's a bad situation. I think --role is basically misnamed.\n> Something like --add-to-group would have been clearer, but that also\n> has the problem of being inconsistent with the SQL command. The whole\n> ROLE vs. IN ROLE thing is inherently quite confusing, I think. It's\n> very easy to get confused about which direction the membership arrows\n> are pointing.\n\nIt's hard to tell that for the --member option as well. For\n\ncreateuser foo --member bar\n\nit's not intuitive whether foo becomes a member of bar or bar becomes a \nmember of foo. Maybe something more verbose like --member-of would help?\n\n\n\n", "msg_date": "Fri, 12 May 2023 16:35:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Fri, May 12, 2023 at 04:35:34PM +0200, Peter Eisentraut wrote:\n> it's not intuitive whether foo becomes a member of bar or bar becomes a\n> member of foo. 
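As a hypothetical sketch of the two directions (no server needed; this just spells out the GRANTs that the flags are described as mapping to elsewhere in this thread):

```shell
# Sketch: the GRANT each membership flag of 'createuser <newrole> ...' maps
# to, per the descriptions in this thread (hypothetical helper, not a tool).
flag_to_grant() {
  case "$1" in
    --role)   printf 'GRANT %s TO %s;\n' "$3" "$2" ;;  # new role joins $3
    --member) printf 'GRANT %s TO %s;\n' "$2" "$3" ;;  # $3 joins the new role
  esac
}
flag_to_grant --role   foo bar   # foo becomes a member of bar
flag_to_grant --member foo bar   # bar becomes a member of foo
```

So with 'createuser foo --member bar', it is bar that ends up as a member of foo.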
Maybe something more verbose like --member-of would help?\n\nIndeed, presented like that it could be confusing, and --member-of\nsounds like it could be a good idea instead of --member.\n--\nMichael", "msg_date": "Mon, 15 May 2023 16:27:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Thu, May 11, 2023 at 09:34:42AM -0400, Bruce Momjian wrote:\n> On Thu, May 11, 2023 at 02:21:22PM +0200, Daniel Gustafsson wrote:\n>> IIRC there were a number of ideas presented in that thread but backwards\n>> compatibility with --role already \"taken\" made it complicated, so --role and\n>> --member were the least bad options.\n>> \n>>> At a minimum I would like to apply the attached doc patch to PG 16 to\n>>> improve awkward wording in CREATE ROLE and createuser.\n>> \n>> No objection.\n\nNone from here as well.\n\n>> + role. (This in effect makes the new role a <quote>group</quote>.)\n>> While not introduced here, isn't the latter part interesting enough to warrant\n>> not being inside parenthesis?\n> \n> The concept of group itself is deprecated, which I think is why the\n> parenthesis are used.\n\nNot sure on this one. 
The original docs come from 58d214e, and this\nsentence was already in there.\n--\nMichael", "msg_date": "Mon, 15 May 2023 16:33:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Mon, May 15, 2023 at 04:33:27PM +0900, Michael Paquier wrote:\n> On Thu, May 11, 2023 at 09:34:42AM -0400, Bruce Momjian wrote:\n> > On Thu, May 11, 2023 at 02:21:22PM +0200, Daniel Gustafsson wrote:\n> >> IIRC there were a number of ideas presented in that thread but backwards\n> >> compatibility with --role already \"taken\" made it complicated, so --role and\n> >> --member were the least bad options.\n> >> \n> >>> At a minimum I would like to apply the attached doc patch to PG 16 to\n> >>> improve awkward wording in CREATE ROLE and createuser.\n> >> \n> >> No objection.\n> \n> None from here as well.\n> \n> >> + role. (This in effect makes the new role a <quote>group</quote>.)\n> >> While not introduced here, isn't the latter part interesting enough to warrant\n> >> not being inside parenthesis?\n> > \n> > The concept of group itself is deprecated, which I think is why the\n> > parenthesis are used.\n> \n> Not sure on this one. The original docs come from 58d214e, and this\n> sentence was already in there.\n\nTrue. I have removed the parenthesis in this updated patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.", "msg_date": "Mon, 15 May 2023 09:55:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Mon, May 15, 2023 at 04:27:04PM +0900, Michael Paquier wrote:\n> On Fri, May 12, 2023 at 04:35:34PM +0200, Peter Eisentraut wrote:\n>> it's not intuitive whether foo becomes a member of bar or bar becomes a\n>> member of foo. 
Maybe something more verbose like --member-of would help?\n> \n> Indeed, presented like that it could be confusing, and --member-of\n> sounds like it could be a good idea instead of --member.\n\n--member specifieѕ an existing role that will be given membership to the\nnew role (i.e., GRANT newrole TO existingrole). IMO --member-of sounds\nlike the new role will be given membership to the specified existing role\n(i.e., GRANT existingrole TO newrole). IOW a command like\n\n\tcreateuser newrole --member-of existingrole\n\nwould make existingrole a \"member of\" newrole according to \\du. Perhaps\n--role should be --member-of because it makes the new role a member of the\nexisting role.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 15 May 2023 13:11:49 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Wed, May 10, 2023 at 01:33:26PM -0400, Bruce Momjian wrote:\n> In writing the PG 16 release notes, I came upon an oddity in our new\n> createuser syntax, specifically --role and --member. It turns out that\n> --role matches CREATE ROLE ... ROLE IN (and has prior to PG 16) while\n> the new --member option matches CREATE ROLE ... ROLE. The PG 16 feature\n> discussion thread is here:\n> \n> \thttps://www.postgresql.org/message-id/flat/69a9851035cf0f0477bcc5d742b031a3%40oss.nttdata.com\n> \n> This seems like it will be forever confusing to people. I frankly don't\n> know why --role matching CREATE ROLE ... ROLE IN was not already\n> confusing in pre-PG 16. 
Any new ideas for improvement?\n> \n> At a minium I would like to apply the attached doc patch to PG 16 to\n> improve awkward wording in CREATE ROLE and createuser.\n\nPatch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 18 May 2023 22:22:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Thu, May 18, 2023 at 10:22:32PM -0400, Bruce Momjian wrote:\n> Patch applied.\n\nThanks, Bruce.\n--\nMichael", "msg_date": "Fri, 19 May 2023 14:50:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On 15.05.23 22:11, Nathan Bossart wrote:\n> On Mon, May 15, 2023 at 04:27:04PM +0900, Michael Paquier wrote:\n>> On Fri, May 12, 2023 at 04:35:34PM +0200, Peter Eisentraut wrote:\n>>> it's not intuitive whether foo becomes a member of bar or bar becomes a\n>>> member of foo. Maybe something more verbose like --member-of would help?\n>>\n>> Indeed, presented like that it could be confusing, and --member-of\n>> sounds like it could be a good idea instead of --member.\n> \n> --member specifieѕ an existing role that will be given membership to the\n> new role (i.e., GRANT newrole TO existingrole). IMO --member-of sounds\n> like the new role will be given membership to the specified existing role\n> (i.e., GRANT existingrole TO newrole). IOW a command like\n> \n> \tcreateuser newrole --member-of existingrole\n> \n> would make existingrole a \"member of\" newrole according to \\du. 
Perhaps\n> --role should be --member-of because it makes the new role a member of the\n> existing role.\n\nYeah, that's exactly my confusion.\n\nMaybe\n\ncreateuser --with-members\n\nand\n\ncreateuser --member-of\n\nwould be clearer.\n\n\n\n", "msg_date": "Sun, 21 May 2023 08:00:15 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Sun, May 21, 2023 at 08:00:15AM +0200, Peter Eisentraut wrote:\n> Maybe\n> \n> createuser --with-members\n> \n> and\n> \n> createuser --member-of\n> \n> would be clearer.\n\nThose seem like reasonable choices to me. I suspect we'll want to keep\n--role around for backward compatibility.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 21 May 2023 07:44:49 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Sun, May 21, 2023 at 07:44:49AM -0700, Nathan Bossart wrote:\n> On Sun, May 21, 2023 at 08:00:15AM +0200, Peter Eisentraut wrote:\n>> Maybe\n>> \n>> createuser --with-members\n>> \n>> and\n>> \n>> createuser --member-of\n>> \n>> would be clearer.\n> \n> Those seem like reasonable choices to me. I suspect we'll want to keep\n> --role around for backward compatibility.\n\nI've attached a draft patch for this. 
I also changed --admin to\n--with-admin.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 21 May 2023 08:22:05 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Sun, May 21, 2023 at 08:22:05AM -0700, Nathan Bossart wrote:\n> On Sun, May 21, 2023 at 07:44:49AM -0700, Nathan Bossart wrote:\n> > On Sun, May 21, 2023 at 08:00:15AM +0200, Peter Eisentraut wrote:\n> >> Maybe\n> >> \n> >> createuser --with-members\n> >> \n> >> and\n> >> \n> >> createuser --member-of\n> >> \n> >> would be clearer.\n> > \n> > Those seem like reasonable choices to me. I suspect we'll want to keep\n> > --role around for backward compatibility.\n> \n> I've attached a draft patch for this. I also changed --admin to\n> --with-admin.\n\nIf we want to go forward with this, the big question is whether we want\nto get this in before beta1. FYI, the release notes don't mention the\noption names.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sun, 21 May 2023 11:34:00 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Sun, May 21, 2023 at 08:22:05AM -0700, Nathan Bossart wrote:\n>> I've attached a draft patch for this. I also changed --admin to\n>> --with-admin.\n\n> If we want to go forward with this, the big question is whether we want\n> to get this in before beta1. 
FYI, the release notes don't mention the\n> option names.\n\n+1 for doing it before beta1.\n\nA few comments on the patch:\n\n>> Indicates an existing role that will be automatically added as a member of the new\n\n\"Specifies\" would be clearer than \"indicates\" (not your fault, but\nlet's avoid the passive construction while we are here). Likewise\nnearby.\n\n>> +\t\t{\"member-of\", required_argument, NULL, 6},\n\nWhy didn't you just translate this as 'g' instead of inventing\na new switch case?\n\n>> -\tprintf(_(\" -a, --admin=ROLE this role will be a member of new role with admin\\n\"\n>> +\tprintf(_(\" -a, --with-admin=ROLE this role will be a member of new role with admin\\n\"\n\nI think clearer would be\n\n>> +\tprintf(_(\" -a, --with-admin=ROLE ROLE will be a member of new role with admin\\n\"\n\nLikewise\n\n>> +\tprintf(_(\" -g, --member-of=ROLE new role will be a member of ROLE\\n\"));\n\n(I assume that's what this should say, it's backwards ATM)\nand\n\n>> +\tprintf(_(\" -m, --with-member=ROLE ROLE will be a member of new role\\n\"));\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 May 2023 11:45:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Sun, May 21, 2023 at 11:45:24AM -0400, Tom Lane wrote:\n> A few comments on the patch:\n\nThanks for taking a look.\n\n>>> Indicates an existing role that will be automatically added as a member of the new\n> \n> \"Specifies\" would be clearer than \"indicates\" (not your fault, but\n> let's avoid the passive construction while we are here). Likewise\n> nearby.\n\nFixed.\n\n>>> +\t\t{\"member-of\", required_argument, NULL, 6},\n> \n> Why didn't you just translate this as 'g' instead of inventing\n> a new switch case?\n\nFixed. 
*facepalm*\n\n> I think clearer would be\n> \n>>> +\tprintf(_(\" -a, --with-admin=ROLE ROLE will be a member of new role with admin\\n\"\n> \n> Likewise\n> \n>>> +\tprintf(_(\" -g, --member-of=ROLE new role will be a member of ROLE\\n\"));\n> \n> (I assume that's what this should say, it's backwards ATM)\n> and\n> \n>>> +\tprintf(_(\" -m, --with-member=ROLE ROLE will be a member of new role\\n\"));\n\nFixed.\n\nHow do folks feel about keeping --role undocumented? Should we give it a\nmention in the docs for --member-of?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 21 May 2023 10:07:56 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Fixed.\n\nv2 looks good to me, except the documentation wording for --with-role\nis needlessly inconsistent with --with-admin. The --with-admin\nwording looks better, so I suggest\n\n- Indicates the specified existing role should be automatically\n+ Specifies an existing role that will be automatically\n added as a member of the new role. Multiple existing roles can\n\n> How do folks feel about keeping --role undocumented? Should we give it a\n> mention in the docs for --member-of?\n\nI'm okay with leaving it undocumented, but I won't fight about it\nif somebody wants to argue for the other.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 May 2023 13:20:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Sun, May 21, 2023 at 01:20:01PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> Fixed.\n> \n> v2 looks good to me, except the documentation wording for --with-role\n> is needlessly inconsistent with --with-admin. 
The --with-admin\n> wording looks better, so I suggest\n> \n> - Indicates the specified existing role should be automatically\n> + Specifies an existing role that will be automatically\n> added as a member of the new role. Multiple existing roles can\n\nWill do.\n\n>> How do folks feel about keeping --role undocumented? Should we give it a\n>> mention in the docs for --member-of?\n> \n> I'm okay with leaving it undocumented, but I won't fight about it\n> if somebody wants to argue for the other.\n\nAlright. Barring any additional feedback, I'll commit this tonight.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 21 May 2023 12:16:58 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Sun, May 21, 2023 at 12:16:58PM -0700, Nathan Bossart wrote:\n> On Sun, May 21, 2023 at 01:20:01PM -0400, Tom Lane wrote:\n> > Nathan Bossart <nathandbossart@gmail.com> writes:\n>>> How do folks feel about keeping --role undocumented? Should we give it a\n>>> mention in the docs for --member-of?\n>> \n>> I'm okay with leaving it undocumented, but I won't fight about it\n>> if somebody wants to argue for the other.\n> \n> Alright. Barring any additional feedback, I'll commit this tonight.\n\nv2 passes the eye test, and I am not spotting any references to the\npast option names. 
Thanks!\n\n+$node->issues_sql_like(\n+ [ 'createuser', 'regress_user11', '--role', 'regress_user1' ],\n+ qr/statement: CREATE ROLE regress_user11 NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN NOREPLICATION NOBYPASSRLS IN ROLE regress_user1;/,\n+ '--role (for backward compatibility)');\n\nNot sure I would have kept this test, still that's cheap enough to\ntest.\n--\nMichael", "msg_date": "Mon, 22 May 2023 09:11:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Mon, May 22, 2023 at 09:11:18AM +0900, Michael Paquier wrote:\n> On Sun, May 21, 2023 at 12:16:58PM -0700, Nathan Bossart wrote:\n>> Alright. Barring any additional feedback, I'll commit this tonight.\n> \n> v2 passes the eye test, and I am not spotting any references to the\n> past option names. Thanks!\n\nCommitted. Thanks for taking a look. I'll keep an eye on the buildfarm\nfor a few.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 21 May 2023 20:18:29 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On 21.05.23 19:07, Nathan Bossart wrote:\n> How do folks feel about keeping --role undocumented? Should we give it a\n> mention in the docs for --member-of?\n\nWe made a point in this release to document deprecated options \nconsistently. See commit 2f80c95740.\n\n\n", "msg_date": "Mon, 22 May 2023 08:42:28 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Mon, May 22, 2023 at 08:42:28AM +0200, Peter Eisentraut wrote:\n> On 21.05.23 19:07, Nathan Bossart wrote:\n>> How do folks feel about keeping --role undocumented? 
Should we give it a\n>> mention in the docs for --member-of?\n> \n> We made a point in this release to document deprecated options consistently.\n> See commit 2f80c95740.\n\nAlright. Does the attached patch suffice?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 22 May 2023 05:11:14 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Mon, May 22, 2023 at 05:11:14AM -0700, Nathan Bossart wrote:\n> On Mon, May 22, 2023 at 08:42:28AM +0200, Peter Eisentraut wrote:\n> > On 21.05.23 19:07, Nathan Bossart wrote:\n> >> How do folks feel about keeping --role undocumented? Should we give it a\n> >> mention in the docs for --member-of?\n> > \n> > We made a point in this release to document deprecated options consistently.\n> > See commit 2f80c95740.\n> \n> Alright. Does the attached patch suffice?\n\nSeeing the precedent with --no-blobs and --blobs, yes, that should be\nenough. You may want to wait until beta1 is stamped to apply\nsomething, though, as the period between the stamp and the tag is used\nto check the state of the tarballs to-be-released.\n--\nMichael", "msg_date": "Tue, 23 May 2023 07:50:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" }, { "msg_contents": "On Tue, May 23, 2023 at 07:50:36AM +0900, Michael Paquier wrote:\n> Seeing the precedent with --no-blobs and --blobs, yes, that should be\n> enough. 
You may want to wait until beta1 is stamped to apply\n> something, though, as the period between the stamp and the tag is used\n> to check the state of the tarballs to-be-released.\n\nThanks, committed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 23 May 2023 19:39:24 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: createuser --memeber and PG 16" } ]
[ { "msg_contents": "Hi\n\nWhile translating the v15 message catalog (yes, I'm quite late!), I\nnoticed that commit 1f39bce02154 introduced three copies of the\nfollowing message in hashagg_batch_read():\n\n+ ereport(ERROR,\n+ (errcode_for_file_access(),\n+ errmsg(\"unexpected EOF for tape %d: requested %zu bytes, read %zu bytes\",\n+ tapenum, sizeof(uint32), nread)));\n\nThese messages should only arise when a hash spill file has gone\ncorrupt: as I understand, this cannot happen merely because of a full\ndisk, because that should fail during _write_ of the file, not read.\nAnd no other user-caused causes should exist.\n\nTherefore, I propose to turn these messages into errmsg_internal(), so\nthat we do not translate them. They could even be an elog() and ditch\nthe errcode, but I see no reason to go that far. Or we could make them\nERRCODE_DATA_CORRUPTED.\n\n\nBTW, the aforementioned commit actually appeared in v13, and I\ntranslated the message there at the time. The reason I noticed this now\nis that the %d was changed to %p by commit c4649cce39a4, and it was that\nchange that triggered me now.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Small aircraft do not crash frequently ... 
usually only once!\"\n (ponder, http://thedailywtf.com/)\n\n\n", "msg_date": "Wed, 10 May 2023 19:54:07 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "de-catalog one error message" }, { "msg_contents": "> On 10 May 2023, at 19:54, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> ..as I understand, this cannot happen merely because of a full\n> disk, because that should fail during _write_ of the file, not read.\n> And no other user-caused causes should exist.\n> \n> Therefore, I propose to turn these messages into errmsg_internal(), so\n> that we do not translate them.\n\nAFAICT from following the code that seems correct, and I agree with removing\nthem from translation to lessen the burden on our translators.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 11 May 2023 12:11:30 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: de-catalog one error message" }, { "msg_contents": "On 2023-May-11, Daniel Gustafsson wrote:\n\n> > On 10 May 2023, at 19:54, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > Therefore, I propose to turn these messages into errmsg_internal(), so\n> > that we do not translate them.\n> \n> AFAICT from following the code that seems correct, and I agree with removing\n> them from translation to lessen the burden on our translators.\n\nThanks for looking! Pushed now.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Pido que me den el Nobel por razones humanitarias\" (Nicanor Parra)\n\n\n", "msg_date": "Tue, 16 May 2023 11:48:52 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: de-catalog one error message" } ]
[ { "msg_contents": "This commit:\n\n\tcommit 366283961a\n\tAuthor: Amit Kapila <akapila@postgresql.org>\n\tDate: Thu Jul 21 08:47:38 2022 +0530\n\t\n\t Allow users to skip logical replication of data having origin.\n\nhas this change:\n\n\tdiff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml\n\tindex 670a5406d6..a186e35f00 100644\n\t--- a/doc/src/sgml/catalogs.sgml\n\t+++ b/doc/src/sgml/catalogs.sgml\n\t@@ -7943,6 +7943,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l\n\t see <xref linkend=\"logical-replication-publication\"/>.\n\t </para></entry>\n\t </row>\n\t+\n\t+ <row>\n\t+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n-->\t+ <structfield>suborigin</structfield> <type>text</type>\n\t+ </para>\n\t+ <para>\n-->\t+ The origin value must be either <literal>none</literal> or\n\t+ <literal>any</literal>. The default is <literal>any</literal>.\n\t+ If <literal>none</literal>, the subscription will request the publisher\n\t+ to only send changes that don't have an origin. If\n\t+ <literal>any</literal>, the publisher sends changes regardless of their\n\t+ origin.\n\t+ </para></entry>\n\t+ </row>\n\t </tbody>\n\t </tgroup>\n\t </table>\n\nIs 'suborigin' the right column mame, and if so, should \"The origin\nvalue\" be \"The suborigin value\"?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n", "msg_date": "Wed, 10 May 2023 15:36:31 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Subscription suborigin?" }, { "msg_contents": "\nNever mind --- I just realized \"sub\" is the table prefix. 
:-(\n\n---------------------------------------------------------------------------\n\nOn Wed, May 10, 2023 at 03:36:31PM -0400, Bruce Momjian wrote:\n> This commit:\n> \n> \tcommit 366283961a\n> \tAuthor: Amit Kapila <akapila@postgresql.org>\n> \tDate: Thu Jul 21 08:47:38 2022 +0530\n> \t\n> \t Allow users to skip logical replication of data having origin.\n> \n> has this change:\n> \n> \tdiff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml\n> \tindex 670a5406d6..a186e35f00 100644\n> \t--- a/doc/src/sgml/catalogs.sgml\n> \t+++ b/doc/src/sgml/catalogs.sgml\n> \t@@ -7943,6 +7943,20 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l\n> \t see <xref linkend=\"logical-replication-publication\"/>.\n> \t </para></entry>\n> \t </row>\n> \t+\n> \t+ <row>\n> \t+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> -->\t+ <structfield>suborigin</structfield> <type>text</type>\n> \t+ </para>\n> \t+ <para>\n> -->\t+ The origin value must be either <literal>none</literal> or\n> \t+ <literal>any</literal>. The default is <literal>any</literal>.\n> \t+ If <literal>none</literal>, the subscription will request the publisher\n> \t+ to only send changes that don't have an origin. If\n> \t+ <literal>any</literal>, the publisher sends changes regardless of their\n> \t+ origin.\n> \t+ </para></entry>\n> \t+ </row>\n> \t </tbody>\n> \t </tgroup>\n> \t </table>\n> \n> Is 'suborigin' the right column mame, and if so, should \"The origin\n> value\" be \"The suborigin value\"?\n> \n> -- \n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n> \n> Embrace your flaws. They make you human, rather than perfect,\n> which you will never be.\n> \n> \n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. 
They make you human, rather than perfect,\n which you will never be.\n\n\n", "msg_date": "Wed, 10 May 2023 15:37:24 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Subscription suborigin?" } ]
[ { "msg_contents": "Hi hackers!\n\nThere were several discussions where the limitations of the existing TOAST\npointers\nwere mentioned [1], [2] and [3], and from time to time this topic appears in\nother places.\n\nWe proposed a fresh approach to the TOAST mechanics in [2], but\nunfortunately the\npatch was met with quite an unfriendly reception, and after several\niterations it was rejected,\nalthough we\nstill have hopes for it and have several very promising features based on\nit.\n\nAnyway, the old TOAST pointer is also the cause of problems like [4], and\nthis part of\nPostgreSQL screams to be revised and improved.\n\nThe TOAST begins with the pointer to the externalized value - the TOAST\nPointer, which\nis very limited in its means of storing data, and all TOAST improvements\nrequire revision\nof this Pointer structure. So we decided to ask the community for thoughts\nand ideas on\nhow to rework this pointer.\nThe TOAST Pointer (varatt_external structure) stores 4 fields:\n[varlena header][<4b - original data size><4b - size in TOAST table><4b -\nTOAST table OID><4b - ID of chunk>]\nIn [2] we proposed the new Custom TOAST pointer structure whose main\nfeature is\nextensibility:\n[varlena header][<2b - total size of the TOAST pointer><4b - size of original\ndata><4b - OID of algorithm used for TOASTing><variable length field used\nfor storing any custom data>]\nwhere the Custom TOAST Pointer is distinguished from the Regular one by the\nva_flag field, which\nis a part of the varlena header, so the new pointer format does not interfere with\nthe old (regular) one.\nThe first field is necessary because the Custom TOAST pointer has variable\nlength due to the\ntail used for inline storage, and the original value size is used by the\nExecutor. The third field could\nbe a subject for discussion.\n\nThoughts? 
Objections?\n\n[1] [PATCH] Infinite loop while acquiring new TOAST Oid\n<https://www.postgresql.org/message-id/CAJ7c6TPSvR2rKpoVX5TSXo_kMxXF%2B-SxLtrpPaMf907tX%3DnVCw%40mail.gmail.com>\n[2] Pluggable Toaster\n<https://www.postgresql.org/message-id/flat/224711f9-83b7-a307-b17f-4457ab73aa0a%40sigaev.ru>\n[3] [PATCH] Compression dictionaries for JSONB\n<https://www.postgresql.org/message-id/flat/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22%3D5xVBg7S4vr5rQ%40mail.gmail.com>\n[4] BUG #16722: PG hanging on COPY when table has close to 2^32 toasts in\nthe table.\n<https://www.postgresql.org/message-id/flat/16722-93043fb459a41073%40postgresql.org>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Wed, 10 May 2023 23:04:57 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": true, "msg_subject": "RFI: Extending the TOAST Pointer" }, { "msg_contents": "Hi Nikita,\n\n> this part of the PostgreSQL screams to be revised and improved\n\nI completely agree. The problem with TOAST pointers is that they are\nnot extendable at the moment which prevents adding new compression\nalgorithms (e.g. ZSTD), new features like compression dictionaries\n[1], etc. 
I suggest we add extensibility in order to solve this\nproblem for the foreseeable future for everyone.\n\n> where Custom TOAST Pointer is distinguished from Regular one by va_flag field\n> which is a part of varlena header\n\nI don't think that varlena header is the best place to distinguish a\nclassical TOAST pointer from an extended one. On top of that I don't\nsee any free bits that would allow adding such a flag to the on-disk\nvarlena representation [2].\n\nThe current on-disk TOAST pointer representation is following:\n\n```\ntypedef struct varatt_external\n{\nint32 va_rawsize; /* Original data size (includes header) */\nuint32 va_extinfo; /* External saved size (without header) and\n * compression method */\nOid va_valueid; /* Unique ID of value within TOAST table */\nOid va_toastrelid; /* RelID of TOAST table containing it */\n} varatt_external;\n```\n\nNote that currently only 2 compression methods are supported:\n\n```\ntypedef enum ToastCompressionId\n{\nTOAST_PGLZ_COMPRESSION_ID = 0,\nTOAST_LZ4_COMPRESSION_ID = 1,\nTOAST_INVALID_COMPRESSION_ID = 2\n} ToastCompressionId;\n```\n\nI suggest adding a new flag that will mark an extended TOAST format:\n\n```\ntypedef enum ToastCompressionId\n{\nTOAST_PGLZ_COMPRESSION_ID = 0,\nTOAST_LZ4_COMPRESSION_ID = 1,\nTOAST_RESERVED_COMPRESSION_ID = 2,\nTOAST_HAS_EXTENDED_FORMAT = 3,\n} ToastCompressionId;\n```\n\nFor an extended format we add a varint (utf8-like) bitmask right after\nvaratt_external that marks the features supported in this particular\ninstance of the pointer. The rest of the data is interpreted depending\non the bits set. This will allow us to extend the pointers\nindefinitely.\n\nNote that the proposed approach doesn't require running any\nmigrations. Note also that I described only the on-disk\nrepresentation. 
We can tweak the in-memory representation as we want\nwithout affecting the end user.\n\nThoughts?\n\n[1]: https://commitfest.postgresql.org/43/3626/\n[2]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/include/postgres.h;h=0446daa0e61722067bb75aa693a92b38736e12df;hb=164d174bbf9a3aba719c845497863cd3c49a3ad0#l178\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 18 May 2023 13:51:58 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "On Thu, 18 May 2023 at 12:52, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Nikita,\n>\n> > this part of the PostgreSQL screams to be revised and improved\n>\n> I completely agree. The problem with TOAST pointers is that they are\n> not extendable at the moment which prevents adding new compression\n> algorithms (e.g. ZSTD), new features like compression dictionaries\n> [1], etc. I suggest we add extensibility in order to solve this\n> problem for the foreseeable future for everyone.\n>\n> > where Custom TOAST Pointer is distinguished from Regular one by va_flag field\n> > which is a part of varlena header\n>\n> I don't think that varlena header is the best place to distinguish a\n> classical TOAST pointer from an extended one. On top of that I don't\n> see any free bits that would allow adding such a flag to the on-disk\n> varlena representation [2].\n>\n> The current on-disk TOAST pointer representation is following:\n>\n> ```\n> typedef struct varatt_external\n> {\n> int32 va_rawsize; /* Original data size (includes header) */\n> uint32 va_extinfo; /* External saved size (without header) and\n> * compression method */\n> Oid va_valueid; /* Unique ID of value within TOAST table */\n> Oid va_toastrelid; /* RelID of TOAST table containing it */\n> } varatt_external;\n> ```\n\nNo, that's inaccurate. 
The complete on-disk representation of a varatt is\n\n{\n uint8 va_header; /* Always 0x80 or 0x01 */\n uint8 va_tag; /* Type of datum */\n char va_data[FLEXIBLE_ARRAY_MEMBER]; /* Type-dependent\ndata, for toasted values that's currently only a varatt_external */\n} varattrib_1b_e;\n\nWith va_tag being filled with one of the vartag_external values:\n\ntypedef enum vartag_external\n{\n VARTAG_INDIRECT = 1,\n VARTAG_EXPANDED_RO = 2,\n VARTAG_EXPANDED_RW = 3,\n VARTAG_ONDISK = 18\n} vartag_external;\n\nThis enum still has many options to go before it exceeds the maximum\nof the uint8 va_tag field. Therefore, I don't think we have no disk\nrepresentations left, nor do I think we'll need to add another option\nto the ToastCompressionId enum.\nAs an example, we can add another VARTAG option for dictionary-enabled\nexternal toast; like what the pluggable toast patch worked on. I think\nwe can salvage some ideas from that patch, even if the main idea got\nstuck.\n\nKind regards,\n\nMatthias van de Meent\nNeon Inc.\n\n\n", "msg_date": "Thu, 18 May 2023 14:05:54 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "Hi,\n\n> No, that's inaccurate. The complete on-disk representation of a varatt is\n>\n> {\n> uint8 va_header; /* Always 0x80 or 0x01 */\n> uint8 va_tag; /* Type of datum */\n> char va_data[FLEXIBLE_ARRAY_MEMBER]; /* Type-dependent\n> data, for toasted values that's currently only a varatt_external */\n> } varattrib_1b_e;\n>\n> With va_tag being filled with one of the vartag_external values:\n>\n> typedef enum vartag_external\n> {\n> VARTAG_INDIRECT = 1,\n> VARTAG_EXPANDED_RO = 2,\n> VARTAG_EXPANDED_RW = 3,\n> VARTAG_ONDISK = 18\n> } vartag_external;\n>\n> This enum still has many options to go before it exceeds the maximum\n> of the uint8 va_tag field. 
Therefore, I don't think we have no disk\n> representations left, nor do I think we'll need to add another option\n> to the ToastCompressionId enum.\n> As an example, we can add another VARTAG option for dictionary-enabled\n> external toast; like what the pluggable toast patch worked on. I think\n> we can salvage some ideas from that patch, even if the main idea got\n> stuck.\n\nThe problem here is that the comments are ambiguous regarding what to\ncall \"TOAST pointer\" exactly. I proposed a patch for this but it was\nnot accepted [1].\n\nSo the exact on-disk representation of a TOAST'ed value (for\nlittle-endian machines) is:\n\n0b00000001, 18 (va_tag), (varatt_external here)\n\nWhere 18 is sizeof(varatt_external) + 2, because the length includes\nthe length of the header.\n\nI agree that va_tag can have another use. But since we are going to\nmake varatt_external variable in size (otherwise I don't see how it\ncould be really **extendable**) I don't think this is the right\napproach.\n\nAlso I agree that this particular statement is incorrect:\n\n> This will allow us to extend the pointers indefinitely.\n\nvaratt_external is going to be limited to 255. But it seems to be a\nreasonable limitation for the nearest 10-20 years or so.\n\n[1]: https://commitfest.postgresql.org/39/3820/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 18 May 2023 15:57:14 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "Hi,\n\n> I agree that va_tag can have another use. But since we are going to\n> make varatt_external variable in size (otherwise I don't see how it\n> could be really **extendable**) I don't think this is the right\n> approach.\n\nOn second thought, perhaps we are talking more or less about the same thing?\n\nIt doesn't matter what will be used as a sign of presence of a varint\nbitmask in the pointer. 
My initial proposal to use ToastCompressionId\nfor this is probably redundant since va_tag > 18 will already tell\nthat.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 18 May 2023 16:11:57 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "On Thu, 18 May 2023 at 15:12, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi,\n>\n> > I agree that va_tag can have another use. But since we are going to\n> > make varatt_external variable in size (otherwise I don't see how it\n> > could be really **extendable**) I don't think this is the right\n> > approach.\n\nWhy would we modify va_tag=18; data=varatt_external? A different\nva_tag option would allow us to keep the current layout around without\nmuch maintenance and a consistent low overhead.\n\n> On second thought, perhaps we are talking more or less about the same thing?\n>\n> It doesn't matter what will be used as a sign of presence of a varint\n> bitmask in the pointer. My initial proposal to use ToastCompressionId\n> for this is probably redundant since va_tag > 18 will already tell\n> that.\n\nI'm not sure \"extendable\" would be the right word, but as I see it:\n\n1. We need more space to store more metadata;\n2. Essentially all bits in varatt_external are already accounted for; and\n3. There are still many options left in va_tag\n\nIt seems to me that adding a new variant to va_att for marking new\nfeatures in the toast infrastructure makes the most sense, as we'd\nalso be able to do the new things like varints etc without needing to\nmodify existing toast paths significantly.\n\nWe'd need to stop using the va_tag as length indicator, but I don't\nthink it's currently assumed to be a length indicator anyway (see\nVARSIZE_EXTERNAL(ptr)). 
By not using the varatt_external struct\ncurrently in use, we could be able to get down to <18B toast pointers\nas well, though I'd consider that unlikely.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n\n", "msg_date": "Thu, 18 May 2023 16:33:56 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "Hi!\n\nMatthias, in the Pluggable TOAST thread we proposed additional pointer\ndefinition, without modification\nof the original varatt_external - we have to keep it untouched for\ncompatibility issues. The following extension\nfor the TOAST pointer was proposed:\n\ntypedef struct varatt_custom\n{\nuint16 va_toasterdatalen;/* total size of toast pointer, < BLCKSZ */\nuint32 va_rawsize; /* Original data size (includes header) */\nuint32 va_toasterid; /* Toaster ID, actually Oid */\nchar va_toasterdata[FLEXIBLE_ARRAY_MEMBER]; /* Custom toaster data */\n} varatt_custom;\n\nwith the new tag VARTAG_CUSTOM = 127.\n\nRawsize we have to keep because it is used by Executor. And Toaster ID is\nthe OID by which\nwe identify the extension that uses this pointer invariant.\n\n\nOn Thu, May 18, 2023 at 5:34 PM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Thu, 18 May 2023 at 15:12, Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> >\n> > Hi,\n> >\n> > > I agree that va_tag can have another use. But since we are going to\n> > > make varatt_external variable in size (otherwise I don't see how it\n> > > could be really **extendable**) I don't think this is the right\n> > > approach.\n>\n> Why would we modify va_tag=18; data=varatt_external? 
A different\n> va_tag option would allow us to keep the current layout around without\n> much maintenance and a consistent low overhead.\n>\n> > On second thought, perhaps we are talking more or less about the same\n> thing?\n> >\n> > It doesn't matter what will be used as a sign of presence of a varint\n> > bitmask in the pointer. My initial proposal to use ToastCompressionId\n> > for this is probably redundant since va_tag > 18 will already tell\n> > that.\n>\n> I'm not sure \"extendable\" would be the right word, but as I see it:\n>\n> 1. We need more space to store more metadata;\n> 2. Essentially all bits in varatt_external are already accounted for; and\n> 3. There are still many options left in va_tag\n>\n> It seems to me that adding a new variant to va_att for marking new\n> features in the toast infrastructure makes the most sense, as we'd\n> also be able to do the new things like varints etc without needing to\n> modify existing toast paths significantly.\n>\n> We'd need to stop using the va_tag as length indicator, but I don't\n> think it's currently assumed to be a length indicator anyway (see\n> VARSIZE_EXTERNAL(ptr)). By not using the varatt_external struct\n> currently in use, we could be able to get down to <18B toast pointers\n> as well, though I'd consider that unlikely.\n>\n> Kind regards,\n>\n> Matthias van de Meent\n> Neon, Inc.\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/
", "msg_date": "Thu, 18 May 2023 18:50:26 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": true, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "Hi,\n\n> We'd need to stop using the va_tag as length indicator, but I don't\n> think it's currently assumed to be a length indicator anyway (see\n> VARSIZE_EXTERNAL(ptr)). By not using the varatt_external struct\n> currently in use, we could be able to get down to <18B toast pointers\n> as well, though I'd consider that unlikely.\n\nAgree.\n\nAnother thing we have to decide is what to do exactly in the scope of\nthis thread.\n\nI imagine it as a refactoring that will find all the places that deal\nwith current TOAST pointer and changes them to something like:\n\n```\nswitch(va_tag) {\n    case DEFAULT_VA_TAG( equals 18 ):\n        default_toast_process_case_abc(...);\n    default:\n        elog(ERROR, \"Unknown TOAST tag\")\n}\n```\n\nSo that next time somebody is going to need another type of TOAST\npointer this person will have only to add a corresponding tag and\nhandlers. 
(Something like \"virtual methods\" will produce a cleaner\ncode but will also break branch prediction, so I don't think we should\nuse those.)\n\nI don't think we need an example of adding a new TOAST tag in scope of\nthis work since the default one is going to end up being such an\nexample.\n\nDoes it make sense?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sun, 21 May 2023 16:39:20 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "On Sun, 21 May 2023, 15:39 Aleksander Alekseev,\n<aleksander@timescale.com> wrote:\n>\n> Hi,\n>\n> > We'd need to stop using the va_tag as length indicator, but I don't\n> > think it's currently assumed to be a length indicator anyway (see\n> > VARSIZE_EXTERNAL(ptr)). By not using the varatt_external struct\n> > currently in use, we could be able to get down to <18B toast pointers\n> > as well, though I'd consider that unlikely.\n>\n> Agree.\n>\n> Another thing we have to decide is what to do exactly in the scope of\n> this thread.\n>\n> I imagine it as a refactoring that will find all the places that deal\n> with current TOAST pointer and changes them to something like:\n>\n> ```\n> switch(va_tag) {\n> case DEFAULT_VA_TAG( equals 18 ):\n> default_toast_process_case_abc(...);\n> default:\n> elog(ERROR, \"Unknown TOAST tag\")\n> }\n> ```\n\nI'm not sure that we need all that.\nMany places do some special handling for VARATT_IS_EXTERNAL because\ndecompressing or detoasting is expensive and doing that as late as\npossible can be beneficial (e.g. EXPLAIN ANALYZE can run much faster\nbecause we never detoast returned columns). But only very few of these\ncases actually work on explicitly on-disk data: my IDE can't find any\nuses of VARATT_IS_EXTERNAL_ONDISK (i.e. the actual TOASTed value)\noutside the expected locations of the toast subsystems, amcheck, and\nlogical decoding (incl. the pgoutput plugin). 
I'm fairly sure we only\nneed to update existing paths in those subsystems to support another\nformat of external (but not the current VARTAG_ONDISK) data.\n\n> So that next time somebody is going to need another type of TOAST\n> pointer this person will have only to add a corresponding tag and\n> handlers. (Something like \"virtual methods\" will produce a cleaner\n> code but will also break branch prediction, so I don't think we should\n> use those.)\n\nYeah, I'm also not super stoked about using virtual methods for a new\nexternal toast implementation.\n\n> I don't think we need an example of adding a new TOAST tag in scope of\n> this work since the default one is going to end up being such an\n> example.\n>\n> Does it make sense?\n\nI see your point, but I do think we should also think about why we do\nthe change.\n\nE.g.: Our current toast infra is built around 4 uint32 fields in the\ntoast pointer; but with this change in place we can devise a new toast\npointer that uses varint encoding on the length-indicating fields to\nreduce the footprint of 18B to an expected 14 bytes.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n\n", "msg_date": "Mon, 22 May 2023 15:07:18 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "Hi,\n\n> I see your point, but I do think we should also think about why we do\n> the change.\n\nPersonally at the moment I care only about implementing compression\ndictionaries on top of this, as is discussed in the corresponding\nthread [1]. I'm going to need new fields in the TOAST pointer's\nincluding (but not necessarily limited to) dictionary id.\n\nAs I understand, Nikita is interested in implementing 64-bit TOAST\npointers [2]. 
I must admit I didn't follow that thread too closely but\nI can imagine the needs are similar.\n\nLast but not least I remember somebody on the mailing list suggested\nadding ZSTD compression support for TOAST, besides LZ4. Assuming I\ndidn't dream it, the proposal was rejected due to the limited amount\nof free bits in ToastCompressionId. It was argued that two possible\ncombinations that are left should be treated with care and ZSTD will\nnot bring enough value to the users compared to LZ4.\n\nThese are 3 recent cases I could recall. This being said I think our\nsolution should be generic enough to cover possible future cases\nand/or cases unknown to us yet.\n\n[1]: https://postgr.es/m/CAJ7c6TM7%2BsTvwREeL74Y5U91%2B5ymNobRbOmnDRfdTonq9trZyQ%40mail.gmail.com\n[2]: https://commitfest.postgresql.org/43/4296/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 22 May 2023 16:47:12 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "Hi,\n\nAleksander, I'm interested in extending TOAST pointer in various ways.\n64-bit TOAST value ID allows to resolve very complex issue for production\nsystems with large tables and heavy update rate.\n\nI agree with Matthias that there should not be processing of TOAST pointer\ninternals outside TOAST macros. Currently, TOASTed value is distinguished\nas VARATT_IS_EXTERNAL_ONDISK, and it should stay this way. 
Adding\ncompression requires another implementation (extension) of\nVARATT_EXTERNAL because current supports only 2 compression methods -\nit has only 1 bit responsible for compression method, and there is a safe\nway to do so, without affecting default TOAST mechanics - we must keep\nit this way for compatibility issues and not to break DB upgrade.\n\nAlso, I must remind that we should not forget about field alignment inside\nthe TOAST pointer.\n\nAs it was already mentioned, it seems not very reasonable trying to save\na byte or two while we are storing out-of-line values of at least 2 kb in\nsize.\n\nOn Mon, May 22, 2023 at 4:47 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi,\n>\n> > I see your point, but I do think we should also think about why we do\n> > the change.\n>\n> Personally at the moment I care only about implementing compression\n> dictionaries on top of this, as is discussed in the corresponding\n> thread [1]. I'm going to need new fields in the TOAST pointer's\n> including (but not necessarily limited to) dictionary id.\n>\n> As I understand, Nikita is interested in implementing 64-bit TOAST\n> pointers [2]. I must admit I didn't follow that thread too closely but\n> I can imagine the needs are similar.\n>\n> Last but not least I remember somebody on the mailing list suggested\n> adding ZSTD compression support for TOAST, besides LZ4. Assuming I\n> didn't dream it, the proposal was rejected due to the limited amount\n> of free bits in ToastCompressionId. It was argued that two possible\n> combinations that are left should be treated with care and ZSTD will\n> not bring enough value to the users compared to LZ4.\n>\n> These are 3 recent cases I could recall. 
This being said I think our\n> solution should be generic enough to cover possible future cases\n> and/or cases unknown to us yet.\n>\n> [1]:\n> https://postgr.es/m/CAJ7c6TM7%2BsTvwREeL74Y5U91%2B5ymNobRbOmnDRfdTonq9trZyQ%40mail.gmail.com\n> [2]: https://commitfest.postgresql.org/43/4296/\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/
", "msg_date": "Mon, 22 May 2023 19:13:21 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": true, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "On Mon, 22 May 2023 at 18:13, Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n> Hi,\n\nCould you please not top-post.\n\n> Aleksander, I'm interested in extending TOAST pointer in various ways.\n> 64-bit TOAST value ID allows to resolve very complex issue for production\n> systems with large tables and heavy update rate.\n\nCool. I agree that this would be nice, though it's quite unlikely\nwe'll ever be able to use much more than 32 bits concurrently with the\ncurrent 32-bit block IDs. But indeed, it is a good way of reducing\ntime spent searching for unused toast IDs.\n\n> I agree with Matthias that there should not be processing of TOAST pointer\n> internals outside TOAST macros. Currently, TOASTed value is distinguished\n> as VARATT_IS_EXTERNAL_ONDISK, and it should stay this way. 
Adding\n> compression requires another implementation (extension) of\n> VARATT_EXTERNAL because current supports only 2 compression methods -\n> it has only 1 bit responsible for compression method, and there is a safe\n> way to do so, without affecting default TOAST mechanics - we must keep\n> it this way for compatibility issues and not to break DB upgrade.\n>\n> Also, I must remind that we should not forget about field alignment inside\n> the TOAST pointer.\n\nWhat field alignment inside the TOAST pointers?\nCurrent TOAST pointers are not aligned: the varatt_external struct is\ncopied into and from the va_data section of varattrib_1b_e when we\nneed to access the data; so as far as I know this struct has no\nalignment to speak of.\n\n> As it was already mentioned, it seems not very reasonable trying to save\n> a byte or two while we are storing out-of-line values of at least 2 kb in size.\n\nIf we were talking about the data we're storing externally, sure. But\nthis is data we store in the original tuple, and moving that around is\nrelatively expensive. Reducing the aligned size of the toast pointer\ncan help reduce the size of the heap tuple, thus increasing the\nefficiency of the primary data table.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n\n", "msg_date": "Mon, 22 May 2023 19:14:19 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "On Mon, May 22, 2023 at 6:47 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> Last but not least I remember somebody on the mailing list suggested\n> adding ZSTD compression support for TOAST, besides LZ4. Assuming I\n> didn't dream it, the proposal was rejected due to the limited amount\n> of free bits in ToastCompressionId. 
It was argued that two possible\n> combinations that are left should be treated with care and ZSTD will\n> not bring enough value to the users compared to LZ4.\n\nThis thread, I think:\n\n https://www.postgresql.org/message-id/flat/YoMiNmkztrslDbNS%40paquier.xyz\n\n--Jacob\n\n\n", "msg_date": "Mon, 22 May 2023 13:47:43 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "On Thu, May 18, 2023 at 8:06 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> This enum still has many options to go before it exceeds the maximum\n> of the uint8 va_tag field. Therefore, I don't think we have no disk\n> representations left, nor do I think we'll need to add another option\n> to the ToastCompressionId enum.\n> As an example, we can add another VARTAG option for dictionary-enabled\n> external toast; like what the pluggable toast patch worked on. I think\n> we can salvage some ideas from that patch, even if the main idea got\n> stuck.\n\nAdding another VARTAG option is somewhat different from adding another\nToastCompressionId. I think that right now we have embedded in various\nplaces the idea that VARTAG_EXTERNAL is the only thing that shows up\non disk, and we'd need to track down all such places and adjust them\nif we add other VARTAG types in the future. Depending on how it is to\nbe used, adding a new ToastCompressionId might be less work. However,\nI don't think we can use the possibility of adding a new VARTAG value\nas a reason why it's OK to use up the last possible ToastCompressionId\nvalue for something non-extensible.\n\nFor projects like this, the details matter a lot. If the goal is to\nadd a new compression type that behaves like the existing compression\ntypes, more or less, then I think we should allocate the last\nToastCompressionId bit to mean \"some other compression ID\" and add a\n1-byte header that says which one is in use. 
But if the new feature\nbeing added is enough different from the other compression methods,\nthen it might be better to do it in some other way e.g. a new VARTAG.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 May 2023 12:33:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "On Tue, May 23, 2023 at 12:33:50PM -0400, Robert Haas wrote:\n> For projects like this, the details matter a lot. If the goal is to\n> add a new compression type that behaves like the existing compression\n> types, more or less, then I think we should allocate the last\n> ToastCompressionId bit to mean \"some other compression ID\" and add a\n> 1-byte header that says which one is in use. But if the new feature\n> being added is enough different from the other compression methods,\n> then it might be better to do it in some other way e.g. a new VARTAG.\n\nAgreed. While the compression argument and the possibility to add\nmore options to toast pointers are very appealing, FWIW, I'd like to\nthink that the primary target is the 4-byte OID assignment limit of\nwhere backends loop infinitely until a OID can be found, which can be\na real pain for users with a large number of blobs or just enough\ntoast data to trigger it.\n\nSaying that even if I sent the patch for zstd on toast..\n--\nMichael", "msg_date": "Wed, 24 May 2023 09:17:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "Hi!\n\nI've made a WIP patch that uses 64-bit TOAST value ID instead of 32-bit,\nand sent it as a part of discussion, but there was no feedback on such a\nsolution. 
There was a link to that discussion at the top of this thread.\n\nAlso, I have to note that, based on our work on Pluggable TOAST - extending\nTOAST pointer with additional structures would require review of the logical\nreplication engine, currently it is not suitable for any custom TOAST\npointers.\nCurrently we have no final solution for problems with logical replication\nfor\ncustom TOAST pointers.\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Wed, 24 May 2023 12:16:50 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": true, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "On Tue, 23 May 2023 at 18:34, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, May 18, 2023 at 8:06 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > This enum still has many options to go before it exceeds the maximum\n> > of the uint8 va_tag field. Therefore, I don't think we have no disk\n> > representations left, nor do I think we'll need to add another option\n> > to the ToastCompressionId enum.\n> > As an example, we can add another VARTAG option for dictionary-enabled\n> > external toast; like what the pluggable toast patch worked on. 
I think\n> > we can salvage some ideas from that patch, even if the main idea got\n> > stuck.\n>\n> Adding another VARTAG option is somewhat different from adding another\n> ToastCompressionId. I think that right now we have embedded in various\n> places the idea that VARTAG_EXTERNAL is the only thing that shows up\n> on disk, and we'd need to track down all such places and adjust them\n> if we add other VARTAG types in the future. Depending on how it is to\n> be used, adding a new ToastCompressionId might be less work. However,\n> I don't think we can use the possibility of adding a new VARTAG value\n> as a reason why it's OK to use up the last possible ToastCompressionId\n> value for something non-extensible.\n\nI think you might not have picked up on what I was arguing for, but I\nagree with what you just said.\n\nMy comment on not needing to invent a new ToastCompressionId was on\nthe topic of adding capabilities^ to toast pointers that do things\ndifferently than the current TOAST and need more info than just sizes,\n2x 32-bit ID and a compression algorithm.\n\n^ capabilities such as compression dictionaries (which would need to\nstore a dictionary ID in the pointer), TOAST IDs that are larger than\n32 bits, and other such advances.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n\n", "msg_date": "Wed, 24 May 2023 11:50:21 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "On Wed, May 24, 2023 at 11:50:21AM +0200, Matthias van de Meent wrote:\n> I think you might not have picked up on what I was arguing for, but I\n> agree with what you just said.\n> \n> My comment on not needing to invent a new ToastCompressionId was on\n> the topic of adding capabilities^ to toast pointers that do things\n> differently than the current TOAST and need more info than just sizes,\n> 2x 32-bit ID and a compression algorithm.\n\nI am not sure to 
understand why a new vartag is really required just\nfor the sake of compression when it comes to VARTAG_EXTERNAL, because\nthat's where the compression information is stored for ages. The code\nfootprint gets more invasive, as well if you add more compression\nmethods as a new vartag implies more code areas to handle.\n\n> ^ capabilities such as compression dictionaries (which would need to\n> store a dictionary ID in the pointer), TOAST IDs that are larger than\n> 32 bits, and other such advances.\n\nSaying that, I don't really see why we cannot just do both, because it\nis clear that many other projects want to fill in more data into\nvarlena headers for their own needs. Hence, I would do:\n1) Use the last bit of va_extinfo in varatt_external to link it more\ninfo related to compression, and keep the compression information\nclose to varatt_external.\n2) Add a new kind of \"custom\" vartag, for what any other requirements\nwant it to be.\n--\nMichael", "msg_date": "Wed, 6 Dec 2023 14:38:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: RFI: Extending the TOAST Pointer" }, { "msg_contents": "Hi,\n\nHere's the PoC for a custom TOAST pointer. The main idea is that custom\npointer\nprovides data space allowing to store custom metadata (i.e. TOAST method,\nrelation\nOIDs, advanced compression information, etc.), and even keep part of the data\ninline.\n\nAny feedback would be greatly appreciated.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Wed, 6 Dec 2023 12:49:46 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": true, "msg_subject": "Re: RFI: Extending the TOAST Pointer" } ]
[ { "msg_contents": "Hi.\n\nWhile reviewing another patch to the file info.c, I noticed there seem\nto be some unnecessary calls to strlen(query) in get_rel_infos()\nfunction.\n\ni.e. The query is explicitly initialized to an empty string\nimmediately prior, so why the strlen?\n\nPSA patch for this.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 11 May 2023 13:06:42 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Redundant strlen(query) in get_rel_infos" }, { "msg_contents": "On Thu, May 11, 2023 at 01:06:42PM +1000, Peter Smith wrote:\n> While reviewing another patch to the file info.c, I noticed there seem\n> to be some unnecessary calls to strlen(query) in get_rel_infos()\n> function.\n> \n> i.e. The query is explicitly initialized to an empty string\n> immediately prior, so why the strlen?\n\nIt just looks like this was copied from a surrounding area like\nget_db_infos(). Keeping the code as it is is no big deal, either, but\nyes we could just remove them and save the two calls. So ok by me.\n--\nMichael", "msg_date": "Thu, 11 May 2023 13:24:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Redundant strlen(query) in get_rel_infos" }, { "msg_contents": "> On 11 May 2023, at 06:24, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, May 11, 2023 at 01:06:42PM +1000, Peter Smith wrote:\n>> While reviewing another patch to the file info.c, I noticed there seem\n>> to be some unnecessary calls to strlen(query) in get_rel_infos()\n>> function.\n>> \n>> i.e. The query is explicitly initialized to an empty string\n>> immediately prior, so why the strlen?\n> \n> It just looks like this was copied from a surrounding area like\n> get_db_infos(). Keeping the code as it is is no big deal, either, but\n> yes we could just remove them and save the two calls. 
So ok by me.\n\nI think it's intentionally done in 73b9952e82 as defensive coding, and given\nthat this is far from a hot codepath I think leaving them is better.\n\nInstead I think it would be more worthwhile to remove these snprintf() made\nqueries and use PQExpbuffers. 29aeda6e4e6 introduced that in pg_upgrade and it\nis more in line with how we build queries in other tools.\n\nLooking at the snprintf sites made me remember a patchset I worked on last year\n(but I don't remember if I ended up submitting); there is no need to build one\nof the queries on the stack as it has no variables. The attached 0003 (which\nneeds a reindent of the query text) comes from that patchset. I think we\nshould do this regardless.\n\n--\nDaniel Gustafsson", "msg_date": "Thu, 11 May 2023 11:57:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Redundant strlen(query) in get_rel_infos" }, { "msg_contents": "On Thu, May 11, 2023 at 11:57:37AM +0200, Daniel Gustafsson wrote:\n> I think it's intentionally done in 73b9952e82 as defensive coding, and given\n> that this is far from a hot codepath I think leaving them is better.\n> \n> Instead I think it would be more worthwhile to remove these snprintf() made\n> queries and use PQExpbuffers. 29aeda6e4e6 introduced that in pg_upgrade and it\n> is more in line with how we build queries in other tools.\n\nGood idea to reduce the overall presence of QUERY_ALLOC in the\nsurroundings.\n\n> Looking at the snprintf sites made me remember a patchset I worked on last year\n> (but I don't remember if I ended up submitting); there is no need to build one\n> of the queries on the stack as it has no variables. The attached 0003 (which\n> needs a reindent of the query text) comes from that patchset. 
I think we\n> should do this regardless.\n\nNot sure that this is an improvement in itself as\nget_tablespace_paths() includes QUERY_ALLOC because\nexecuteQueryOrDie() does so, so this could become a problem if\nsomeones decides to copy-paste this code with a query becomes longer\nthan QUERY_ALLOC once built? Perhaps that's not worth worrying, but I\nlike your suggestion of applying more PQExpbuffers, particularly if\napplied in a consistent way across the board. It could matter if the\ncode of get_tablespace_paths() is changed to use a query with\nparameters.\n--\nMichael", "msg_date": "Mon, 15 May 2023 16:45:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Redundant strlen(query) in get_rel_infos" }, { "msg_contents": "> On 15 May 2023, at 09:45, Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, May 11, 2023 at 11:57:37AM +0200, Daniel Gustafsson wrote:\n\n>> Looking at the snprintf sites made me remember a patchset I worked on last year\n>> (but I don't remember if I ended up submitting); there is no need to build one\n>> of the queries on the stack as it has no variables. The attached 0003 (which\n>> needs a reindent of the query text) comes from that patchset. I think we\n>> should do this regardless.\n> \n> Not sure that this is an improvement in itself as\n> get_tablespace_paths() includes QUERY_ALLOC because\n> executeQueryOrDie() does so, so this could become a problem if\n> someones decides to copy-paste this code with a query becomes longer\n> than QUERY_ALLOC once built? Perhaps that's not worth worrying,\n\nWe already have lots of invocations of executeQueryOrDie which doesn't pass via\na QUERY_ALLOC buffer so I don't see any risk with adding one more.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 15 May 2023 10:05:49 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Redundant strlen(query) in get_rel_infos" } ]
[ { "msg_contents": "Hi --\n\nWhile reviewing another patch for the file info.c, I noticed some\nmisplaced colon (':') in the verbose logs for print_rel_infos().\n\nPSA patch to fix it.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 11 May 2023 15:41:20 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "pg_upgrade - typo in verbose log" }, { "msg_contents": "> On 11 May 2023, at 07:41, Peter Smith <smithpb2250@gmail.com> wrote:\n\n> While reviewing another patch for the file info.c, I noticed some\n> misplaced colon (':') in the verbose logs for print_rel_infos().\n\nThat spelling was introduced in c2e9b2f28818 which was the initial import of\npg_upgrade into contrib/ for the 9.0 release (at that time the function was\nrelarr_print() which via a few other names was renamed to print_rel_infos() in\n0a5f1199319).\n\nIt's not entirely clear to me if the current spelling is a mistake or\nintentional, but I do agree that your version is an improvement.\n\nTo be consistent with other log output in pg_upgrade we should probably also\nwrap the relname and reltblspace in quotes as \\\"%s.%s\\\" and \\\"%s\\\" (and the\nDatabase in print_db_infos()).\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 11 May 2023 10:30:38 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade - typo in verbose log" }, { "msg_contents": "On Thu, May 11, 2023 at 6:30 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 11 May 2023, at 07:41, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> > While reviewing another patch for the file info.c, I noticed some\n> > misplaced colon (':') in the verbose logs for print_rel_infos().\n>\n> That spelling was introduced in c2e9b2f28818 which was the initial import of\n> pg_upgrade into contrib/ for the 9.0 release (at that time the function was\n> relarr_print() which via a few other names was renamed to 
print_rel_infos() in\n> 0a5f1199319).\n>\n> It's not entirely clear to me if the current spelling is a mistake or\n> intentional, but I do agree that your version is an improvement.\n>\n> To be consistent with other log output in pg_upgrade we should probably also\n> wrap the relname and reltblspace in quotes as \\\"%s.%s\\\" and \\\"%s\\\" (and the\n> Database in print_db_infos()).\n>\n\nThanks for checking, and for the feedback.\n\nPSA patch v2 updated as suggested.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 11 May 2023 19:09:28 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade - typo in verbose log" }, { "msg_contents": "On Thu, May 11, 2023 at 7:09 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, May 11, 2023 at 6:30 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 11 May 2023, at 07:41, Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > > While reviewing another patch for the file info.c, I noticed some\n> > > misplaced colon (':') in the verbose logs for print_rel_infos().\n> >\n> > That spelling was introduced in c2e9b2f28818 which was the initial import of\n> > pg_upgrade into contrib/ for the 9.0 release (at that time the function was\n> > relarr_print() which via a few other names was renamed to print_rel_infos() in\n> > 0a5f1199319).\n> >\n> > It's not entirely clear to me if the current spelling is a mistake or\n> > intentional, but I do agree that your version is an improvement.\n> >\n> > To be consistent with other log output in pg_upgrade we should probably also\n> > wrap the relname and reltblspace in quotes as \\\"%s.%s\\\" and \\\"%s\\\" (and the\n> > Database in print_db_infos()).\n> >\n>\n> Thanks for checking, and for the feedback.\n>\n> PSA patch v2 updated as suggested.\n>\n\nPing.\n\nI thought v2 was ready to be pushed, but then this thread went silent\nfor 3 months\n\n??\n\n------\nKind Regards,\nPeter Smith.\nFujitsu 
Australia\n\n\n", "msg_date": "Thu, 17 Aug 2023 18:09:31 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade - typo in verbose log" }, { "msg_contents": "On Thu, Aug 17, 2023 at 06:09:31PM +1000, Peter Smith wrote:\n> Ping.\n\nFunnily enough, I was looking at this entry yesterday, before you\nreplied, and was wondering what's happening here.\n\n> I thought v2 was ready to be pushed, but then this thread went silent\n> for 3 months\n\nChange looks fine, so applied.\n--\nMichael", "msg_date": "Fri, 18 Aug 2023 09:47:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade - typo in verbose log" }, { "msg_contents": "On Fri, Aug 18, 2023 at 10:47 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Aug 17, 2023 at 06:09:31PM +1000, Peter Smith wrote:\n> > Ping.\n>\n> Funnily enough, I was looking at this entry yesterday, before you\n> replied, and was wondering what's happening here.\n>\n> > I thought v2 was ready to be pushed, but then this thread went silent\n> > for 3 months\n>\n> Change looks fine, so applied.\n\nThanks!\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 18 Aug 2023 11:22:42 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade - typo in verbose log" }, { "msg_contents": "> On 18 Aug 2023, at 02:47, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Aug 17, 2023 at 06:09:31PM +1000, Peter Smith wrote:\n>> Ping.\n> \n> Funnily enough, I was looking at this entry yesterday, before you\n> replied, and was wondering what's happening here.\n\nIt was a combination of summer vacation, doing CFM and looking at things for\nv16, and some things were left further down on the TODO stack duing this.\n\n>> I thought v2 was ready to be pushed, but then this thread went silent\n>> for 3 months\n> \n> Change looks fine, so 
applied.\n\nAgreed, and thanks for applying.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 21 Aug 2023 10:21:14 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade - typo in verbose log" } ]
[ { "msg_contents": "While merging commits from c9f7f926484d69e2806e35343af7e472fadfede7\nthrough 3db72ebcbe20debc6552500ee9ccb4b2007f12f8 into a fork, my\ncolleague Rushabh Lathia and I noticed that the merge caused certain\nqueries to return wrong answers. Rushabh then discovered that this\nalso happens in unmodified PostgreSQL. I believe the problem is most\nlikely with one of these three commits:\n\n3bef56e116 Invent \"join domains\" to replace the below_outer_join hack.\nb448f1c8d8 Do assorted mop-up in the planner.\n2489d76c49 Make Vars be outer-join-aware.\n\nThe following test case, mostly due to Rushabh, demonstrates the\nproblem on current master:\n\nrhaas=# create table one_row as values (1);\nSELECT 1\nrhaas=# create table three_oids as select * from (values (10::oid),\n(20::oid), (30::oid)) v(oid);\nSELECT 3\nrhaas=# create table just_one_oid as select * from (values (10::oid)) v(oid);\nSELECT 1\nrhaas=# SELECT r.oid, s.oid FROM just_one_oid s LEFT JOIN (select\nthree_oids.oid from one_row, three_oids LEFT JOIN pg_db_role_setting s\nON three_oids.oid = s.setrole AND s.setdatabase = 0::oid) r ON r.oid =\ns.oid;\n oid | oid\n-----+-----\n 10 | 10\n 20 | 10\n 30 | 10\n(3 rows)\n\nThe answer is clearly wrong, because the join clause requires r.oid\nand s.oid to be equal, but in the displayed output, they are not. I'm\nnot quite sure what's happening here. The join to pg_db_role_setting\ngets removed, which seems correct, because it has a primary key index\non (setrole, setdatabase), and the join clause requires both of those\ncolumns to be equal to some specific value. The output of the whole\nsubselect is correct. 
But the join between the subselect and\njust_one_oid somehow goes wrong: the join filter that ought to be\nthere is missing.\n\n Nested Loop Left Join (cost=0.00..272137387.88 rows=16581375000 width=8)\n -> Seq Scan on just_one_oid s (cost=0.00..35.50 rows=2550 width=4)\n -> Materialize (cost=0.00..139272.12 rows=6502500 width=4)\n -> Nested Loop (cost=0.00..81358.62 rows=6502500 width=4)\n -> Seq Scan on one_row (cost=0.00..35.50 rows=2550 width=0)\n -> Materialize (cost=0.00..48.25 rows=2550 width=4)\n -> Seq Scan on three_oids (cost=0.00..35.50\nrows=2550 width=4)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 May 2023 09:31:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "v16 regression - wrong query results with LEFT JOINs + join removal" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> While merging commits from c9f7f926484d69e2806e35343af7e472fadfede7\n> through 3db72ebcbe20debc6552500ee9ccb4b2007f12f8 into a fork, my\n> colleague Rushabh Lathia and I noticed that the merge caused certain\n> queries to return wrong answers.\n\nI believe this is a variant of the existing open issue about\nthe outer-join-Vars stuff:\n\nhttps://www.postgresql.org/message-id/flat/0b819232-4b50-f245-1c7d-c8c61bf41827%40postgrespro.ru\n\nalthough perhaps different in detail --- EXPLAIN shows that the\njoin condition has gone missing completely, rather than just\nbeing put at the wrong join level. 
Possibly related to the\njoin removal that happened to pg_db_role_setting?\n\nI've been incredibly distracted by $real-life over the past few\nweeks, but am hoping to get the outer-join-Vars issues resolved\nin the next day or two.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 May 2023 09:56:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v16 regression - wrong query results with LEFT JOINs + join\n removal" }, { "msg_contents": "On Thu, May 11, 2023 at 9:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > While merging commits from c9f7f926484d69e2806e35343af7e472fadfede7\n> > through 3db72ebcbe20debc6552500ee9ccb4b2007f12f8 into a fork, my\n> > colleague Rushabh Lathia and I noticed that the merge caused certain\n> > queries to return wrong answers.\n>\n> I believe this is a variant of the existing open issue about\n> the outer-join-Vars stuff:\n>\n> https://www.postgresql.org/message-id/flat/0b819232-4b50-f245-1c7d-c8c61bf41827%40postgrespro.ru\n\nOuch, so we've had a known queries-returning-wrong-answers bug for\nmore than two months. That's not great. I'll try to find time today to\ncheck whether the patches on that thread resolve this issue.\n\n> I've been incredibly distracted by $real-life over the past few\n> weeks, but am hoping to get the outer-join-Vars issues resolved\n> in the next day or two.\n\nThanks. Hope the real life distractions are nothing too awful, but\nbest wishes either way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 May 2023 10:14:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: v16 regression - wrong query results with LEFT JOINs + join\n removal" }, { "msg_contents": "On Thu, May 11, 2023 at 10:14 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Ouch, so we've had a known queries-returning-wrong-answers bug for\n> more than two months. 
That's not great. I'll try to find time today to\n> check whether the patches on that thread resolve this issue.\n\nI tried out the v3 patches from that thread and they don't seem to\nmake any difference for me in this test case. So either (1) it's a\ndifferent issue or (2) those patches don't fully fix it or (3) I'm bad\nat testing things.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 May 2023 11:50:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: v16 regression - wrong query results with LEFT JOINs + join\n removal" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, May 11, 2023 at 10:14 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Ouch, so we've had a known queries-returning-wrong-answers bug for\n>> more than two months. That's not great. I'll try to find time today to\n>> check whether the patches on that thread resolve this issue.\n\n> I tried out the v3 patches from that thread and they don't seem to\n> make any difference for me in this test case. So either (1) it's a\n> different issue or (2) those patches don't fully fix it or (3) I'm bad\n> at testing things.\n\nYeah, I've just traced the problem to remove_rel_from_query() deciding\nthat it can drop the qual of interest :-(. I'd done this:\n\n- if (RINFO_IS_PUSHED_DOWN(rinfo, joinrelids))\n+ if (bms_is_member(ojrelid, rinfo->required_relids))\n\nas part of an unfinished effort at getting rid of RestrictInfos'\nis_pushed_down flags, and this example shows that the replacement\ncondition is faulty.\n\nWhat I'm inclined to do about it is just revert this particular\nchange. 
But I'd better run around and see where else I did that,\nbecause the idea is evidently not ready for prime time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 May 2023 12:46:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v16 regression - wrong query results with LEFT JOINs + join\n removal" }, { "msg_contents": "I wrote:\n> Yeah, I've just traced the problem to remove_rel_from_query() deciding\n> that it can drop the qual of interest :-(. I'd done this:\n> - if (RINFO_IS_PUSHED_DOWN(rinfo, joinrelids))\n> + if (bms_is_member(ojrelid, rinfo->required_relids))\n> as part of an unfinished effort at getting rid of RestrictInfos'\n> is_pushed_down flags, and this example shows that the replacement\n> condition is faulty.\n\n> What I'm inclined to do about it is just revert this particular\n> change. But I'd better run around and see where else I did that,\n> because the idea is evidently not ready for prime time.\n\nLooks like that was the only such change in 2489d76c4 or\nfollow-ons, so done, with a test case based on your example.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 May 2023 13:46:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v16 regression - wrong query results with LEFT JOINs + join\n removal" }, { "msg_contents": "On Thu, May 11, 2023 at 1:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > What I'm inclined to do about it is just revert this particular\n> > change. 
But I'd better run around and see where else I did that,\n> > because the idea is evidently not ready for prime time.\n>\n> Looks like that was the only such change in 2489d76c4 or\n> follow-ons, so done, with a test case based on your example.\n\nThanks, Tom!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 May 2023 14:19:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: v16 regression - wrong query results with LEFT JOINs + join\n removal" }, { "msg_contents": "On Thu, May 11, 2023 at 11:50 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, May 11, 2023 at 10:14 AM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> > Ouch, so we've had a known queries-returning-wrong-answers bug for\n> > more than two months. That's not great. I'll try to find time today to\n> > check whether the patches on that thread resolve this issue.\n>\n> I tried out the v3 patches from that thread and they don't seem to\n> make any difference for me in this test case. So either (1) it's a\n> different issue or (2) those patches don't fully fix it or (3) I'm bad\n> at testing things.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n> Forgive the noob question... But does this trigger a regression test to\nbe created?\nAnd who tracks/pushes that?\n\nKirk...\n\n", "msg_date": "Thu, 11 May 2023 16:16:01 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v16 regression - wrong query results with LEFT JOINs + join\n removal" }, { "msg_contents": "On Thu, May 11, 2023 at 4:16 PM Kirk Wolak <wolakk@gmail.com> wrote:\n> Forgive the noob question... But does this trigger a regression test to be created?\n> And who tracks/pushes that?\n\nTom included one in the commit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 May 2023 07:39:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: v16 regression - wrong query results with LEFT JOINs + join\n removal" } ]
[ { "msg_contents": "Hi hackers,\n\nI've come across an unexpected behavior in our CSV parser that I'd like to\nbring up for discussion.\n\n% cat example.csv\nid,rating,review\n1,5,\"Great product, will buy again.\"\n2,3,\"I bought this for my 6\" laptop but it didn't fit my 8\" tablet\"\n\n% psql\nCREATE TABLE reviews (id int, rating int, review text);\n\\COPY reviews FROM example.csv WITH CSV HEADER;\nSELECT * FROM reviews;\n\nThis gives:\n\nid | rating | review\n----+--------+-------------------------------------------------------------\n 1 | 5 | Great product, will buy again.\n 2 | 3 | I bought this for my 6 laptop but it didn't fit my 8 tablet\n(2 rows)\n\nThe parser currently accepts quoting within an unquoted field. This can lead to\ndata misinterpretation when the quote is part of the field data (e.g.,\nfor inches, like in the example).\n\nOur CSV output rules quote an entire field or not at all. But the import of\nfields with mid-field quotes might lead to surprising and undetected outcomes.\n\nI think we should throw a parsing error for unescaped mid-field quotes,\nand add a COPY option like ALLOW_MIDFIELD_QUOTES for cases where mid-field\nquotes are necessary. The error message could suggest this option when it\nencounters an unescaped mid-field quote.\n\nI think the convenience of not having to use an extra option doesn't outweigh\nthe risk of undetected data integrity issues.\n\nThoughts?\n\n/Joel\n\n", "msg_date": "Thu, 11 May 2023 16:03:33 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "čt 11. 5. 2023 v 16:04 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> Hi hackers,\n>\n> I've come across an unexpected behavior in our CSV parser that I'd like to\n> bring up for discussion.\n>\n> % cat example.csv\n> id,rating,review\n> 1,5,\"Great product, will buy again.\"\n> 2,3,\"I bought this for my 6\" laptop but it didn't fit my 8\" tablet\"\n>\n> % psql\n> CREATE TABLE reviews (id int, rating int, review text);\n> \\COPY reviews FROM example.csv WITH CSV HEADER;\n> SELECT * FROM reviews;\n>\n> This gives:\n>\n> id | rating | review\n> ----+--------+-------------------------------------------------------------\n> 1 | 5 | Great product, will buy again.\n> 2 | 3 | I bought this for my 6 laptop but it didn't fit my 8 tablet\n> (2 rows)\n>\n> The parser currently accepts quoting within an unquoted field. 
This can\n> lead to\n> data misinterpretation when the quote is part of the field data (e.g.,\n> for inches, like in the example).\n>\n> Our CSV output rules quote an entire field or not at all. But the import of\n> fields with mid-field quotes might lead to surprising and undetected\n> outcomes.\n>\n> I think we should throw a parsing error for unescaped mid-field quotes,\n> and add a COPY option like ALLOW_MIDFIELD_QUOTES for cases where mid-field\n> quotes are necessary. The error message could suggest this option when it\n> encounters an unescaped mid-field quote.\n>\n> I think the convenience of not having to use an extra option doesn't\n> outweigh\n> the risk of undetected data integrity issues.\n>\n> Thoughts?\n>\n\n+1\n\nPavel\n\n\n> /Joel\n>\n>\n\n", "msg_date": "Thu, 11 May 2023 16:30:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On Thu, 11 May 2023 at 10:04, Joel Jacobson <joel@compiler.org> wrote:\n>\n> The parser currently accepts quoting within an unquoted field. This can lead to\n> data misinterpretation when the quote is part of the field data (e.g.,\n> for inches, like in the example).\n\nI think you're thinking about it differently than the parser. I think\nthe parser is treating this the way, say, the shell treats quotes.\nThat is, it sees a quoted \"I bought this for my 6\" followed by an\nunquoted \"a laptop but it didn't fit my 8\" followed by a quoted \"\ntablet\".\n\nSo for example, in that world you might only quote commas and newlines\nso you might print something like\n\n1,2,I bought this for my \"6\"\" laptop\n\" but it \"didn't\" fit my \"8\"\"\" laptop\n\nThe actual CSV spec https://datatracker.ietf.org/doc/html/rfc4180 only\nallows fully quoted or fully unquoted fields and there can only be\nescaped double-doublequote characters in quoted fields and no\ndoublequote characters in unquoted fields.\n\nBut it also says\n\n Due to lack of a single specification, there are considerable\n differences among implementations. Implementors should \"be\n conservative in what you do, be liberal in what you accept from\n others\" (RFC 793 [8]) when processing CSV files. 
An attempt at a\n common definition can be found in Section 2.\n\n\nSo the real question is are there tools out there that generate\nentries like this and what are their intentions?\n\n> I think we should throw a parsing error for unescaped mid-field quotes,\n> and add a COPY option like ALLOW_MIDFIELD_QUOTES for cases where mid-field\n> quotes are necessary. The error message could suggest this option when it\n> encounters an unescaped mid-field quote.\n>\n> I think the convenience of not having to use an extra option doesn't outweigh\n> the risk of undetected data integrity issues.\n\nIt's also a pretty annoying experience to get a message saying \"error,\nturn this option on to not get an error\". I get what you're saying\ntoo, which is more of a risk depends on whether turning off the error\nis really the right thing most of the time or is just causing data to\nbe read incorrectly.\n\n\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 12 May 2023 14:58:31 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On 2023-05-11 Th 10:03, Joel Jacobson wrote:\n> Hi hackers,\n>\n> I've come across an unexpected behavior in our CSV parser that I'd like to\n> bring up for discussion.\n>\n> % cat example.csv\n> id,rating,review\n> 1,5,\"Great product, will buy again.\"\n> 2,3,\"I bought this for my 6\" laptop but it didn't fit my 8\" tablet\"\n>\n> % psql\n> CREATE TABLE reviews (id int, rating int, review text);\n> \\COPY reviews FROM example.csv WITH CSV HEADER;\n> SELECT * FROM reviews;\n>\n> This gives:\n>\n> id | rating |                           review\n> ----+--------+-------------------------------------------------------------\n>   1 |      5 | Great product, will buy again.\n>   2 |      3 | I bought this for my 6 laptop but it didn't fit my 8 tablet\n> (2 rows)\n\n\nMaybe this is unexpected by you, but it's not by me. 
What other sane \ninterpretation of that data could there be? And what CSV producer \noutputs such horrible content? As you've noted, ours certainly does not. \nOur rules are clear: quotes within quotes must be escaped (default \nescape is by doubling the quote char). Allowing partial fields to be \nquoted was a deliberate decision when CSV parsing was implemented, \nbecause examples have been seen in the wild.\n\nSo I don't think our behaviour is broken or needs fixing. As mentioned \nby Greg, this is an example of the adage about being liberal in what you \naccept.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n", "msg_date": "Fri, 12 May 2023 15:57:06 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On Fri, May 12, 2023, at 21:57, Andrew Dunstan wrote:\n> Maybe this is unexpected by you, but it's not by me. What other sane interpretation of that data could there be? And what CSV producer outputs such horrible content? As you've noted, ours certainly does not. Our rules are clear: quotes within quotes must be escaped (default escape is by doubling the quote char). Allowing partial fields to be quoted was a deliberate decision when CSV parsing was implemented, because examples have been seen in the wild.\n> \n> So I don't think our behaviour is broken or needs fixing. As mentioned by Greg, this is an example of the adage about being liberal in what you accept.\n> \n\nI understand your position, and your points are indeed in line with the\ntraditional \"Robustness Principle\" (aka \"Postel's Law\") [1] from 1980, which\nsuggests \"be conservative in what you send, be liberal in what you accept.\"\nHowever, I'd like to offer a different perspective that might be worth\nconsidering.\n\nA 2021 IETF draft, \"The Harmful Consequences of the Robustness Principle\" [2],\nargues that the flexibility advocated by Postel's Law can lead to problems such\nas unclear specifications and a multitude of varying implementations. 
Features\nthat initially seem helpful can unexpectedly turn into bugs, resulting in\nunanticipated consequences and data integrity risks.\n\nBased on the feedback from you and others, I'd like to revise my earlier\nproposal. Rather than adding an option to preserve the existing behavior, I now\nthink it's better to simply report an error in such cases. This approach offers\nseveral benefits: it simplifies the CSV parser, reduces the risk of\nmisinterpreting data due to malformed input, and prevents the all-too-familiar\nsituation where users blindly apply an error hint without understanding the\nconsequences.\n\nFinally, I acknowledge that we can't foresee the number of CSV producers that\nproduce mid-field quoting, and this change may cause compatibility issues for\nsome users. However, I consider this an acceptable tradeoff. Users encountering\nthe error would receive a clear message explaining that mid-field quoting is not\nallowed and that they should change their CSV producer's settings to escape\nquotes by doubling the quote character. Importantly, this change guarantees that\npreviously parsed data won't be misinterpreted, as it only enforces stricter\nparsing rules.\n\n[1] https://datatracker.ietf.org/doc/html/rfc761#section-2.10\n[2] https://www.ietf.org/archive/id/draft-iab-protocol-maintenance-05.html\n\n/Joel\n", "msg_date": "Sat, 13 May 2023 10:20:20 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On 2023-05-13 Sa 04:20, Joel Jacobson wrote:\n> On Fri, May 12, 2023, at 21:57, Andrew Dunstan wrote:\n>>\n>> Maybe this is unexpected by you, but it's not by me. What other sane \n>> interpretation of that data could there be? And what CSV producer \n>> outputs such horrible content? As you've noted, ours certainly does \n>> not. Our rules are clear: quotes within quotes must be escaped \n>> (default escape is by doubling the quote char). Allowing partial \n>> fields to be quoted was a deliberate decision when CSV parsing was \n>> implemented, because examples have been seen in the wild.\n>>\n>> So I don't think our behaviour is broken or needs fixing. As \n>> mentioned by Greg, this is an example of the adage about being \n>> liberal in what you accept.\n>>\n>\n> I understand your position, and your points are indeed in line with the\n> traditional \"Robustness Principle\" (aka \"Postel's Law\") [1] from 1980, \n> which\n> suggests \"be conservative in what you send, be liberal in what you \n> accept.\"\n> However, I'd like to offer a different perspective that might be worth\n> considering.\n>\n> A 2021 IETF draft, \"The Harmful Consequences of the Robustness \n> Principle\" [2],\n> argues that the flexibility advocated by Postel's Law can lead to \n> problems such\n> as unclear specifications and a multitude of varying implementations. 
\n> Features\n> that initially seem helpful can unexpectedly turn into bugs, resulting in\n> unanticipated consequences and data integrity risks.\n>\n> Based on the feedback from you and others, I'd like to revise my earlier\n> proposal. Rather than adding an option to preserve the existing \n> behavior, I now\n> think it's better to simply report an error in such cases. This \n> approach offers\n> several benefits: it simplifies the CSV parser, reduces the risk of\n> misinterpreting data due to malformed input, and prevents the \n> all-too-familiar\n> situation where users blindly apply an error hint without \n> understanding the\n> consequences.\n>\n> Finally, I acknowledge that we can't foresee the number of CSV \n> producers that\n> produce mid-field quoting, and this change may cause compatibility \n> issues for\n> some users. However, I consider this an acceptable tradeoff. Users \n> encountering\n> the error would receive a clear message explaining that mid-field \n> quoting is not\n> allowed and that they should change their CSV producer's settings to \n> escape\n> quotes by doubling the quote character. Importantly, this change \n> guarantees that\n> previously parsed data won't be misinterpreted, as it only enforces \n> stricter\n> parsing rules.\n>\n> [1] https://datatracker.ietf.org/doc/html/rfc761#section-2.10\n> [2] https://www.ietf.org/archive/id/draft-iab-protocol-maintenance-05.html\n>\n>\n\nI'm pretty reluctant to change something that's been working as designed \nfor almost 20 years, and about which we have hitherto had zero \ncomplaints that I recall.\n\nI could see an argument for a STRICT mode which would disallow partially \nquoted fields, although I'd like some evidence that we're dealing with a \nreal problem here. Is there really a CSV producer that produces output \nlike that you showed in your example? 
And if so has anyone objected to \nthem about the insanity of that?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Sat, 13 May 2023 08:44:48 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" 
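The STRICT mode suggested above has a ready-made analogue in Python's csv module, shown here purely for comparison (this is Python's parser, not PostgreSQL's): its dialects carry a `strict` flag that turns a stray quote after a closing quote into an error, while the default stays liberal.

```python
import csv

# The problematic line from the example.csv upthread.
line = '2,3,"I bought this for my 6" laptop but it didn\'t fit my 8" tablet"'

# The liberal default accepts the stray mid-field quotes without error.
print(next(csv.reader([line])))

# Doubling the quote char is the escape inside a quoted field.
print(next(csv.reader(['1,5,"He said ""hi"""'])))  # last field: He said "hi"

# strict=True rejects a quote that is not followed by a delimiter.
try:
    next(csv.reader([line], strict=True))
except csv.Error as exc:
    print("rejected:", exc)
```

Note that Python's liberal mode differs in detail from PostgreSQL's (it does not re-enter quoted mode mid-field), but the strict-versus-liberal knob is the same design choice being debated here.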
}, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I could see an argument for a STRICT mode which would disallow partially \n> quoted fields, although I'd like some evidence that we're dealing with a \n> real problem here. Is there really a CSV producer that produces output \n> like that you showed in your example? And if so has anyone objected to \n> them about the insanity of that?\n\nI think you'd want not just \"some evidence\" but \"compelling evidence\".\nAny such option is going to add cycles into the low-level input parser\nfor COPY, which we know is a hot spot and we've expended plenty of\nsweat on. Adding a speed penalty that will be paid by the 99.99%\nof users who don't have an issue here is going to be a hard sell.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 May 2023 09:45:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On Sat, 13 May 2023 at 09:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > I could see an argument for a STRICT mode which would disallow partially\n> > quoted fields, although I'd like some evidence that we're dealing with a\n> > real problem here. Is there really a CSV producer that produces output\n> > like that you showed in your example? And if so has anyone objected to\n> > them about the insanity of that?\n>\n> I think you'd want not just \"some evidence\" but \"compelling evidence\".\n> Any such option is going to add cycles into the low-level input parser\n> for COPY, which we know is a hot spot and we've expended plenty of\n> sweat on. Adding a speed penalty that will be paid by the 99.99%\n> of users who don't have an issue here is going to be a hard sell.\n\nWell I'm not sure that follows. 
Joel specifically claimed that an\nimplementation that didn't accept inputs like this would actually be\nsimpler and that might mean it would actually be faster.\n\nAnd I don't think you have to look very hard for inputs like this --\nplenty of people generate CSV files from simple templates or script\noutputs that don't understand escaping quotation marks at all. Outputs\nlike that will be fine as long as there's no doublequotes in the\ninputs but then one day someone will enter a doublequote in a form\nsomewhere and blammo.\n\nSo I guess the real question is whether accepting inputs with\nunescaped quotes and interpreting them the way we do is really the\nbest interpretation. Is the user best served by a) assuming they\nintended to quote part of the field and not quote part of it b) assume\nthey failed to escape the quotation mark or c) assume something's gone\nwrong and the input is entirely untrustworthy.\n\n-- \ngreg\n\n\n", "msg_date": "Sat, 13 May 2023 23:11:11 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On 2023-05-13 Sa 23:11, Greg Stark wrote:\n> On Sat, 13 May 2023 at 09:46, Tom Lane<tgl@sss.pgh.pa.us> wrote:\n>> Andrew Dunstan<andrew@dunslane.net> writes:\n>>> I could see an argument for a STRICT mode which would disallow partially\n>>> quoted fields, although I'd like some evidence that we're dealing with a\n>>> real problem here. Is there really a CSV producer that produces output\n>>> like that you showed in your example? And if so has anyone objected to\n>>> them about the insanity of that?\n>> I think you'd want not just \"some evidence\" but \"compelling evidence\".\n>> Any such option is going to add cycles into the low-level input parser\n>> for COPY, which we know is a hot spot and we've expended plenty of\n>> sweat on. 
Adding a speed penalty that will be paid by the 99.99%\n>> of users who don't have an issue here is going to be a hard sell.\n> Well I'm not sure that follows. Joel specifically claimed that an\n> implementation that didn't accept inputs like this would actually be\n> simpler and that might mean it would actually be faster.\n>\n> And I don't think you have to look very hard for inputs like this --\n> plenty of people generate CSV files from simple templates or script\n> outputs that don't understand escaping quotation marks at all. Outputs\n> like that will be fine as long as there's no doublequotes in the\n> inputs but then one day someone will enter a doublequote in a form\n> somewhere and blammo.\n\n\nThe procedure described is plain wrong, and I don't have too much \nsympathy for people who implement it. Parsing CSV files might be a mild \nPITN, but constructing them is pretty darn simple. Something like this \nperl fragment should do it:\n\n do {\n   s/\"/\"\"/g;\n   $_ = qq{\"$_\"} if /[\",\\r\\n]/;\n } foreach @fields;\n print join(',',@fields),\"\\n\";\n\n\nAnd if people do follow the method you describe then their input with \nunescaped quotes will be rejected 999 times out of 1000. It's only cases \nwhere the field happens to have an even number of embedded quotes, like \nJoel's somewhat contrived example, that the input will be accepted.\n\n\n> So I guess the real question is whether accepting inputs with\n> unescapted quotes and interpreting them the way we do is really the\n> best interpretation. Is the user best served by a) assuming they\n> intended to quote part of the field and not quote part of it b) assume\n> they failed to escape the quotation mark or c) assume something's gone\n> wrong and the input is entirely untrustworthy.\n>\n\nAs I said earlier, I'm quite reluctant to break things that might have \nbeen working happily for people for many years, in order to accommodate \npeople who can't do the minimum required to produce correct CSVs. 
I have \nno idea how many are relying on it, but I would be slightly surprised if \nthe number were zero.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Sun, 14 May 2023 10:58:37 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On Sun, May 14, 2023, at 16:58, Andrew Dunstan wrote:\n> And if people do follow the method you describe then their input with\n> unescaped quotes will be rejected 999 times out of 1000. 
It's only cases where\n> the field happens to have an even number of embedded quotes, like Joel's\n> somewhat contrived example, that the input will be accepted.\n\nI concur with Andrew that my previous example might've been somewhat\ncontrived, as it deliberately included two instances of the term \"inches\".\nIt's a matter of time before someone submits a review featuring an odd number of\n\"inches\", leading to an error.\n\nHaving done some additional digging, I stumbled upon three instances [1] [2] [3]\nwhere users have misidentified their TSV/TEXT files as CSV. In these situations,\nusers have been shielded from failure due to an imbalance in quotes.\n\nHowever, in cases where a field is utilized to store text in which the double\nquotation mark is rarely used to denote inches, but instead, for quotation\nand/or HTML attributes, it's quite feasible that a large amount of\nuser-generated text could contain balanced quotes. Even more concerning is the\npotential for cases where the vast majority of inputs may not even contain\ndouble quotation marks at all. This would effectively render the issue invisible,\neven upon manual data inspection.\n\nHere's a problem scenario that I believe is plausible:\n\n1. The user wishes to import a .TXT file into PostgreSQL.\n\n2. The user examines the .TXT file and observes column headers separated by a\ndelimiter like TAB or semicolon, with subsequent rows of data also separated by\nthe same delimiter.\n\n3. The user is familiar with \"CSV\" (412M hits on Google) but not \"TSV\" (48M hits\non Google), leading to a false assumption that their file is in CSV format.\n\n4. A Google search for \"import csv into postgresql\" leads the user to a tutorial\ntitled \"Import CSV File Into PostgreSQL Table\". An example found therein:\n\nCOPY persons(first_name, last_name, dob, email)\nFROM 'C:\\sampledb\\persons.csv'\nDELIMITER ','\nCSV HEADER;\n\n5. 
The user, now confident, believes they understand how to import their \"CSV\"\nfile.\n\n6. In contrast to the \"ERROR: unterminated CSV quoted field\" examples below,\nthis user's .TXT file contains fields with balanced midfield quote-marks:\n\nblog_posts.txt:\nid message\n1 This is a <b>bold</b> statement\n\n7. The user copies the COPY command from the tutorial and modifies the file path\nand delimiter accordingly. The user then concludes that the code is functioning\nas expected and proceeds to deploy it.\n\n8. Weeks later, users complain about broken links in their blog posts. Upon\ninspection of the blog_posts table, the user identifies an issue:\n\nSELECT * FROM blog_posts;\nid | message\n----+------------------------------------------------------------------------\n1 | This is a <b>bold</b> statement\n2 | Check <a href=http://example.com/?param1=Midfield quoting>this</a> out\n(2 rows)\n\nOne of the users has used balanced quotes for the href attribute, which was\nimported successfully but the quotes were stripped, contrary to the intention of\npreserving them.\n\nContent of blog_posts.txt:\nid message\n1 This is a <b>bold</b> statement\n2 Check <a href=\"http://example.com/?param1=Midfield quoting\">this</a> out\n\nIf we made midfield quoting a CSV error, those users who are currently mistaken\nabout their TSV/TEXT files being CSV while also having balanced quotes in their\ndata, would encounter an error rather than a silent failure, which I believe\nwould be an enhancement.\n\n/Joel\n\n[1] https://www.postgresql.org/message-id/1upfg19cru2jigbm553fugj5k6iebtd4ps@4ax.com\n[2] https://stackoverflow.com/questions/44108286/unterminated-csv-quoted-field-in-postgres\n[3] https://dba.stackexchange.com/questions/306662/unterminated-csv-quoted-field-when-to-import-csv-data-file-into-postgresql\n\n\n", "msg_date": "Tue, 16 May 2023 13:43:28 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" 
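To make the silent failure in this scenario reproducible outside the database, here is a toy splitter sketching the liberal rule under discussion (an editorial illustration, not PostgreSQL's actual implementation): a quote character toggles quoted mode anywhere in a field, and a doubled quote inside a quoted section emits one literal quote.

```python
def split_liberal(line, delim=','):
    """Toy field splitter following the liberal rule: a quote char
    toggles quoted mode anywhere in a field; a doubled quote inside
    a quoted section yields one literal quote."""
    fields, field, in_quotes, i = [], [], False, 0
    while i < len(line):
        c = line[i]
        if in_quotes:
            if c == '"':
                if i + 1 < len(line) and line[i + 1] == '"':
                    field.append('"')  # doubled quote -> literal quote
                    i += 1
                else:
                    in_quotes = False
            else:
                field.append(c)
        elif c == '"':
            in_quotes = True  # a mid-field quote re-enters quoted mode
        elif c == delim:
            fields.append(''.join(field))
            field = []
        else:
            field.append(c)
        i += 1
    fields.append(''.join(field))
    return fields

# The review line from upthread: the quotes vanish, no error is raised.
print(split_liberal('2,3,"I bought this for my 6" laptop but it didn\'t fit my 8" tablet"')[2])
# The tab-separated blog_posts line parsed as "CSV": the href quotes are stripped.
print(split_liberal('2\tCheck <a href="http://example.com/?param1=Midfield quoting">this</a> out', delim='\t')[1])
# -> Check <a href=http://example.com/?param1=Midfield quoting>this</a> out
```

Run against the examples from this thread, the toy splitter yields the same stripped output the SELECT in the scenario shows.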
}, { "msg_contents": "On Tue, May 16, 2023, at 13:43, Joel Jacobson wrote:\n>If we made midfield quoting a CSV error, those users who are currently mistaken\n>about their TSV/TEXT files being CSV while also having balanced quotes in their\n>data, would encounter an error rather than a silent failure, which I believe\n>would be an enhancement.\n\nFurthermore, I think it could be beneficial to add a HINT message for all types\nof CSV/TEXT parsing errors, since the precise ERROR messages might just cause\nthe user to tinker with the options until it works, instead of carefully reading\nthrough the documentation on the various formats.\n\nPerhaps something like this:\n\nHINT: Are you sure the FORMAT matches your input?\n\nAlso, the COPY documentation says nothing about TSV, and I know TEXT isn't\nexactly TSV, but it's at least much more TSV than CSV, so maybe we should\ndescribe the differences, such as \\N. I think the best advice to users would be\nto avoid exporting to .TSV and use .CSV instead, since I've noticed e.g.\nGoogle Sheets to replace newlines in fields with blank space when\nexporting .TSV, which effectively destroys data.\n\nThe first search results for \"postgresql tsv\" on Google link to postgresql.org\npages, but the COPY docs are not one of them unfortunately.\n\nThe first relevant hit is this one:\n\n\"Importing a TSV File into Postgres | by Riley Wong\" [1]\n\nSadly, this author has also misunderstood how to properly import a .TSV file,\nhe got it all wrong, and doesn't understand or at least doesn't mention there\nare more differences than just the delimiter:\n\nCOPY listings \nFROM '/home/ec2-user/list.tsv'\nDELIMITER E'\\t'\nCSV HEADER;\n\nI must confess I have used PostgreSQL for over two decades without having really\nunderstood the detailed differences between TEXT and CSV, until recently.\n\n[1] https://medium.com/@rlwong2/importing-a-tsv-file-into-postgres-364572a004bf\n\n", "msg_date": "Tue, 16 May 2023 19:15:03 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" 
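On the point that TEXT is not quite TSV: a rough sketch of COPY's default text-format field encoding, based on the escaping rules described in the COPY documentation (tab delimiter assumed; not a complete implementation), shows where the two diverge — NULL is spelled \N, and backslash, tab, newline and carriage return are backslash-escaped.

```python
def encode_text_field(value):
    """Sketch of COPY's default text-format encoding (tab-delimited):
    NULL is written as \\N; backslash, tab, newline and carriage
    return are backslash-escaped so data can't mimic structure."""
    if value is None:
        return r'\N'
    return (value.replace('\\', '\\\\')
                 .replace('\t', r'\t')
                 .replace('\n', r'\n')
                 .replace('\r', r'\r'))

row = [None, 'multi\nline', r'a literal \N string']
print('\t'.join(encode_text_field(v) for v in row))
```

A naive TSV writer that skips these escapes would emit a literal \N unprotected, which a text-format reader then misreads as NULL — one of the differences worth documenting.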
}, { "msg_contents": "On 2023-05-16 Tu 13:15, Joel Jacobson wrote:\n> On Tue, May 16, 2023, at 13:43, Joel Jacobson wrote:\n> >If we made midfield quoting a CSV error, those users who are \n> currently mistaken\n> >about their TSV/TEXT files being CSV while also having balanced \n> quotes in their\n> >data, would encounter an error rather than a silent failure, which I \n> believe\n> >would be an enhancement.\n>\n> Furthermore, I think it could be beneficial to add a HINT message for \n> all type\n> of CSV/TEXT parsing errors, since the precise ERROR messages might \n> just cause\n> the user to tinker with the options until it works, instead of \n> carefully reading\n> through the documentation on the various formats.\n>\n> Perhaps something like this:\n>\n> HINT: Are you sure the FORMAT matches your input?\n>\n> Also, the COPY documentation says nothing about TSV, and I know TEXT isn't\n> exactly TSV, but it's at least much more TSV than CSV, so maybe we should\n> describe the differences, such as \\N. 
I think the best advise to users \n> would be\n> to avoid exporting to .TSV and use .CSV instead, since I've noticed e.g.\n> Google Sheets to replace newlines in fields with blank space when\n> exporting .TSV, which effectively destroys data.\n>\n> The first search results for \"postgresql tsv\" on Google link to \n> postgresql.org\n> pages, but the COPY docs are not one of them unfortunately.\n>\n> The first relevant hit is this one:\n>\n> \"Importing a TSV File into Postgres | by Riley Wong\" [1]\n>\n> Sadly, this author has also misunderstood how to properly import a \n> .TSV file,\n> he got it all wrong, and doesn't understand or at least doesn't \n> mention there\n> are more differences than just the delimiter:\n>\n> COPY listings\n> FROM '/home/ec2-user/list.tsv'\n> DELIMITER E'\\t'\n> CSV HEADER;\n>\n> I must confess I have used PostgreSQL for over two decades without \n> having really\n> understood the detailed differences between TEXT and CSV, until recently.\n>\n> [1] \n> https://medium.com/@rlwong2/importing-a-tsv-file-into-postgres-364572a004bf\n\n\nYou can use CSV mode pretty reliably for TSV files. The trick is to use \na quoting char that shouldn't appear, such as E'\\x01' as well as setting \nthe delimiter to E'\\t'. 
Yes, it's far from obvious.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Wed, 17 May 2023 13:42:08 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On Wed, May 17, 2023, at 19:42, Andrew Dunstan wrote:\n> You can use CSV mode pretty reliably for TSV files. The trick is to use a\n> quoting char that shouldn't appear, such as E'\\x01' as well as setting the\n> delimiter to E'\\t'. 
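A minimal sketch of why the trick works, using Python's csv module as a stand-in for COPY's CSV parser (the sample data is hypothetical):

```python
import csv
import io

# A TSV line whose first field starts with a double quote. With the default
# quotechar ('"') the CSV parser goes into quoted mode and swallows the tab;
# with a control character that cannot appear in the data, it does not.
line = '"x\ty"\tz\n'

default_rows = list(csv.reader(io.StringIO(line), delimiter='\t'))
bogus_quote_rows = list(csv.reader(io.StringIO(line), delimiter='\t', quotechar='\x01'))

print(default_rows)      # [['x\ty', 'z']]    -- two fields, quotes consumed
print(bogus_quote_rows)  # [['"x', 'y"', 'z']] -- three fields, quotes kept
```

The same idea carries over to COPY: a quote character that can never occur in the input means quoted-field parsing can never be triggered by accident.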
Yes, it's far from obvious.\n\nI've been using that trick myself many times in the past, but thanks to this\ndeep-dive into this topic, it looks to me like TEXT would be a better format\nfit when dealing with unquoted TSV files, or?\n\nOTOH, one would then need to inspect the TSV file doesn't contain \\. on an empty\nline...\n\nI was about to suggest we perhaps should consider adding a TSV format, that\nis like TEXT excluding the PostgreSQL specific things like \\. and \\N,\nbut then I tested exporting TSV from Numbers on Mac and Google Sheets,\nand I can see there are incompatible differences. Numbers quote fields\nthat contain double-quote marks, while Google Sheets doesn't.\nNone of them (unsurpringly) uses midfield quoting though.\n\nAnyone using Excel that could try exporting the following example as CSV/TSV?\n\nCREATE TABLE t (a text, b text, c text, d text);\nINSERT INTO t (a, b, c, d)\nVALUES ('unquoted','a \"quoted\" string', 'field, with a comma', E'field\\t with a tab');\n\nI agree with you that it's unwise to change something that's been working\ngreat for such a long time, and I agree CSV-files are probably not a problem\nper se, but I think you will agree with me TSV-files is a different story,\nfrom a user-friendliness and correctness perspective. Sure, we could just say\n\"Don't use TSV! Use CSV instead!\" in the docs, that would be an improvement\nI think, but there is currently nothing on \"TSV\" in the docs, so users will\ngoogle and find all these broken dangerous suggestions on work-arounds.\n\nThoughts?\n\n/Joel", "msg_date": "Wed, 17 May 2023 23:45:54 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On Wed, May 17, 2023 at 5:47 PM Joel Jacobson <joel@compiler.org> wrote:\n\n> On Wed, May 17, 2023, at 19:42, Andrew Dunstan wrote:\n> > You can use CSV mode pretty reliably for TSV files. 
The trick is to use a\n> > quoting char that shouldn't appear, such as E'\\x01' as well as setting\n> the\n> > delimiter to E'\\t'. Yes, it's far from obvious.\n>\n> I've been using that trick myself many times in the past, but thanks to\n> this\n> deep-dive into this topic, it looks to me like TEXT would be a better\n> format\n> fit when dealing with unquoted TSV files, or?\n>\n> OTOH, one would then need to inspect the TSV file doesn't contain \\. on an\n> empty\n> line...\n>\n> I was about to suggest we perhaps should consider adding a TSV format, that\n> is like TEXT excluding the PostgreSQL specific things like \\. and \\N,\n> but then I tested exporting TSV from Numbers on Mac and Google Sheets,\n> and I can see there are incompatible differences. Numbers quote fields\n> that contain double-quote marks, while Google Sheets doesn't.\n> None of them (unsurpringly) uses midfield quoting though.\n>\n> Anyone using Excel that could try exporting the following example as\n> CSV/TSV?\n>\n> CREATE TABLE t (a text, b text, c text, d text);\n> INSERT INTO t (a, b, c, d)\n> VALUES ('unquoted','a \"quoted\" string', 'field, with a comma', E'field\\t\n> with a tab');\n>\n>\nHere you go. Not horrible handling. (I use DataGrip so I saved it from\nthere directly as TSV,\njust for an extra datapoint).\n\nFWIW, if you copy/paste in windows, the data, the field with the tab gets\nsplit into another column in Excel.\nBut saving it as a file, and opening it.\nSaving it as XLSX, and then having Excel save it as a TSV (versus opening a\ntext file, and saving it back)\n\nKirk...", "msg_date": "Wed, 17 May 2023 18:18:05 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On Thu, May 18, 2023, at 00:18, Kirk Wolak wrote:\n> Here you go. Not horrible handling. 
(I use DataGrip so I saved it from there\n> directly as TSV, just for an extra datapoint).\n>\n> FWIW, if you copy/paste in windows, the data, the field with the tab gets\n> split into another column in Excel. But saving it as a file, and opening it.\n> Saving it as XLSX, and then having Excel save it as a TSV (versus opening a\n> text file, and saving it back)\n\nVery useful, thanks.\n\nInteresting, DataGrip contrary to Excel doesn't quote fields with commas in TSV.\nAll the DataGrip/Excel TSV variants uses quoting when necessary,\ncontrary to Google Sheets's TSV-format, that doesn't quote fields at all.\n\nDataGrip/Excel terminate also the last record with newline,\nwhile Google Sheets omit the newline for the last record,\n(which is bad, since then a streaming reader wouldn't know\nif the last record is completed or not.)\n\nThis makes me think we probably shouldn't add a new TSV format,\nsince there is no consistency between vendors.\nIt's impossible to deduce with certainty if a TSV-field that\nbegins with a double quotation mark is quoted or unquoted.\n\nTwo alternative ideas:\n\n1. How about adding a `WITHOUT QUOTE` or `QUOTE NONE` option in conjunction\nwith `COPY ... WITH CSV`?\n\nInternally, it would just set\n\n    quotec = '\\0';`\n\nso it would't affect performance at all.\n\n2. How about adding a note on the complexities of dealing with TSV files in the\nCOPY documentation?\n\n/Joel", "msg_date": "Thu, 18 May 2023 08:00:28 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On Thu, May 18, 2023, at 08:00, Joel Jacobson wrote:\n> 1. How about adding a `WITHOUT QUOTE` or `QUOTE NONE` option in conjunction\n> with `COPY ... WITH CSV`?\n\nMore ideas:\n[ QUOTE 'quote_character' | UNQUOTED ]\nor\n[ QUOTE 'quote_character' | NO_QUOTE ]\n\nThinking about it, I recall another hack;\nspecifying a non-existing char as the delimiter to force the entire line into a\nsingle column table. 
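A sketch of that hack in Python terms, with the csv module standing in for COPY and a hypothetical log sample:

```python
import csv
import io

# Unstructured log lines: pick a delimiter and quote character that cannot
# occur in the data, so every whole line becomes a single-column record.
log = 'GET /index.html 200\nPOST /login 302, "weird, quoted"\n'

rows = list(csv.reader(io.StringIO(log), delimiter='\x01', quotechar='\x02'))
print(rows)  # each row is the entire original line as one field
```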
For that use-case, we could also provide an option that\nwould internally set:\n\n    delimc = '\\0';\n\nHow about:\n\n[DELIMITER 'delimiter_character' | UNDELIMITED ]\nor\n[DELIMITER 'delimiter_character' | NO_DELIMITER ]\nor it should be more use-case-based and intuitive:\n[DELIMITER 'delimiter_character' | WHOLE_LINE_AS_RECORD ]\n\n/Joel", "msg_date": "Thu, 18 May 2023 08:19:24 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "čt 18. 5. 2023 v 8:01 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Thu, May 18, 2023, at 00:18, Kirk Wolak wrote:\n> > Here you go. Not horrible handling. (I use DataGrip so I saved it from\n> there\n> > directly as TSV, just for an extra datapoint).\n> >\n> > FWIW, if you copy/paste in windows, the data, the field with the tab gets\n> > split into another column in Excel. 
But saving it as a file, and opening\n> it.\n> > Saving it as XLSX, and then having Excel save it as a TSV (versus\n> opening a\n> > text file, and saving it back)\n>\n> Very useful, thanks.\n>\n> Interesting, DataGrip contrary to Excel doesn't quote fields with commas\n> in TSV.\n> All the DataGrip/Excel TSV variants uses quoting when necessary,\n> contrary to Google Sheets's TSV-format, that doesn't quote fields at all.\n>\n\nMaybe there is another third implementation in Libre Office.\n\nGenerally TSV is not well specified, and then the implementations are not\nconsistent.\n\n\n\n>\n> DataGrip/Excel terminate also the last record with newline,\n> while Google Sheets omit the newline for the last record,\n> (which is bad, since then a streaming reader wouldn't know\n> if the last record is completed or not.)\n>\n> This makes me think we probably shouldn't add a new TSV format,\n> since there is no consistency between vendors.\n> It's impossible to deduce with certainty if a TSV-field that\n> begins with a double quotation mark is quoted or unquoted.\n>\n> Two alternative ideas:\n>\n> 1. How about adding a `WITHOUT QUOTE` or `QUOTE NONE` option in conjunction\n> with `COPY ... WITH CSV`?\n>\n> Internally, it would just set\n>\n>     quotec = '\\0';`\n>\n> so it would't affect performance at all.\n>\n> 2. How about adding a note on the complexities of dealing with TSV files\n> in the\n> COPY documentation?\n>\n> /Joel\n>\n>", "msg_date": "Thu, 18 May 2023 08:35:26 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" 
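The vendor inconsistency described above is easy to reproduce even within a single library: Python's csv module can emit either style of "TSV" depending on dialect options (an illustrative sketch, not any vendor's exact output):

```python
import csv
import io

row = ['unquoted', 'a "quoted" string', 'field\twith a tab']

# Excel-like dialect: quote any field containing the delimiter or a quote.
quoted = io.StringIO()
csv.writer(quoted, delimiter='\t', quoting=csv.QUOTE_MINIMAL).writerow(row)

# Sheets-like dialect: never quote; the delimiter must be escaped instead.
unquoted = io.StringIO()
csv.writer(unquoted, delimiter='\t', quoting=csv.QUOTE_NONE,
           escapechar='\\').writerow(row)

# The same row serializes to two incompatible "TSV" lines.
print(quoted.getvalue())
print(unquoted.getvalue())
```

A reader that guesses wrong about which dialect produced a file will silently misparse it, which is the core of the problem discussed in this thread.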
}, { "msg_contents": "On Thu, May 18, 2023, at 08:35, Pavel Stehule wrote:\n> Maybe there is another third implementation in Libre Office.\n>\n> Generally TSV is not well specified, and then the implementations are not consistent.\n\nThanks Pavel, that was a very interesting case indeed:\n\nLibre Office (tested on Mac) doesn't have a separate TSV format,\nbut its CSV format allows specifying custom \"Field delimiter\" and\n\"String delimiter\".\n\nHow peculiar, in Libre Office, when trying to write double quotation marks\n(using Shift+2 on my keyboard) you actually don't get the normal double\nquotation marks, but some special type of Unicode-quoting,\ne2 80 9c (\"LEFT DOUBLE QUOTATION MARK\") and\ne2 80 9d (\"RIGHT DOUBLE QUOTATION MARK\"),\nand in the .CSV file you get the normal double quotation marks as\n\"String delimiter\":\n\na,b,c,d,e\nunquoted,“this field is quoted”,this “word” is quoted,\"field with , comma\",field with tab\n\nSo, my \"this field is quoted\" experiment was exported unquoted since their\nquotation marks don't need to be quoted.\n\nOn Thu, May 18, 2023, at 08:35, Pavel Stehule wrote:> Maybe there is another third implementation in Libre Office.>> Generally TSV is not well specified, and then the implementations are not consistent.Thanks Pavel, that was a very interesting case indeed:Libre Office (tested on Mac) doesn't have a separate TSV format,but its CSV format allows specifying custom \"Field delimiter\" and\"String delimiter\".How peculiar, in Libre Office, when trying to write double quotation marks(using Shift+2 on my keyboard) you actually don't get the normal doublequotation marks, but some special type of Unicode-quoting,e2 80 9c (\"LEFT DOUBLE QUOTATION MARK\") ande2 80 9d (\"RIGHT DOUBLE QUOTATION MARK\"),and in the .CSV file you get the normal double quotation marks as\"String delimiter\":a,b,c,d,eunquoted,“this field is quoted”,this “word” is quoted,\"field with , comma\",field with  tabSo, my \"this field is quoted\" 
experiment was exported unquoted since theirquotation marks don't need to be quoted.", "msg_date": "Thu, 18 May 2023 09:51:13 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On 2023-05-18 Th 02:19, Joel Jacobson wrote:\n> On Thu, May 18, 2023, at 08:00, Joel Jacobson wrote:\n> > 1. How about adding a `WITHOUT QUOTE` or `QUOTE NONE` option in \n> conjunction\n> > with `COPY ... WITH CSV`?\n>\n> More ideas:\n> [ QUOTE 'quote_character' | UNQUOTED ]\n> or\n> [ QUOTE 'quote_character' | NO_QUOTE ]\n>\n> Thinking about it, I recall another hack;\n> specifying a non-existing char as the delimiter to force the entire \n> line into a\n> single column table. For that use-case, we could also provide an \n> option that\n> would internally set:\n>\n>     delimc = '\\0';\n>\n> How about:\n>\n> [DELIMITER 'delimiter_character' | UNDELIMITED ]\n> or\n> [DELIMITER 'delimiter_character' | NO_DELIMITER ]\n> or it should be more use-case-based and intuitive:\n> [DELIMITER 'delimiter_character' | WHOLE_LINE_AS_RECORD ]\n>\n>\n\nQUOTE NONE and DELIMITER NONE should work fine. NONE is already a \nkeyword, so the disturbance should be minimal.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-18 Th 02:19, Joel Jacobson\n wrote:\n\n\n\n\n\nOn Thu, May 18, 2023, at 08:00, Joel Jacobson wrote:\n\n> 1. How about adding a `WITHOUT QUOTE` or `QUOTE NONE`\n option in conjunction\n\n> with `COPY ... WITH CSV`?\n\n\n\nMore ideas:\n\n[ QUOTE 'quote_character' | UNQUOTED ]\n\nor\n\n[ QUOTE 'quote_character' | NO_QUOTE ]\n\n\n\nThinking about it, I recall another hack;\n\nspecifying a non-existing char as the delimiter to force the\n entire line into a\n\nsingle column table. 
For that use-case, we could also provide\n an option that\n\nwould internally set:\n\n\n\n    delimc = '\\0';\n\n\n\nHow about:\n\n\n\n[DELIMITER 'delimiter_character' | UNDELIMITED ]\n\nor\n\n[DELIMITER 'delimiter_character' | NO_DELIMITER ]\n\nor it should be more use-case-based and intuitive:\n\n[DELIMITER 'delimiter_character' | WHOLE_LINE_AS_RECORD ]\n\n\n\n\n\n\n\n\nQUOTE NONE and DELIMITER NONE should work fine. NONE is already a\n keyword, so the disturbance should be minimal.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 18 May 2023 08:24:30 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "\tJoel Jacobson wrote:\n\n> I've been using that trick myself many times in the past, but thanks to this\n> deep-dive into this topic, it looks to me like TEXT would be a better format\n> fit when dealing with unquoted TSV files, or?\n> \n> OTOH, one would then need to inspect the TSV file doesn't contain \\. on an\n> empty line...\n\nNote that this is the case for valid CSV contents, since backslash-dot\non a line by itself is both an end-of-data marker for COPY FROM and a\nvalid CSV line.\nHaving this line in the data results in either an error or having the\nrest of the data silently discarded, depending on the context. There\nis some previous discussion about this in [1].\nSince the TEXT format doesn't have this kind of problem, one solution\nis to filter the data through PROGRAM with an [untrusted CSV]->TEXT\nfilter. 
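A minimal sketch of such a CSV-to-TEXT filter in Python (simplified: it ignores NULLs, which CSV cannot represent, and only escapes the characters that are special in COPY's text format, namely backslash, tab, newline, and carriage return):

```python
import csv
import io

def csv_to_copy_text(reader, out):
    """Convert CSV rows to lines in PostgreSQL's COPY TEXT format."""
    for row in reader:
        cols = []
        for field in row:
            # Escape the characters that are special in the text format, so
            # e.g. a data line containing only backslash-dot stays harmless.
            field = (field.replace('\\', '\\\\')
                          .replace('\t', '\\t')
                          .replace('\n', '\\n')
                          .replace('\r', '\\r'))
            cols.append(field)
        out.write('\t'.join(cols) + '\n')

# Example: a CSV row starting with \. no longer looks like an end-of-data marker.
buf = io.StringIO()
csv_to_copy_text(csv.reader(io.StringIO('a,"b c"\n\\.,d\n')), buf)
print(buf.getvalue())
```

Hooked up via `COPY ... FROM PROGRAM`, a filter along these lines sidesteps the backslash-dot ambiguity entirely, because the literal backslash in the data is re-escaped on the way in.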
This is to be preferred over direct CSV loading when\nstrictness or robustness are more important than convenience.\n\n\n[1]\nhttps://www.postgresql.org/message-id/10e3eff6-eb04-4b3f-aeb4-b920192b977a@manitou-mail.org\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Thu, 18 May 2023 18:48:39 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On Thu, May 18, 2023, at 18:48, Daniel Verite wrote:\n> Joel Jacobson wrote:\n>> OTOH, one would then need to inspect the TSV file doesn't contain \\. on an\n>> empty line...\n>\n> Note that this is the case for valid CSV contents, since backslash-dot\n> on a line by itself is both an end-of-data marker for COPY FROM and a\n> valid CSV line.\n> Having this line in the data results in either an error or having the\n> rest of the data silently discarded, depending on the context. There\n> is some previous discussion about this in [1].\n> Since the TEXT format doesn't have this kind of problem, one solution\n> is to filter the data through PROGRAM with an [untrusted CSV]->TEXT\n> filter. This is to be preferred over direct CSV loading when\n> strictness or robustness are more important than convenience.\n>\n>\n> [1]\n> https://www.postgresql.org/message-id/10e3eff6-eb04-4b3f-aeb4-b920192b977a@manitou-mail.org\n\nThanks for sharing the old thread, very useful.\nI see I've failed miserably to understand all the details of the COPY command.\n \nUpon reading the thread, I'm still puzzled about one thing:\n\nWhy does \\. 
need to have a special meaning when using COPY FROM with files?\n\nI understand its necessity for STDIN, given that the end of input needs to be\nexplicitly defined.\nHowever, for files, we have a known file size and the end-of-file can be\ndetected without the need for special markers.\n\nAlso, is the difference in how server-side COPY CSV is capable of dealing\nwith \\. but apparently not the client-side \\COPY CSV documented somewhere?\n\nCREATE TABLE t (c text);\nINSERT INTO t (c) VALUES ('foo'), (E'\\n\\\\.\\n'), ('bar');\n\n-- Works OK:\nCOPY t TO '/tmp/t.csv' WITH CSV;\nTRUNCATE t;\nCOPY t FROM '/tmp/t.csv' WITH CSV;\n\n-- Doesn't work:\n\\COPY t TO '/tmp/t.csv' WITH CSV;\nTRUNCATE t;\n\\COPY t FROM '/tmp/t.csv' WITH CSV;\nERROR: unterminated CSV quoted field\nCONTEXT: COPY t, line 4: \"\"\n\\.\n\"\n\n/Joel\n\n\n", "msg_date": "Fri, 19 May 2023 08:19:30 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "\tJoel Jacobson wrote:\n\n> I understand its necessity for STDIN, given that the end of input needs to\n> be explicitly defined.\n> However, for files, we have a known file size and the end-of-file can be\n> detected without the need for special markers.\n> \n> Also, is the difference in how server-side COPY CSV is capable of dealing\n> with \\. but apparently not the client-side \\COPY CSV documented somewhere?\n\npsql implements the client-side \"\\copy table from file...\" with\nCOPY table FROM STDIN ...\n\nCOPY FROM file CSV somewhat differs as your example shows,\nbut it still mishandle \\. when unquoted. 
For instance, consider this\nfile to load with COPY\tt FROM '/tmp/t.csv' WITH CSV\n$ cat /tmp/t.csv\nline 1\n\\.\nline 3\nline 4\n\nIt results in having only \"line 1\" being imported.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 19 May 2023 18:06:34 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On Fri, May 19, 2023, at 18:06, Daniel Verite wrote:\n> COPY FROM file CSV somewhat differs as your example shows,\n> but it still mishandle \\. when unquoted. For instance, consider this\n> file to load with COPY\tt FROM '/tmp/t.csv' WITH CSV\n> $ cat /tmp/t.csv\n> line 1\n> \\.\n> line 3\n> line 4\n>\n> It results in having only \"line 1\" being imported.\n\nHmm, this is a problem for one of the new use-cases I brought up that would be\npossible with DELIMITER NONE QUOTE NONE, i.e. to import unstructured log files,\nwhere each raw line should be imported \"as is\" into a single text column.\n\nIs there a valid reason why \\. is needed for COPY FROM filename?\nIt seems to me it would only be necessary for the COPY FROM STDIN case,\nsince files have a natural end-of-file and a known file size.\n\n/Joel\n\n\n", "msg_date": "Sat, 20 May 2023 09:16:30 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "\tJoel Jacobson wrote:\n\n> Is there a valid reason why \\. is needed for COPY FROM filename?\n> It seems to me it would only be necessary for the COPY FROM STDIN case,\n> since files have a natural end-of-file and a known file size.\n\nLooking at CopyReadLineText() over at [1], I don't see a reason why\nthe unquoted \\. 
could not be handled with COPY FROM file.\nEven COPY FROM STDIN looks like it could be benefit, so that\n\\copy from file csv would hopefully not choke or truncate the data.\nThere's still the case when the CSV data is embedded in a psql script\n(psql is unable to know where it ends), but for that, \"don't do that\"\nmight be a reasonable answer.\n\n\n[1]\nhttps://doxygen.postgresql.org/copyfromparse_8c.html#a90201f711221dd82d0c08deedd91e1b3\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 22 May 2023 18:12:53 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On Mon, May 22, 2023 at 12:13 PM Daniel Verite <daniel@manitou-mail.org>\nwrote:\n\n> Joel Jacobson wrote:\n>\n> > Is there a valid reason why \\. is needed for COPY FROM filename?\n> > It seems to me it would only be necessary for the COPY FROM STDIN case,\n> > since files have a natural end-of-file and a known file size.\n>\n> Looking at CopyReadLineText() over at [1], I don't see a reason why\n> the unquoted \\. could not be handled with COPY FROM file.\n> Even COPY FROM STDIN looks like it could be benefit, so that\n> \\copy from file csv would hopefully not choke or truncate the data.\n> There's still the case when the CSV data is embedded in a psql script\n> (psql is unable to know where it ends), but for that, \"don't do that\"\n> might be a reasonable answer.\n>\n\nDon't have what looks like a pg_dump script?\nWe specifically create such SQL files with embedded data. Depending on the\ncircumstances,\nwe either confirm that indexes dropped and triggers are disabled... 
[Or we\ncreate a dynamic name,\nand insert it into a queue table for later processing], and then we COPY\nthe data, ending in\n\\.\n\nWe do NOT do \"CSV\", we mimic pg_dump.\n\nNow, if you are talking about only impacting a fixed data file format...\nSure. But impacting how psql\nprocesses these \\i included files... (that could hurt)", "msg_date": "Mon, 22 May 2023 12:31:43 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "\tKirk Wolak wrote:\n\n> We do NOT do \"CSV\", we mimic pg_dump.\n\npg_dump uses the text format (as opposed to csv), where\n\\. 
on a line by itself cannot appear in the data, so there's\nno problem. The problem is limited to the csv format.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 22 May 2023 22:24:27 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" }, { "msg_contents": "On Sat, May 20, 2023 at 09:16:30AM +0200, Joel Jacobson wrote:\n> On Fri, May 19, 2023, at 18:06, Daniel Verite wrote:\n> > COPY FROM file CSV somewhat differs as your example shows,\n> > but it still mishandle \\. when unquoted. For instance, consider this\n> > file to load with COPY\tt FROM '/tmp/t.csv' WITH CSV\n> > $ cat /tmp/t.csv\n> > line 1\n> > \\.\n> > line 3\n> > line 4\n> >\n> > It results in having only \"line 1\" being imported.\n> \n> Hmm, this is a problem for one of the new use-cases I brought up that would be\n> possible with DELIMITER NONE QUOTE NONE, i.e. to import unstructured log files,\n> where each raw line should be imported \"as is\" into a single text column.\n> \n> Is there a valid reason why \\. is needed for COPY FROM filename?\n\nNo.\n\n> It seems to me it would only be necessary for the COPY FROM STDIN case,\n> since files have a natural end-of-file and a known file size.\n\nRight. Even for COPY FROM STDIN, it's not needed anymore since the removal of\nprotocol v2. psql would still use it to find the end of inline COPY data,\nthough. Here's another relevant thread:\nhttps://postgr.es/m/flat/bfcd57e4-8f23-4c3e-a5db-2571d09208e2%40beta.fastmail.com\n\n\n", "msg_date": "Sat, 1 Jul 2023 22:45:31 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Should CSV parsing be stricter about mid-field quotes?" } ]
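The truncation Daniel Verite demonstrates above, where an unquoted \. line silently ends the COPY data, comes from treating a bare \. line as an end-of-data marker. The sketch below is an illustration only (a hypothetical helper, not the actual CopyReadLineText() logic in the server):

```python
def read_copy_lines(lines):
    """Collect input lines until a bare backslash-dot line.

    Simplified model of COPY's historical end-of-data handling:
    a line consisting of exactly \\. stops input, and every later
    line is silently dropped.
    """
    out = []
    for line in lines:
        if line == "\\.":  # the two-character marker \.
            break
        out.append(line)
    return out

# Daniel Verite's /tmp/t.csv example: only "line 1" survives.
result = read_copy_lines(["line 1", "\\.", "line 3", "line 4"])
print(result)  # ['line 1']
```

This is why the thread calls the unquoted \. handling a csv-format problem: in the text format the sequence is escaped in data, so a bare \. line can only mean end-of-data, while in CSV it can be legitimate field content.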
[ { "msg_contents": "I happened to notice that the query below can be inefficient.\n\n# explain (costs off)\nselect * from\n int8_tbl a left join\n (int8_tbl b inner join\n lateral (select *, b.q2 as x from int8_tbl c) ss on b.q2 = ss.q1)\n on a.q1 = b.q1;\n QUERY PLAN\n------------------------------------\n Hash Right Join\n Hash Cond: (b.q1 = a.q1)\n -> Nested Loop\n -> Seq Scan on int8_tbl b\n -> Seq Scan on int8_tbl c\n Filter: (b.q2 = q1)\n -> Hash\n -> Seq Scan on int8_tbl a\n(8 rows)\n\nFor B/C join, currently we only have one option, i.e., nestloop with\nparameterized inner path. This could be extremely inefficient in some\ncases, such as when C does not have any indexes, or when B is very\nlarge. I believe the B/C join can actually be performed with hashjoin\nor mergejoin here, as it is an inner join.\n\nThis happens because when we pull up the lateral subquery, we notice\nthat Var 'x' is a lateral reference to 'b.q2' which is outside the\nsubquery. So we wrap it in a PlaceHolderVar. This is necessary for\ncorrectness if it is a lateral reference from nullable side to\nnon-nullable item. But here in this case, the referenced item is also\nnullable, so actually we can omit the PlaceHolderVar with no harm. The\ncomment in pullup_replace_vars_callback() also explains this point.\n\n * (Even then, we could omit the PlaceHolderVar if\n * the referenced rel is under the same lowest outer join, but\n * it doesn't seem worth the trouble to check that.)\n\nAll such PHVs would imply lateral dependencies which would make us have\nno choice but nestloop. I think we should avoid such PHVs as much as\npossible. So IMO it may 'worth the trouble to check that'.\n\nAttached is a patch to check that for simple Vars. Maybe we can extend\nit to avoid PHVs for more complex expressions, but that requires some\ncodes because for now we always wrap non-var expressions to PHVs in\norder to have a place to insert nulling bitmap. As a first step, let's\ndo it for simple Vars only.\n\nAny thoughts?\n\nThanks\nRichard", "msg_date": "Fri, 12 May 2023 14:35:33 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "An inefficient query caused by unnecessary PlaceHolderVar" }, { "msg_contents": "On Fri, May 12, 2023 at 2:35 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> I happened to notice that the query below can be inefficient.\n>\n> # explain (costs off)\n> select * from\n> int8_tbl a left join\n> (int8_tbl b inner join\n> lateral (select *, b.q2 as x from int8_tbl c) ss on b.q2 = ss.q1)\n> on a.q1 = b.q1;\n> QUERY PLAN\n> ------------------------------------\n> Hash Right Join\n> Hash Cond: (b.q1 = a.q1)\n> -> Nested Loop\n> -> Seq Scan on int8_tbl b\n> -> Seq Scan on int8_tbl c\n> Filter: (b.q2 = q1)\n> -> Hash\n> -> Seq Scan on int8_tbl a\n> (8 rows)\n>\n> For B/C join, currently we only have one option, i.e., nestloop with\n> parameterized inner path. This could be extremely inefficient in some\n> cases, such as when C does not have any indexes, or when B is very\n> large. I believe the B/C join can actually be performed with hashjoin\n> or mergejoin here, as it is an inner join.\n>\n> This happens because when we pull up the lateral subquery, we notice\n> that Var 'x' is a lateral reference to 'b.q2' which is outside the\n> subquery. So we wrap it in a PlaceHolderVar. This is necessary for\n> correctness if it is a lateral reference from nullable side to\n> non-nullable item. But here in this case, the referenced item is also\n> nullable, so actually we can omit the PlaceHolderVar with no harm. The\n> comment in pullup_replace_vars_callback() also explains this point.\n>\n> * (Even then, we could omit the PlaceHolderVar if\n> * the referenced rel is under the same lowest outer join, but\n> * it doesn't seem worth the trouble to check that.)\n\nIt's nice that someone already thought about this and left us this comment :)\n\n> All such PHVs would imply lateral dependencies which would make us have\n> no choice but nestloop. I think we should avoid such PHVs as much as\n> possible. So IMO it may 'worth the trouble to check that'.\n>\n> Attached is a patch to check that for simple Vars. Maybe we can extend\n> it to avoid PHVs for more complex expressions, but that requires some\n> codes because for now we always wrap non-var expressions to PHVs in\n> order to have a place to insert nulling bitmap. As a first step, let's\n> do it for simple Vars only.\n>\n> Any thoughts?\n\nThis looks good to me.\n\nA few small tweaks suggested to comment wording:\n\n+-- lateral reference for simple Var can escape PlaceHolderVar if the\n+-- referenced rel is under the same lowest nulling outer join\n+explain (verbose, costs off)\n\nI think this is clearer: \"lateral references to simple Vars do not\nneed a PlaceHolderVar when the referenced rel is part of the same\nlowest nulling outer join\"?\n\n * lateral references to something outside the subquery being\n- * pulled up. (Even then, we could omit the PlaceHolderVar if\n- * the referenced rel is under the same lowest outer join, but\n- * it doesn't seem worth the trouble to check that.)\n+ * pulled up. Even then, we could omit the PlaceHolderVar if\n+ * the referenced rel is under the same lowest nulling outer\n+ * join.\n\nI think this is clearer: \"references something outside the subquery\nbeing pulled up and is not under the same lowest outer join.\"\n\nOne other thing: it would be helpful to have the test query output be\nstable between HEAD and this patch; perhaps add:\n\norder by 1, 2, 3, 4, 5, 6, 7\n\nto ensure stability?\n\nThanks,\nJames Coleman\n\n\n", "msg_date": "Tue, 30 May 2023 13:26:55 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: An inefficient query caused by unnecessary PlaceHolderVar" }, { "msg_contents": "On Wed, May 31, 2023 at 1:27 AM James Coleman <jtc331@gmail.com> wrote:\n\n> This looks good to me.\n\n\nThanks for the review!\n\n\n> A few small tweaks suggested to comment wording:\n>\n> +-- lateral reference for simple Var can escape PlaceHolderVar if the\n> +-- referenced rel is under the same lowest nulling outer join\n> +explain (verbose, costs off)\n>\n> I think this is clearer: \"lateral references to simple Vars do not\n> need a PlaceHolderVar when the referenced rel is part of the same\n> lowest nulling outer join\"?\n\n\nThanks for the suggestion! How about we go with \"lateral references to\nsimple Vars do not need a PlaceHolderVar when the referenced rel is\nunder the same lowest nulling outer join\"? This seems a little more\nconsistent with the comment in prepjointree.c.\n\n\n> * lateral references to something outside the subquery\n> being\n> - * pulled up. (Even then, we could omit the\n> PlaceHolderVar if\n> - * the referenced rel is under the same lowest outer\n> join, but\n> - * it doesn't seem worth the trouble to check that.)\n> + * pulled up. Even then, we could omit the\n> PlaceHolderVar if\n> + * the referenced rel is under the same lowest nulling\n> outer\n> + * join.\n>\n> I think this is clearer: \"references something outside the subquery\n> being pulled up and is not under the same lowest outer join.\"\n\n\nAgreed. Will use this one.\n\n\n> One other thing: it would be helpful to have the test query output be\n> stable between HEAD and this patch; perhaps add:\n>\n> order by 1, 2, 3, 4, 5, 6, 7\n>\n> to ensure stability?\n\n\nThanks for the suggestion! I wondered about that too but I'm a bit\nconfused about whether we should add ORDER BY in test case. I checked\n'sql/join.sql' and found that some queries are using ORDER BY but some\nare not. Not sure what the criteria are.\n\nThanks\nRichard\n", "msg_date": "Thu, 1 Jun 2023 10:30:16 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: An inefficient query caused by unnecessary PlaceHolderVar" }, { "msg_contents": "On Wed, May 31, 2023 at 10:30 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Wed, May 31, 2023 at 1:27 AM James Coleman <jtc331@gmail.com> wrote:\n>>\n>> This looks good to me.\n>\n>\n> Thanks for the review!\n\nSure thing!\n\n>>\n>> A few small tweaks suggested to comment wording:\n>>\n>> +-- lateral reference for simple Var can escape PlaceHolderVar if the\n>> +-- referenced rel is under the same lowest nulling outer join\n>> +explain (verbose, costs off)\n>>\n>> I think this is clearer: \"lateral references to simple Vars do not\n>> need a PlaceHolderVar when the referenced rel is part of the same\n>> lowest nulling outer join\"?\n>\n>\n> Thanks for the suggestion! How about we go with \"lateral references to\n> simple Vars do not need a PlaceHolderVar when the referenced rel is\n> under the same lowest nulling outer join\"? This seems a little more\n> consistent with the comment in prepjointree.c.\n\nThat sounds good to me.\n\n>>\n>> * lateral references to something outside the subquery being\n>> - * pulled up. (Even then, we could omit the PlaceHolderVar if\n>> - * the referenced rel is under the same lowest outer join, but\n>> - * it doesn't seem worth the trouble to check that.)\n>> + * pulled up. Even then, we could omit the PlaceHolderVar if\n>> + * the referenced rel is under the same lowest nulling outer\n>> + * join.\n>>\n>> I think this is clearer: \"references something outside the subquery\n>> being pulled up and is not under the same lowest outer join.\"\n>\n>\n> Agreed. Will use this one.\n>\n>>\n>> One other thing: it would be helpful to have the test query output be\n>> stable between HEAD and this patch; perhaps add:\n>>\n>> order by 1, 2, 3, 4, 5, 6, 7\n>>\n>> to ensure stability?\n>\n>\n> Thanks for the suggestion! I wondered about that too but I'm a bit\n> confused about whether we should add ORDER BY in test case. I checked\n> 'sql/join.sql' and found that some queries are using ORDER BY but some\n> are not. Not sure what the criteria are.\n\nI think it's just \"is this helpful in this test\". Obviously we don't\nneed it for correctness of this particular check, but as long as the\nplan change still occurs as desired (i.e., the ORDER BY doesn't change\nthe plan from what you're testing) I think it's fine to consider it\nauthor's choice.\n\nJames\n\n\n", "msg_date": "Thu, 1 Jun 2023 13:33:41 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: An inefficient query caused by unnecessary PlaceHolderVar" }, { "msg_contents": "On Fri, Jun 2, 2023 at 1:33 AM James Coleman <jtc331@gmail.com> wrote:\n\n> On Wed, May 31, 2023 at 10:30 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> > Thanks for the review!\n>\n> Sure thing!\n\n\nI've updated the patch according to the reviews as attached. But I did\nnot add ORDER BY clause in the test, as we don't need it for correctness\nfor this test query and the surrounding queries in join.sql don't have\nORDER BY either.\n\nThanks\nRichard", "msg_date": "Tue, 18 Jul 2023 15:17:15 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: An inefficient query caused by unnecessary PlaceHolderVar" }, { "msg_contents": "Updated this patch over 29f114b6ff, which indicates that we should apply\nthe same rules for PHVs.\n\nThanks\nRichard", "msg_date": "Mon, 15 Jan 2024 13:50:07 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: An inefficient query caused by unnecessary PlaceHolderVar" }, { "msg_contents": "On Mon, Jan 15, 2024 at 1:50 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> Updated this patch over 29f114b6ff, which indicates that we should apply\n> the same rules for PHVs.\n\nHere is a new rebase of this patch, with some tweaks to comments. I've\nalso updated the commit message to better explain the context.\n\nTo recap, this patch tries to avoid wrapping Vars and PHVs from subquery\noutput that are lateral references to something outside the subquery.\nTypically this kind of wrapping is necessary when the Var/PHV references\nthe non-nullable side of the outer join from the nullable side, because\nwe need to ensure that it is evaluated at the right place and hence is\nforced to null when the outer join should do so. But if the referenced\nrel is under the same lowest nulling outer join, we can actually omit\nthe wrapping. The PHVs generated from such kind of wrapping imply\nlateral dependencies and force us to resort to nestloop joins, so we'd\nbetter get rid of them.\n\nThis patch performs this check by remembering the relids of the nullable\nside of the lowest outer join the subquery is within. So I think it\nimproves the overall plan in the related cases with very little extra\ncost.\n\nThanks\nRichard", "msg_date": "Fri, 21 Jun 2024 10:35:30 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: An inefficient query caused by unnecessary PlaceHolderVar" } ]
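The inefficiency the thread above is about, a forced PlaceHolderVar leaving only a parameterized nestloop for the B/C join, is easy to see with a rough operation count: with no usable index on C, a nestloop rescans C once per row of B, while a hash join visits each input roughly once. The numbers below are purely illustrative (hypothetical row counts, not the planner's actual cost model):

```python
def nestloop_ops(n_outer, n_inner):
    # Parameterized nestloop with no index on the inner rel:
    # the inner rel is scanned once per outer row.
    return n_outer * n_inner


def hashjoin_ops(n_outer, n_inner):
    # Hash join: build the inner hash table once, then probe
    # it once per outer row.
    return n_outer + n_inner


# Hypothetical row counts for rels B and C.
n_b = n_c = 100_000
print(nestloop_ops(n_b, n_c))  # 10000000000
print(hashjoin_ops(n_b, n_c))  # 200000
```

The gap grows with the product of the input sizes, which is why avoiding the unnecessary PHV (and the lateral dependency it implies) lets the planner pick the much cheaper hash or merge join for the inner join.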
[ { "msg_contents": "HN had a thread regarding the challenges faced by new users during the\nadoption of Postgres in 2023.\n\nOne particular issue that garnered significant votes was the lack of a\n\"SHOW CREATE TABLE\" command, and seems like it would be an easy one to\nimplement: https://news.ycombinator.com/item?id=35908991\n\nConsidering the popularity of this request and its potential ease of\nimplementation, I wanted to bring it to your attention, as it would likely\nenhance the user experience and alleviate some of the difficulties\nencountered by newcomers.\n\nHN had a thread regarding the challenges faced by new users during the adoption of Postgres in 2023.One particular issue that garnered significant votes was the lack of a \"SHOW CREATE TABLE\" command, and seems like it would be an easy one to implement: https://news.ycombinator.com/item?id=35908991Considering the popularity of this request and its potential ease of implementation, I wanted to bring it to your attention, as it would likely enhance the user experience and alleviate some of the difficulties encountered by newcomers.", "msg_date": "Fri, 12 May 2023 04:29:09 -0700", "msg_from": "Nathaniel Sabanski <sabanski.n@gmail.com>", "msg_from_op": true, "msg_subject": "Adding SHOW CREATE TABLE" }, { "msg_contents": "Greetings,\n\n* Nathaniel Sabanski (sabanski.n@gmail.com) wrote:\n> HN had a thread regarding the challenges faced by new users during the\n> adoption of Postgres in 2023.\n> \n> One particular issue that garnered significant votes was the lack of a\n> \"SHOW CREATE TABLE\" command, and seems like it would be an easy one to\n> implement: https://news.ycombinator.com/item?id=35908991\n> \n> Considering the popularity of this request and its potential ease of\n> implementation, I wanted to bring it to your attention, as it would likely\n> enhance the user experience and alleviate some of the difficulties\n> encountered by newcomers.\n\nThis isn't as easy as it seems actually ... 
\n\nNote that using pg_dump for this purpose works quite well and also works\nto address cross-version issues. Consider that pg_dump v15 is able to\nconnect to v14, v13, v12, v11, and more, and produce a CREATE TABLE\ncommand that will work with *v15*. If you connected to a v14 database\nand did a SHOW CREATE TABLE, there's no guarantee that the CREATE TABLE\nstatement returned would work for PG v15 due to keyword changes and\nother differences that can cause issues between major versions of PG.\n\nNow, that said, we have started ending up with some similar code between\npg_dump and postgres_fdw in the form of IMPORT FOREIGN SCHEMA and maybe\nwe should consider if that code could be moved into the common library\nand made available to pg_dump, postgres_fdw, and as a SHOW CREATE TABLE\ncommand with the caveat that the produced CREATE TABLE command may not\nwork with newer versions of PG. There's an interesting question around\nif we'd consider it a bug worthy of fixing if IMPORT FOREIGN SCHEMA in\nv14 doesn't work when connecting to a v15 PG instance. Not sure if\nanyone's contemplated that. There's certainly going to be cases that we\nwouldn't accept fixing (we wouldn't add some new partitioning strategy\nto v14 just because it's in v15, for example, to make IMPORT FOREIGN\nSCHEMA work...).\n\nThanks,\n\nStephen", "msg_date": "Fri, 12 May 2023 09:47:23 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "I believe most users would anticipate a CREATE TABLE statement that aligns\nwith the currently installed version- this is the practical solution for\nthe vast majority.\n\nIn situations where a CREATE TABLE statement compatible with an older\nversion of Postgres is required, users can opt for an additional step of\nusing tools like pg_dump or an older version of Postgres itself. 
This\nallows them to ensure compatibility without compromising the practicality\nof the process.\n\nOn Fri, 12 May 2023 at 06:47, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Nathaniel Sabanski (sabanski.n@gmail.com) wrote:\n> > HN had a thread regarding the challenges faced by new users during the\n> > adoption of Postgres in 2023.\n> >\n> > One particular issue that garnered significant votes was the lack of a\n> > \"SHOW CREATE TABLE\" command, and seems like it would be an easy one to\n> > implement: https://news.ycombinator.com/item?id=35908991\n> >\n> > Considering the popularity of this request and its potential ease of\n> > implementation, I wanted to bring it to your attention, as it would\n> likely\n> > enhance the user experience and alleviate some of the difficulties\n> > encountered by newcomers.\n>\n> This isn't as easy as it seems actually ...\n>\n> Note that using pg_dump for this purpose works quite well and also works\n> to address cross-version issues. Consider that pg_dump v15 is able to\n> connect to v14, v13, v12, v11, and more, and produce a CREATE TABLE\n> command that will work with *v15*. If you connected to a v14 database\n> and did a SHOW CREATE TABLE, there's no guarantee that the CREATE TABLE\n> statement returned would work for PG v15 due to keyword changes and\n> other differences that can cause issues between major versions of PG.\n>\n> Now, that said, we have started ending up with some similar code between\n> pg_dump and postgres_fdw in the form of IMPORT FOREIGN SCHEMA and maybe\n> we should consider if that code could be moved into the common library\n> and made available to pg_dump, postgres_fdw, and as a SHOW CREATE TABLE\n> command with the caveat that the produced CREATE TABLE command may not\n> work with newer versions of PG. There's an interesting question around\n> if we'd consider it a bug worthy of fixing if IMPORT FOREIGN SCHEMA in\n> v14 doesn't work when connecting to a v15 PG instance. 
Not sure if\n> anyone's contemplated that. There's certainly going to be cases that we\n> wouldn't accept fixing (we wouldn't add some new partitioning strategy\n> to v14 just because it's in v15, for example, to make IMPORT FOREIGN\n> SCHEMA work...).\n>\n> Thanks,\n>\n> Stephen\n>\n\nI believe most users would anticipate a CREATE TABLE statement that aligns with the currently installed version- this is the practical solution for the vast majority.In situations where a CREATE TABLE statement compatible with an older version of Postgres is required, users can opt for an additional step of using tools like pg_dump or an older version of Postgres itself. This allows them to ensure compatibility without compromising the practicality of the process.On Fri, 12 May 2023 at 06:47, Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* Nathaniel Sabanski (sabanski.n@gmail.com) wrote:\n> HN had a thread regarding the challenges faced by new users during the\n> adoption of Postgres in 2023.\n> \n> One particular issue that garnered significant votes was the lack of a\n> \"SHOW CREATE TABLE\" command, and seems like it would be an easy one to\n> implement: https://news.ycombinator.com/item?id=35908991\n> \n> Considering the popularity of this request and its potential ease of\n> implementation, I wanted to bring it to your attention, as it would likely\n> enhance the user experience and alleviate some of the difficulties\n> encountered by newcomers.\n\nThis isn't as easy as it seems actually ... \n\nNote that using pg_dump for this purpose works quite well and also works\nto address cross-version issues.  Consider that pg_dump v15 is able to\nconnect to v14, v13, v12, v11, and more, and produce a CREATE TABLE\ncommand that will work with *v15*.  
If you connected to a v14 database\nand did a SHOW CREATE TABLE, there's no guarantee that the CREATE TABLE\nstatement returned would work for PG v15 due to keyword changes and\nother differences that can cause issues between major versions of PG.\n\nNow, that said, we have started ending up with some similar code between\npg_dump and postgres_fdw in the form of IMPORT FOREIGN SCHEMA and maybe\nwe should consider if that code could be moved into the common library\nand made available to pg_dump, postgres_fdw, and as a SHOW CREATE TABLE\ncommand with the caveat that the produced CREATE TABLE command may not\nwork with newer versions of PG.  There's an interesting question around\nif we'd consider it a bug worthy of fixing if IMPORT FOREIGN SCHEMA in\nv14 doesn't work when connecting to a v15 PG instance.  Not sure if\nanyone's contemplated that.  There's certainly going to be cases that we\nwouldn't accept fixing (we wouldn't add some new partitioning strategy\nto v14 just because it's in v15, for example, to make IMPORT FOREIGN\nSCHEMA work...).\n\nThanks,\n\nStephen", "msg_date": "Fri, 12 May 2023 07:34:44 -0700", "msg_from": "Nathaniel Sabanski <sabanski.n@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "Greetings,\n\nPlease don't top-post on these lists.\n\n* Nathaniel Sabanski (sabanski.n@gmail.com) wrote:\n> I believe most users would anticipate a CREATE TABLE statement that aligns\n> with the currently installed version- this is the practical solution for\n> the vast majority.\n\nPerhaps a bit more discussion about what exactly the use-case is would\nbe helpful- what would you use this feature for?\n\n> In situations where a CREATE TABLE statement compatible with an older\n> version of Postgres is required, users can opt for an additional step of\n> using tools like pg_dump or an older version of Postgres itself. 
This\n> allows them to ensure compatibility without compromising the practicality\n> of the process.\n\nThe issue is really both older and newer versions, not just older ones\nand not just newer ones.\n\nTo the extent you're interested in this, I pointed out where you could\ngo look at the existing code as well as an idea for how to move this\nforward.\n\nThanks,\n\nStephen", "msg_date": "Fri, 12 May 2023 10:40:38 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Fri, 12 May 2023, Nathaniel Sabanski wrote:\n\n>I believe most users would anticipate a CREATE TABLE statement that aligns\n>with the currently installed version- this is the practical solution for\n\nThe currently installed version of what, the server or the client?\n\nbye,\n//mirabilos\n-- \n15:41⎜<Lo-lan-do:#fusionforge> Somebody write a testsuite for helloworld :-)\n\n\n", "msg_date": "Fri, 12 May 2023 17:35:45 +0200 (CEST)", "msg_from": "Thorsten Glaser <tg@evolvis.org>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "Nathaniel Sabanski schrieb am 12.05.2023 um 13:29:\n\n> HN had a thread regarding the challenges faced by new users during\n> the adoption of Postgres in 2023.\n>\n> One particular issue that garnered significant votes was the lack of\n> a \"SHOW CREATE TABLE\" command, and seems like it would be an easy one\n> to implement: https://news.ycombinator.com/item?id=35908991\n>\n> Considering the popularity of this request and its potential ease of\n> implementation, I wanted to bring it to your attention, as it would\n> likely enhance the user experience and alleviate some of the\n> difficulties encountered by newcomers.\nWhile it would be nice to have something like that, I don't think\nit isn't really necessary. 
Pretty much every (GUI) SQL client provides\na way to see the DDL for objects in the database.\n\nFor psql fans \\d typically is enough, and they would probably not mind\nrunning pg_dump to get the full DDL.\n\nI would think that especially newcomers start with a GUI client\nthat can do this.\n\nIf you check the source of any of the popular SQL clients that generates\nthe DDL for a table, then you will also quickly realize that this isn't\na trivial thing to do.\n\nThomas\n\n\n\n", "msg_date": "Fri, 12 May 2023 17:54:00 +0200", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Fri, May 12, 2023, 08:35 Thorsten Glaser <tg@evolvis.org> wrote:\n\n> On Fri, 12 May 2023, Nathaniel Sabanski wrote:\n>\n> >I believe most users would anticipate a CREATE TABLE statement that aligns\n> >with the currently installed version- this is the practical solution for\n>\n> The currently installed version of what, the server or the client?\n>\n\nIt's an SQL Command, no specific client can/should be presumed.\n\nDavid J.\n\n>\n\nOn Fri, May 12, 2023, 08:35 Thorsten Glaser <tg@evolvis.org> wrote:On Fri, 12 May 2023, Nathaniel Sabanski wrote:\n\n>I believe most users would anticipate a CREATE TABLE statement that aligns\n>with the currently installed version- this is the practical solution for\n\nThe currently installed version of what, the server or the client?It's an SQL Command, no specific client can/should be presumed.David J.", "msg_date": "Fri, 12 May 2023 09:12:14 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Fri, 12 May 2023 at 09:12, David G. 
Johnston <david.g.johnston@gmail.com>\nwrote:\n\n>\n>\n> On Fri, May 12, 2023, 08:35 Thorsten Glaser <tg@evolvis.org> wrote:\n>\n>> On Fri, 12 May 2023, Nathaniel Sabanski wrote:\n>>\n>> >I believe most users would anticipate a CREATE TABLE statement that\n>> aligns\n>> >with the currently installed version- this is the practical solution for\n>>\n>> The currently installed version of what, the server or the client?\n>>\n>\n> It's an SQL Command, no specific client can/should be presumed.\n>\n> David J.\n>\n>>\n\nOn Fri, 12 May 2023 at 09:12, David G. Johnston <david.g.johnston@gmail.com> wrote:On Fri, May 12, 2023, 08:35 Thorsten Glaser <tg@evolvis.org> wrote:On Fri, 12 May 2023, Nathaniel Sabanski wrote:\n\n>I believe most users would anticipate a CREATE TABLE statement that aligns\n>with the currently installed version- this is the practical solution for\n\nThe currently installed version of what, the server or the client?It's an SQL Command, no specific client can/should be presumed.David J.", "msg_date": "Fri, 12 May 2023 11:58:24 -0700", "msg_from": "Nathaniel Sabanski <sabanski.n@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "> Perhaps a bit more discussion about what exactly the use-case is would\n> be helpful- what would you use this feature for?\n\nApp writers: To facilitate table creation and simplify schema verification,\nwithout relying on a GUI tool or ORM (or system calls out to pg_dump).\n\nTool writers: Would drastically cut down the implementation time and\ncomplexity to support Postgres. 
I am one of the devs of Piccolo ORM (Python\nlib supporting Postgres) and we have a lot of code dedicated to\nre-generating the CREATE TABLE statements (creation, during migrations,\netc) that could be done better by Postgres itself.\n\nEcosystem cohesion: SHOW CREATE TABLE has already been implemented in\nCockroachDB, a popular Postgres derivative.\n\nMoving to Postgres: It would help ease migrations for developers wanting to\nmove from MySQL / Percona / MariaDB to Postgres. Also it's a nice developer\nexperience to see how Postgres generates X table without extra tooling.\n\nThe intention of SHOW CREATE TABLE is not to replace the existing suite of\n\\d in psql but rather to be a developer friendly complement within SQL\nitself.", "msg_date": "Fri, 12 May 2023 13:04:00 -0700", "msg_from": "Nathaniel Sabanski <sabanski.n@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "Greetings,\n\n* Nathaniel Sabanski (sabanski.n@gmail.com) wrote:\n> > Perhaps a bit more discussion about what exactly the use-case is would\n> > be helpful- what would you use this feature for?\n> \n> App writers: To facilitate table creation and simplify schema verification,\n> without relying on a GUI tool or ORM (or system calls out to pg_dump).\n\nNot sure how it would simplify schema verification?\n\n> Tool writers: Would drastically cut down the implementation time and\n> complexity to support Postgres. I am one of the devs of Piccolo ORM (Python\n> lib supporting Postgres) and we have a lot of code dedicated to\n> re-generating the CREATE TABLE statements (creation, during migrations,\n> etc) that could be done better by Postgres itself.\n\nI'm curious- have you compared what you're doing to pg_dump's output?\nAre you confident that there aren't any distinctions between those that,\nfor whatever reason, need to exist?\n\n> Moving to Postgres: It would help ease migrations for developers wanting to\n> move from MySQL / Percona / MariaDB to Postgres. 
Also it's a nice developer\n> experience to see how Postgres generates X table without extra tooling.\n\nSeems unlikely that this would actually be all that helpful there- tools\nlike ora2pg and similar know how to query other database systems and\nwrite appropriate CREATE TABLE statements for PostgreSQL.\n\n> The intention of SHOW CREATE TABLE is not to replace the existing suite of\n> \\d in psql but rather to be a developer friendly complement within SQL\n> itself.\n\nSure, I get that.\n\nAgain, would be great to see someone actually work on this. There's\nalready a good chunk of code in core in pg_dump and in the postgres_fdw\nfor doing exactly this and it'd be great to consolidate that and at the\nsame time expose it via SQL.\n\nAnother possible option would be to add this to libpq, which is used by\npostgres_fdw, psql, pg_dump, and lots of other drivers and client\nutilities. If it's all broadly the same, personally I'd prefer it to be\nin the common library and available as a backend SQL command too, but\nperhaps there's reasons that it would be easier to implement in libpq\ninstead.\n\nThanks,\n\nStephen", "msg_date": "Fri, 12 May 2023 16:14:27 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Again, would be great to see someone actually work on this. There's\n> already a good chunk of code in core in pg_dump and in the postgres_fdw\n> for doing exactly this and it'd be great to consolidate that and at the\n> same time expose it via SQL.\n\nNote that this is hardly new ground: we've heard more-or-less the same\nproposal many times before. 
I think the reason it's gone nowhere is\nthat most of the existing infrastructure is either in pg_dump or designed\nto support pg_dump, and pg_dump is *extremely* opinionated about what\nit wants and how it wants the data sliced up, for very good reasons.\nReconciling those requirements with a typical user's \"just give me a\nreconstructed CREATE TABLE command\" request seems fairly difficult.\n\nAlso, since pg_dump will still need to support old servers, it's hard\nto believe we'd accept any proposal to move that functionality into\nthe server side, which in turn means that it's not going to be an easy\nSQL command.\n\nThese issues probably could be surmounted with enough hard work, but\nplease understand that just coming along with a request is not going\nto cause it to happen. People have already done that. (Searching\nthe mailing list archives might be edifying.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 May 2023 16:27:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Again, would be great to see someone actually work on this. There's\n> > already a good chunk of code in core in pg_dump and in the postgres_fdw\n> > for doing exactly this and it'd be great to consolidate that and at the\n> > same time expose it via SQL.\n> \n> Note that this is hardly new ground: we've heard more-or-less the same\n> proposal many times before. 
I think the reason it's gone nowhere is\n> that most of the existing infrastructure is either in pg_dump or designed\n> to support pg_dump, and pg_dump is *extremely* opinionated about what\n> it wants and how it wants the data sliced up, for very good reasons.\n> Reconciling those requirements with a typical user's \"just give me a\n> reconstructed CREATE TABLE command\" request seems fairly difficult.\n\nYet we're already duplicating much of this in postgres_fdw. If we don't\nwant to get involved in pg_dump's feelings on the subject, we could look\nto postgres_fdw's independent implementation which might be more\nin-line with what users are expecting. Having two separate copies of\ncode that does this and continuing to refuse to give users a way to ask\nfor it themselves seems at the least like an odd choice.\n\n> Also, since pg_dump will still need to support old servers, it's hard\n> to believe we'd accept any proposal to move that functionality into\n> the server side, which in turn means that it's not going to be an easy\n> SQL command.\n\nNo, it won't make sense to have yet another copy that's for the\ncurrently-running-server-only, which is why I suggested it go into\neither a common library or maybe into libpq. I don't feel it would\nbe bad for the common code to have the multi-version understanding even\nif the currently running backend will only ever have the option to ask\nfor the code path that matches its version.\n\n> These issues probably could be surmounted with enough hard work, but\n> please understand that just coming along with a request is not going\n> to cause it to happen. People have already done that. 
(Searching\n> the mailing list archives might be edifying.)\n\nAgreed- someone needs to have a fair bit of time and willingness to push\non this to make it happen.\n\nThanks,\n\nStephen", "msg_date": "Fri, 12 May 2023 16:36:51 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Fri, May 12, 2023 at 4:37 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > > Again, would be great to see someone actually work on this. There's\n> > > already a good chunk of code in core in pg_dump and in the postgres_fdw\n> > > for doing exactly this and it'd be great to consolidate that and at the\n> > > same time expose it via SQL.\n> ...\n> No, it won't make sense to have yet another copy that's for the\n> currently-running-server-only, which is why I suggested it go into\n> either a common library or maybe into libpq. I don't feel it would\n> be bad for the common code to have the multi-version understanding even\n> if the currently running backend will only ever have the option to ask\n> for the code path that matches its version.\n>\n> Hmmm... What's wrong with only being for the currently running server?\nThat's all I would expect. Also, if it was there, it limits the\nexpectations to DDL that\nworks for that server version.\n\nAlso, if it's on the backend (or an extension), then it's available to\neverything.\n\n\n> Agreed- someone needs to have a fair bit of time and willingness to push\n> on this to make it happen.\n>\n\nIf we can work through a CLEAR discussion of what it is, and is not. I\nwould be\nhappy to work on this. I like referencing the FDW. I also thought of\nreferencing\nthe CREATE TABLE xyz(LIKE abc INCLUDING ALL). While it's not doing DDL,\nit certainly has to be checking options, etc. 
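\n\n(For reference, a minimal sketch of the LIKE form mentioned here -- the
table names below are placeholders for illustration, not anything from the
thread:)

```sql
-- Hypothetical example only, to illustrate the syntax being referenced.
-- INCLUDING ALL copies defaults, constraints, indexes, comments, etc.
CREATE TABLE abc (id bigint PRIMARY KEY, note text DEFAULT '');
CREATE TABLE xyz (LIKE abc INCLUDING ALL);
```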
And pg_dump is the \"gold\nstandard\".\n\nMy approach would be to get a version working. Then figure out how to\ngenerate \"literally\" all table options, and work the process. The good news\nis that at a certain point the resulting DDL should be \"comparable\" against\na\nton of test tables.\n\nWhere do we draw the lines? Does Table DDL include all indexes?\nIt should include constraints, clearly. I would not think it should have\ntriggers.\nLiterally everything within the <<CREATE TABLE X(...);>>. (ie, no ALTER ..\nOWNER TO...)\n\nNext, I would want psql \\st to simply call this?\n\nFWIW, we parse our pg_dump output, and store the objects as individual DDL\nfiles.\nSo, I have about 1,000 tables to play with, for which I already know the\nDDL that pg_dump uses.\n\nBut it's a big commitment. I don't mind if it has a reasonable chance of\nbeing accepted.\nI accept that I will make a few mistakes (and learn) along the way.\nIf there are ANY deal killers that would prevent a reasonable solution from\nbeing accepted,\nplease let me know.\n\nKirk...", "msg_date": "Fri, 12 May 2023 19:00:23 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "Greetings,\n\n* Kirk Wolak (wolakk@gmail.com) wrote:\n> On Fri, May 12, 2023 at 4:37 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > > Stephen Frost <sfrost@snowman.net> writes:\n> > > > Again, would be great to see someone actually work on this. There's\n> > > > already a good chunk of code in core in pg_dump and in the postgres_fdw\n> > > > for doing exactly this and it'd be great to consolidate that and at the\n> > > > same time expose it via SQL.\n> > ...\n> > No, it won't make sense to have yet another copy that's for the\n> > currently-running-server-only, which is why I suggested it go into\n> > either a common library or maybe into libpq. I don't feel it would\n> > be bad for the common code to have the multi-version understanding even\n> > if the currently running backend will only ever have the option to ask\n> > for the code path that matches its version.\n> >\n> Hmmm... What's wrong with only being for the currently running server?\n> That's all I would expect. Also, if it was there, it limits the\n> expectations to DDL that\n> works for that server version.\n\nI didn't say anything was wrong with that, merely pointing out that\nhaving the same set of code for these various use-cases would be better\nthan having multiple copies of it. 
The existing code works just fine to\nanswer the question of \"when on v15, what is the v15 query?\", it just\nhappens to *also* answer \"when on v15, what is the v14 query?\" and we\nneed that already for postgres_fdw and for pg_dump.\n\n> Also, if it's on the backend (or an extension), then it's available to\n> everything.\n\nI mean ... it's already in postgres_fdw, just not in a way that can be\nreturned to the user. I don't think I'd want this functionality to\ndepend on postgres_fdw or generally on an extension though, it should\nbe part of core in some fashion.\n\n> > Agreed- someone needs to have a fair bit of time and willingness to push\n> > on this to make it happen.\n> \n> If we can work through a CLEAR discussion of what it is, and is not. I\n> would be\n> happy to work on this. I like referencing the FDW. I also thought of\n> referencing\n> the CREATE TABLE xyz(LIKE abc INCLUDING ALL). While it's not doing DDL,\n> it certainly has to be checking options, etc. And pg_dump is the \"gold\n> standard\".\n\nI'd think the FDW code would be the best starting point, but, sure, look\nat all the options.\n\n> My approach would be to get a version working. Then figure out how to\n> generate \"literally\" all table options, and work the process. The good news\n> is that at a certain point the resulting DDL should be \"comparable\" against\n> a ton of test tables.\n> \n> Where do we draw the lines? Does Table DDL include all indexes?\n> It should include constraints, clearly. I would not think it should have\n> triggers.\n> Literally everything within the <<CREATE TABLE X(...);>>. (ie, no ALTER ..\n> OWNER TO...)\n\nI'd look at the IMPORT FOREIGN SCHEMA stuff in postgres_fdw. We're\nalready largely answering these questions by what options that takes.\nTo some extent, the same is true of pg_dump, but at least postgres_fdw\nis already backend code and probably a bit simpler than the pg_dump\ncode. 
Still, looking at both would be a good idea.\n\n> Next, I would want psql \\st to simply call this?\n\nEh, that's an independent discussion and effort, especially because\npeople are possibly going to want that to generate the necessary ALTER\nTABLE commands from the result and not just a DROP/CREATE TABLE.\n \n> FWIW, we parse our pg_dump output, and store the objects as individual DDL\n> files.\n> So, I have about 1,000 tables to play with, for which I already know the\n> DDL that pg_dump uses.\n\nSure.\n\n> But it's a big commitment. I don't mind if it has a reasonable chance of\n> being accepted.\n\nYes, it's a large effort, no doubt.\n\n> I accept that I will make a few mistakes (and learn) along the way.\n> If there are ANY deal killers that would prevent a reasonable solution from\n> being accepted, please let me know.\n\nI don't think we can say one way or the other on this ...\n\nThanks,\n\nStephen", "msg_date": "Fri, 12 May 2023 20:36:59 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On 5/12/23 18:00, Kirk Wolak wrote:\n[snip]\n> Where do we draw the lines?\n\nAt other tables.\n\n> Does Table DDL include all indexes?\n\nAbsolutely!\n\n> It should include constraints, clearly.  I would not think it should have \n> triggers.\n\nDefinitely triggers.  And foreign keys.\n\n> Literally everything within the <<CREATE TABLE X(...);>>.  (ie, no ALTER \n> .. OWNER TO...)\n>\n\nALTER statements, too.  If CREATE TABLE ... LIKE ... 
{ INCLUDING | EXCLUDING \n} { COMMENTS | COMPRESSION | CONSTRAINTS | DEFAULTS | GENERATED | IDENTITY | \nINDEXES | STATISTICS | STORAGE | ALL } can do it, then so should SHOW CREATE \nTABLE.\n\n-- \nBorn in Arizona, moved to Babylonia.", "msg_date": "Sat, 13 May 2023 00:02:48 -0500", "msg_from": "Ron <ronljohnsonjr@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Sat, May 13, 2023 at 1:03 AM Ron <ronljohnsonjr@gmail.com> wrote:\n\n> On 5/12/23 18:00, Kirk Wolak wrote:\n>\n> [snip]\n>\n> Where do we draw the lines?\n>\n>\n> At other tables.\n>\n> Does Table DDL include all indexes?\n>\n>\n> Absolutely!\n>\n> It should include constraints, clearly. I would not think it should have\n> triggers.\n>\n>\n> Definitely triggers. And foreign keys.\n>\n> Literally everything within the <<CREATE TABLE X(...);>>. (ie, no ALTER\n> .. OWNER TO...)\n>\n>\n> ALTER statements, too. If CREATE TABLE ... LIKE ... { INCLUDING |\n> EXCLUDING } { COMMENTS | COMPRESSION | CONSTRAINTS | DEFAULTS | GENERATED |\n> IDENTITY | INDEXES | STATISTICS | STORAGE | ALL } can do it, then so should\n> SHOW CREATE TABLE.\n>\n> --\n> Born in Arizona, moved to Babylonia.\n>\n\nI can see the ALTER statements now. 
Which is why I asked.\nI don't like the idea of including the trigger DDL, because that would\nnever execute in a clean environment.\n(I've never used a tool that tried to do that when I've wanted the DDL)\nI can go either way on index creation.\n\nDoes this imply SQL SYNTAX like:\n\nSHOW CREATE TABLE <table_name>\n [ INCLUDING { ALL | INDEXES | SEQUENCES | ??? }]\n [EXCLUDING { PK | FK | COMMENTS | STORAGE | } ]\n [FOR {V11 | V12 | V13 | V14 | V15 }] ??\n?\n\nThe goal for me is to open the discussion, and then CONSTRAIN the focus.\n\nPersonally, the simple syntax:\nSHOW CREATE TABLE table1;\n\nShould give me a create table command with the table attributes and the\ncolumn attributes, FKs, PKs, Defaults. Etc.\nBut I would not expect it to generate index commands, etc.", "msg_date": "Sat, 13 May 2023 03:25:11 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Fri, May 12, 2023 at 8:37 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n> ..\n> I mean ... it's already in postgres_fdw, just not in a way that can be\n> returned to the user. I don't think I'd want this functionality to\n> depend on postgres_fdw or generally on an extension though, it should\n> be part of core in some fashion.\n>\n\nI will start with postgres_fdw then, but gladly review the other source...\nJust thinking about the essence of the syntax.\nSHOW CREATE TABLE abc(LIKE real_table); -- Output CREATE TABLE abc();\nusing real_table?\n\n\nI'd look at the IMPORT FOREIGN SCHEMA stuff in postgres_fdw. We're\n> already largely answering these questions by what options that takes.\n>\n\nWill do.\n\n> > But it's a big commitment. I don't mind if it has a reasonable chance of\n> > being accepted.\n>\n\n\n> Yes, it's a large effort, no doubt.\n>\n\nAt least there is a base of code to start with.\nI see a strong need to come up with a shell script to that could:\n\nFOR <every schema.table> DO\n psql -c \"SHOW... 
\\g | cat > <schema.table.DDL> \"\n pg_dump -- <schema.table> only | remove_comments_normalize | cat\n<schema.table.pg_dump>\n DIFF <schema.table.DDL> <schema.table.pg_dump>\n\nOf course, since our tests are usually all perl, a perl version.\nBut I would clearly want some heavy testing/validation.", "msg_date": "Sat, 13 May 2023 03:59:35 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On 5/13/23 02:25, Kirk Wolak wrote:\n> On Sat, May 13, 2023 at 1:03 AM Ron <ronljohnsonjr@gmail.com> wrote:\n>\n> On 5/12/23 18:00, Kirk Wolak wrote:\n> [snip]\n>> Where do we draw the lines?\n>\n> At other tables.\n>\n>> Does Table DDL include all indexes?\n>\n> Absolutely!\n>\n>> It should include constraints, clearly.  I would not think it should\n>> have triggers.\n>\n> Definitely triggers.  And foreign keys.\n>\n>> Literally everything within the <<CREATE TABLE X(...);>>.  (ie, no\n>> ALTER .. OWNER TO...)\n>>\n>\n> ALTER statements, too.  If CREATE TABLE ... LIKE ... { INCLUDING |\n> EXCLUDING } { COMMENTS | COMPRESSION | CONSTRAINTS | DEFAULTS |\n> GENERATED | IDENTITY | INDEXES | STATISTICS | STORAGE | ALL } can do\n> it, then so should SHOW CREATE TABLE.\n>\n> -- \n> Born in Arizona, moved to Babylonia.\n>\n>\n> I can see the ALTER statements now.  Which is why I asked.\n> I don't like the idea of including the trigger DDL, because that would \n> never execute in a clean environment.\n\nI would not be grumpy if trigger statements weren't included.\n\n> (I've never used a tool that tried to do that when I've wanted the DDL)\n> I can go either way on index creation.\n>\n> Does this imply SQL SYNTAX like:\n>\n> SHOW CREATE TABLE <table_name>\n>   [ INCLUDING { ALL | INDEXES |  SEQUENCES | ??? }]\n>   [EXCLUDING { PK | FK | COMMENTS | STORAGE | } ]\n>   [FOR {V11 | V12 | V13 | V14 | V15 }] ??\n> ?\n\n\"FOR {V...}\" is a complication too far, IMO.  
No one expects \"pg_dump \n--schema-only\" to have a --version= option, so one should not expect SHOW \nCREATE TABLE to have a \"FOR {V...}\" clause.\n\n> The goal for me  is to open the discussion, and then CONSTRAIN the focus.\n>\n> Personally, the simple syntax:\n> SHOW CREATE TABLE table1;\n>\n> Should give me a create table command with the table attributes and the \n> column attributes, FKs, PKs, Defaults. Etc.\n> But I would not expect it to generate index commands, etc.\n\n-- \nBorn in Arizona, moved to Babylonia.", "msg_date": "Sat, 13 May 2023 13:28:11 -0500", "msg_from": "Ron <ronljohnsonjr@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Sat, May 13, 2023, 3:25 AM Kirk Wolak <wolakk@gmail.com> wrote:\n\n> Does this imply SQL SYNTAX like:\n>\n> SHOW CREATE TABLE <table_name>\n> [ INCLUDING { ALL | INDEXES | SEQUENCES | ??? }]\n> [EXCLUDING { PK | FK | COMMENTS | STORAGE | } ]\n> [FOR {V11 | V12 | V13 | V14 | V15 }] ??\n> ?\n>\n\nPersonally, I would expect a function, like pg_get_tabledef(oid), to match\nthe other pg_get_*def functions instead of overloading SHOW. To me, this\nalso argues that we shouldn't include indexes because we already have a\npg_get_indexdef function.\n\n -Jeremy\n\n>\n>", "msg_date": "Sat, 13 May 2023 15:34:44 -0400", "msg_from": "Jeremy Smith <jeremy@musicsmith.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Sat, May 13, 2023 at 3:34 PM Jeremy Smith <jeremy@musicsmith.net> wrote:\n\n>\n>\n> On Sat, May 13, 2023, 3:25 AM Kirk Wolak <wolakk@gmail.com> wrote:\n>\n>> Does this imply SQL SYNTAX like:\n>>\n>> SHOW CREATE TABLE <table_name>\n>> [ INCLUDING { ALL | INDEXES | SEQUENCES | ??? }]\n>> [EXCLUDING { PK | FK | COMMENTS | STORAGE | } ]\n>> [FOR {V11 | V12 | V13 | V14 | V15 }] ??\n>> ?\n>>\n>\n> Personally, I would expect a function, like pg_get_tabledef(oid), to match\n> the other pg_get_*def functions instead of overloading SHOW. To me, this\n> also argues that we shouldn't include indexes because we already have a\n> pg_get_indexdef function.\n>\n> -Jeremy\n>\n+1\n\nIn fact, making it a function will make my life easier for testing, that's\nfor certain. I don't need to involve the parser,etc. Others can help with\nthat after the function works.\nThanks for the suggestion!", "msg_date": "Sun, 14 May 2023 02:20:07 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Fri, May 12, 2023 at 8:37 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n> ...\n> Yes, it's a large effort, no doubt.\n>\n>\nStephen, I started looking at the code.\nAnd I have the queries from \\set SHOW_HIDDEN\nthat psql uses. And also the pg_dump output. 
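\n\n(Side note for anyone reproducing this: the psql variable referred to is
presumably ECHO_HIDDEN -- \"SHOW_HIDDEN\" reads like a slip for it. A sketch
of the technique, with a placeholder table name:)

```sql
-- psql setting that prints the internal catalog queries behind the
-- \d-style commands before running them. Starting psql with the -E
-- flag has the same effect as the \set below.
\set ECHO_HIDDEN on
\d some_table   -- "some_table" is a placeholder, not a table from the thread
```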
My first table was an ID bigint NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITYpg_dump puts the decorations on the SEQUENCE\\dt puts that text as the \"Default\" valueBut the STRANGE part for me is the query I Assembled from the FDW returns nothing for extra attributes.And only seems to care about the \"GENERATED AS (%s) STORED\" syntax.For me, generating the INLINE syntax will produce the SEQUENCE automatically, so this is my preference.Let me know if I am missing anything... Please.Finally, I cannot GRASP this additional syntax:appendStringInfo(&buf, \"\\n) SERVER %s\\nOPTIONS (\",This is at the end of the create table syntax:CREATE TABLE %s ( ...  ) SERVER %s\\n OPTIONS (\"  ...\");Is this special \"FDW\" Decorations because I don't see those on the create table documentation?It's easy enough to ignore, but I don't want to miss something.Kirk...", "msg_date": "Mon, 15 May 2023 21:26:16 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Sun, May 14, 2023 at 2:20 AM Kirk Wolak <wolakk@gmail.com> wrote:\n\n> On Sat, May 13, 2023 at 3:34 PM Jeremy Smith <jeremy@musicsmith.net>\n> wrote:\n>\n>>\n>>\n>> On Sat, May 13, 2023, 3:25 AM Kirk Wolak <wolakk@gmail.com> wrote:\n>>\n>>> Does this imply SQL SYNTAX like:\n>>>\n>>> SHOW CREATE TABLE <table_name>\n>>> [ INCLUDING { ALL | INDEXES | SEQUENCES | ??? }]\n>>> [EXCLUDING { PK | FK | COMMENTS | STORAGE | } ]\n>>> [FOR {V11 | V12 | V13 | V14 | V15 }] ??\n>>> ?\n>>>\n>>\n>> Personally, I would expect a function, like pg_get_tabledef(oid), to\n>> match the other pg_get_*def functions instead of overloading SHOW. To me,\n>> this also argues that we shouldn't include indexes because we already have\n>> a pg_get_indexdef function.\n>>\n>> -Jeremy\n>>\n> +1\n>\n> In fact, making it a function will make my life easier for testing, that's\n> for certain. I don't need to involve the parser,etc. 
Others can help with\n> that after the function works.\n> Thanks for the suggestion!\n>\n\nI am moving this over to the Hackers Group.\nMy approach for now is to develop this as the \st command.\nAfter reviewing the code/output from the 3 sources (psql, fdw, and\npg_dump). This trivializes the approach,\nand requires the smallest set of changes (psql is already close with\nexisting queries, etc).\n\nAnd frankly, I would rather have an \st feature that handles 99% of the use\ncases then go 15yrs waiting for a perfect solution.\nOnce this works well for the group. Then, IMO, that would be the time to\ndiscuss moving it.\n\nSupport Or Objections Appreciated.\nKirk...", "msg_date": "Tue, 16 May 2023 11:33:47 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "Greetings,\n\n* Kirk Wolak (wolakk@gmail.com) wrote:\n> My approach for now is to develop this as the \st command.\n> After reviewing the code/output from the 3 sources (psql, fdw, and\n> pg_dump). This trivializes the approach,\n> and requires the smallest set of changes (psql is already close with\n> existing queries, etc).\n> \n> And frankly, I would rather have an \st feature that handles 99% of the use\n> cases then go 15yrs waiting for a perfect solution.\n> Once this works well for the group. 
Then, IMO, that would be the time to\n> discuss moving it.\n\nHaving this only available via psql seems like the least desirable\noption as then it wouldn't be available to any other callers..\n\nHaving it in libpq, on the other hand, would make it available to psql\nas well as any other utility, library, or language / driver which uses\nlibpq, including pg_dump..\n\nUsing libpq would also make sense from the perspective that libpq can be\nused to connect to a number of different major versions of PG and this\ncould work for them all in much the way that pg_dump does.\n\nThe downside with this approach is that drivers which don't use libpq\n(eg: JDBC) would have to re-implement this if they wanted to keep\nfeature parity with libpq..\n\nThanks,\n\nStephen", "msg_date": "Thu, 18 May 2023 19:53:58 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On 2023-05-18 Th 19:53, Stephen Frost wrote:\n> Greetings,\n>\n> * Kirk Wolak (wolakk@gmail.com) wrote:\n>> My approach for now is to develop this as the \st command.\n>> After reviewing the code/output from the 3 sources (psql, fdw, and\n>> pg_dump). This trivializes the approach,\n>> and requires the smallest set of changes (psql is already close with\n>> existing queries, etc).\n>>\n>> And frankly, I would rather have an \st feature that handles 99% of the use\n>> cases then go 15yrs waiting for a perfect solution.\n>> Once this works well for the group. 
Then, IMO, that would be the time to\n>> discuss moving it.\n> Having this only available via psql seems like the least desirable\n> option as then it wouldn't be available to any other callers..\n>\n> Having it in libpq, on the other hand, would make it available to psql\n> as well as any other utility, library, or language / driver which uses\n> libpq, including pg_dump..\n>\n> Using libpq would also make sense from the perspective that libpq can be\n> used to connect to a number of different major versions of PG and this\n> could work work for them all in much the way that pg_dump does.\n>\n> The downside with this apporach is that drivers which don't use libpq\n> (eg: JDBC) would have to re-implement this if they wanted to keep\n> feature parity with libpq..\n\n\nI think the ONLY place we should have this is in server side functions. \nMore than ten years ago I did some work in this area (see below), but \nit's one of those things that have been on my ever growing personal TODO \nlist\n\nSee <https://bitbucket.org/adunstan/retailddl/src/master/> and \n<https://www.youtube.com/watch?v=fBarFKOL3SI>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Fri, 19 May 2023 13:08:21 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": ">\n> I think the ONLY place we should have this is in server side functions.\n> More than ten years ago I did some work in this area (see below), but it's\n> one of those things that have been on my ever growing personal TODO list\n>\n> See <https://bitbucket.org/adunstan/retailddl/src/master/>\n> <https://bitbucket.org/adunstan/retailddl/src/master/> and\n> <https://www.youtube.com/watch?v=fBarFKOL3SI>\n> <https://www.youtube.com/watch?v=fBarFKOL3SI>\n>\n>\n>\n> Some additional backstory, as this has come up before...\n\nA direction discussion of a previous SHOW CREATE 
effort:\nhttps://www.postgresql.org/message-id/20190705163203.GD24679@fetter.org\n\nOther databases' implementations of SHOW CREATE came up in this discussion\nas well:\nhttps://www.postgresql.org/message-id/CADkLM=fxfsrHASKk_bY_A4uomJ1Te5MfGgD_rwwQfV8wP68ewg@mail.gmail.com\n\nClearly, there is customer demand for something like this, and has been for\na long time.", "msg_date": "Fri, 19 May 2023 16:57:44 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Fri, 2023-05-19 at 13:08 -0400, Andrew Dunstan wrote:\n> On 2023-05-18 Th 19:53, Stephen Frost wrote:\n> > * Kirk Wolak (wolakk@gmail.com) wrote:\n> > > My approach for now is to develop this as the \st command.\n> > > After reviewing the code/output from the 3 sources (psql, fdw, and\n> > > pg_dump).\n> >\n> > Having this only available via psql seems like the least desirable\n> > option as then it wouldn't be available to any other callers..\n> > \n> > Having it in libpq, on the other hand, would make it available to psql\n> > as well as any other utility, library, or language / driver which uses\n> > libpq, including 
pg_dump..\n> \n> I think the ONLY place we should have this is in server side functions.\n\n+1\n\nA function \"pg_get_tabledef\" would blend nicely into the following list:\n\n\\df pg_get_*def\n List of functions\n Schema │ Name │ Result data type │ Argument data types │ Type \n════════════╪════════════════════════════════╪══════════════════╪═══════════════════════╪══════\n pg_catalog │ pg_get_constraintdef │ text │ oid │ func\n pg_catalog │ pg_get_constraintdef │ text │ oid, boolean │ func\n pg_catalog │ pg_get_functiondef │ text │ oid │ func\n pg_catalog │ pg_get_indexdef │ text │ oid │ func\n pg_catalog │ pg_get_indexdef │ text │ oid, integer, boolean │ func\n pg_catalog │ pg_get_partition_constraintdef │ text │ oid │ func\n pg_catalog │ pg_get_partkeydef │ text │ oid │ func\n pg_catalog │ pg_get_ruledef │ text │ oid │ func\n pg_catalog │ pg_get_ruledef │ text │ oid, boolean │ func\n pg_catalog │ pg_get_statisticsobjdef │ text │ oid │ func\n pg_catalog │ pg_get_triggerdef │ text │ oid │ func\n pg_catalog │ pg_get_triggerdef │ text │ oid, boolean │ func\n pg_catalog │ pg_get_viewdef │ text │ oid │ func\n pg_catalog │ pg_get_viewdef │ text │ oid, boolean │ func\n pg_catalog │ pg_get_viewdef │ text │ oid, integer │ func\n pg_catalog │ pg_get_viewdef │ text │ text │ func\n pg_catalog │ pg_get_viewdef │ text │ text, boolean │ func\n(17 rows)\n\n\nA server function can be conveniently called from any client code.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 19 May 2023 23:06:00 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "\nOn 19/05/2023 19:08, Andrew Dunstan wrote:\n> I think the ONLY place we should have this is in server side \n> functions. 
More than ten years ago I did some work in this area (see \n> below), but it's one of those things that have been on my ever growing \n> personal TODO list\n>\n> See <https://bitbucket.org/adunstan/retailddl/src/master/> and \n> <https://www.youtube.com/watch?v=fBarFKOL3SI>\n>\nI have my own implementation of SHOW CREATE as a ddlx extension \navailable at https://github.com/lacanoid/pgddl\n\nFar from perfect but getting better and it seems good enough to be used \nby some people...\n\nŽ.\n\n\n\n\n\n", "msg_date": "Fri, 19 May 2023 23:31:00 +0200", "msg_from": "Ziga <ziga@ljudmila.org>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "Greetings,\n\n* Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> On Fri, 2023-05-19 at 13:08 -0400, Andrew Dunstan wrote:\n> > On 2023-05-18 Th 19:53, Stephen Frost wrote:\n> > > * Kirk Wolak (wolakk@gmail.com) wrote:\n> > > > My approach for now is to develop this as the \\st command.\n> > > > After reviewing the code/output from the 3 sources (psql, fdw, and\n> > > > pg_dump).\n> > >\n> > > Having this only available via psql seems like the least desirable\n> > > option as then it wouldn't be available to any other callers..\n> > > \n> > > Having it in libpq, on the other hand, would make it available to psql\n> > > as well as any other utility, library, or language / driver which uses\n> > > libpq, including pg_dump..\n> > \n> > I think the ONLY place we should have this is in server side functions.\n> \n> +1\n\n... but it already exists in pg_dump, so I'm unclear why folks seem to\nbe pushing to have a duplicate of it in core? 
We certainly can't remove\nit from pg_dump even if we add it to core because pg_dump has to\nunderstand the changes between major versions.\n\n> A function \"pg_get_tabledef\" would blend nicely into the following list:\n> \n> \\df pg_get_*def\n> List of functions\n> Schema │ Name │ Result data type │ Argument data types │ Type \n> ════════════╪════════════════════════════════╪══════════════════╪═══════════════════════╪══════\n> pg_catalog │ pg_get_constraintdef │ text │ oid │ func\n> pg_catalog │ pg_get_constraintdef │ text │ oid, boolean │ func\n> pg_catalog │ pg_get_functiondef │ text │ oid │ func\n> pg_catalog │ pg_get_indexdef │ text │ oid │ func\n> pg_catalog │ pg_get_indexdef │ text │ oid, integer, boolean │ func\n> pg_catalog │ pg_get_partition_constraintdef │ text │ oid │ func\n> pg_catalog │ pg_get_partkeydef │ text │ oid │ func\n> pg_catalog │ pg_get_ruledef │ text │ oid │ func\n> pg_catalog │ pg_get_ruledef │ text │ oid, boolean │ func\n> pg_catalog │ pg_get_statisticsobjdef │ text │ oid │ func\n> pg_catalog │ pg_get_triggerdef │ text │ oid │ func\n> pg_catalog │ pg_get_triggerdef │ text │ oid, boolean │ func\n> pg_catalog │ pg_get_viewdef │ text │ oid │ func\n> pg_catalog │ pg_get_viewdef │ text │ oid, boolean │ func\n> pg_catalog │ pg_get_viewdef │ text │ oid, integer │ func\n> pg_catalog │ pg_get_viewdef │ text │ text │ func\n> pg_catalog │ pg_get_viewdef │ text │ text, boolean │ func\n> (17 rows)\n> \n> \n> A server function can be conveniently called from any client code.\n\nClearly any client using libpq can conveniently call code which is in\nlibpq.\n\nThanks,\n\nStephen", "msg_date": "Sat, 20 May 2023 13:26:29 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Sat, May 20, 2023 at 10:26 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> > A server function can be conveniently called from any client code.\n>\n> Clearly any client using libpq can 
conveniently call code which is in\n> libpq.\n>\n>\nClearly there are clients that don't use libpq. JDBC comes to mind.\n\nDavid J.", "msg_date": "Sat, 20 May 2023 10:31:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Sat, May 20, 2023 at 10:26 AM Stephen Frost <sfrost@snowman.net> wrote:\n>> Clearly any client using libpq can conveniently call code which is in\n>> libpq.\n\n> Clearly there are clients that don't use libpq. JDBC comes to mind.\n\nYeah. I'm also rather concerned about the bloat factor; it's\nhardly unlikely that this line of development would double the\nsize of libpq, to add functionality that only a small minority\nof applications would use. A client-side implementation also has\nno choice but to cope with multiple server versions, and to\nfigure out what it's going to do about a too-new server.\nUp to now we broke compatibility between libpq and server maybe\nonce every couple decades, but it'd be once a year for this.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 20 May 2023 14:19:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "Greetings,\n\nOn Sat, May 20, 2023 at 13:32 David G. 
Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Sat, May 20, 2023 at 10:26 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n>> > A server function can be conveniently called from any client code.\n>>\n>> Clearly any client using libpq can conveniently call code which is in\n>> libpq.\n>>\n>\n> Clearly there are clients that don't use libpq. JDBC comes to mind.\n>\n\nIndeed … as I mentioned up-thread already.\n\nAre we saying that we want this to be available server side, and largely\nduplicated, specifically to cater to non-libpq users? I’ll put out there,\nagain, the idea that perhaps we put it into the common library then and\nmake it available via both libpq and as a server side function ..?\n\nWe also have similar code in postgres_fdw.. ideally, imv anyway, we’d not\nend up with three copies of it.\n\nThanks,\n\nStephen", "msg_date": "Sat, 20 May 2023 14:33:19 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Sat, May 20, 2023 at 2:33 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> On Sat, May 20, 2023 at 13:32 David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n>\n>> On Sat, May 20, 2023 at 10:26 AM Stephen Frost <sfrost@snowman.net>\n>> wrote:\n>>\n>>> > A server function can be conveniently called from any client code.\n>>>\n>>> Clearly any client using libpq can conveniently call code which is in\n>>> libpq.\n>>>\n>>\n>> Clearly there are clients that don't use libpq. JDBC comes to mind.\n>>\n>\n> Indeed … as I mentioned up-thread already.\n>\n> Are we saying that we want this to be available server side, and largely\n> duplicated, specifically to cater to non-libpq users? I’ll put out there,\n> again, the idea that perhaps we put it into the common library then and\n> make it available via both libpq and as a server side function ..?\n>\n> We also have similar code in postgres_fdw.. ideally, imv anyway, we’d not\n> end up with three copies of it.\n>\n> Thanks,\n>\n> Stephen\n>\n\nFirst, as the person chasing this down, and a JDBC user, I really would\nprefer pg_get_tabledef() as Laurenz mentioned.\n\nNext, I have reviewed all 3 implementations (pg_dump [most appropriate],\npsql \d (very similar), and the FDW which is \"way off\",\nsince it actually focuses on \"CREATE FOREIGN TABLE\" exclusively, and\nalready fails to handle many pieces not required in\ncreating a \"real\" table, as it creates a \"reflection\" of table.\n\nI am using pg_dump as my source of truth. But I noticed it does not create\n\"TEMPORARY\" tables with that syntax.\n[Leading to a question on mutating the pg_temp_# schema name back to\npg_temp. 
or just stripping it, in favor of the TEMPORARY]\n\nI was surprised to see ~ 2,000 lines of code in the FDW and in psql...\nWhereas pg_dump is shorter because it gets more detailed\ntable information in a structure passed in.\n\nI would love to leverage existing code, in the end. But I want to take my\ntime on this, and become intimate with the details.\nEach of the above 3 approaches have different goals. And I would prefer\nthe lowest risk:reward possible, and the least expensive\nmaintenance. Having it run server side hides a ton of details, and as Tom\npointed out, obviates DDL versioning control for other server versions.\n\nThanks for the references to the old discussions. I have queued them up to\nreview.\n\nKirk...", "msg_date": "Sun, 21 May 2023 01:58:53 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Fri, May 19, 2023 at 1:08 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> I think the ONLY place we should have this is in server side functions.\n> More than ten years ago I did some work in this area (see below), but it's\n> one of those things that have been on my ever growing personal TODO list\n>\n> See <https://bitbucket.org/adunstan/retailddl/src/master/>\n> <https://bitbucket.org/adunstan/retailddl/src/master/> and\n> <https://www.youtube.com/watch?v=fBarFKOL3SI>\n> <https://www.youtube.com/watch?v=fBarFKOL3SI>\n>\nAndrew,\n Thanks for sharing that. I reviewed your code. 10yrs, clearly it's not\nworking (as-is, but close), something interesting about the\nstructure you ended up in. You check the type of the object and redirect\naccordingly at the top level. Hmmm...\nWhat I liked was that each type gets handled (I was focused on \"table\"),\nbut I realized similarities.\n\n I don't know what the group would think, but I like the thought of\ncalling this, and having it \"Correct\" to call the appropriate function.\nBut not sure it will stand. It does make obvious that some of these should\nbe spun out as \"pg_get_typedef\"..\npg_get_typedef\npg_get_domaindef\npg_get_sequencedef\n\n Finally, since you started this a while back, part of me is \"leaning\"\ntowards a function:\npg_get_columndef\n\n Which returns a properly formatted column for a table, type, or domain?\n(one of the reasons for this, is that this is\nthe function with the highest probability to change, and potentially the\neasiest to share reusability).\n\n Finally, I am curious about your opinion. I noticed you used the\ninternal pg_ tables, versus the information_schema...\nI am *thinking* that the information_schema will be more stable over\ntime... 
Thoughts?\n\nThank you for sharing your thoughts...\nKirk...", "msg_date": "Mon, 22 May 2023 01:19:21 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "po 22. 5. 
2023 v 7:19 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n\n> On Fri, May 19, 2023 at 1:08 PM Andrew Dunstan <andrew@dunslane.net>\n> wrote:\n>\n>> I think the ONLY place we should have this is in server side functions.\n>> More than ten years ago I did some work in this area (see below), but it's\n>> one of those things that have been on my ever growing personal TODO list\n>>\n>> See <https://bitbucket.org/adunstan/retailddl/src/master/>\n>> <https://bitbucket.org/adunstan/retailddl/src/master/> and\n>> <https://www.youtube.com/watch?v=fBarFKOL3SI>\n>> <https://www.youtube.com/watch?v=fBarFKOL3SI>\n>>\n> Andrew,\n> Thanks for sharing that. I reviewed your code. 10yrs, clearly it's not\n> working (as-is, but close), something interesting about the\n> structure you ended up in. You check the type of the object and redirect\n> accordingly at the top level. Hmmm...\n> What I liked was that each type gets handled (I was focused on \"table\"),\n> but I realized similarities.\n>\n> I don't know what the group would think, but I like the thought of\n> calling this, and having it \"Correct\" to call the appropriate function.\n> But not sure it will stand. It does make obvious that some of these\n> should be spun out as \"pg_get_typedef\"..\n> pg_get_typedef\n> pg_get_domaindef\n> pg_get_sequencedef\n>\n> Finally, since you started this a while back, part of me is \"leaning\"\n> towards a function:\n> pg_get_columndef\n>\n> Which returns a properly formatted column for a table, type, or domain?\n> (one of the reasons for this, is that this is\n> the function with the highest probability to change, and potentially the\n> easiest to share reusability).\n>\n> Finally, I am curious about your opinion. I noticed you used the\n> internal pg_ tables, versus the information_schema...\n> I am *thinking* that the information_schema will be more stable over\n> time... Thoughts?\n>\n\nI think inside the core, the information schema is never used. 
And there\nwas a performance issue (fixed in PostgreSQL 12), that blocked index usage.\n\nRegards\n\nPavel\n\n\n\n> Thank you for sharing your thoughts...\n> Kirk...\n>\n>\n\npo 22. 5. 2023 v 7:19 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:On Fri, May 19, 2023 at 1:08 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\nI think the ONLY place we should have this is in server side\n functions. More than ten years ago I did some work in this area\n (see below), but it's one of those things that have been on my\n ever growing personal TODO list\nSee <https://bitbucket.org/adunstan/retailddl/src/master/>\n and <https://www.youtube.com/watch?v=fBarFKOL3SI>Andrew,  Thanks for sharing that.  I reviewed your code.  10yrs, clearly it's not working (as-is, but close), something interesting about thestructure you ended up in.  You check the type of the object and redirect accordingly at the top level.  Hmmm...What I liked was that each type gets handled (I was focused on \"table\"), but I realized similarities.  I don't know what the group would think, but I like the thought of calling this, and having it \"Correct\" to call the appropriate function.But not sure it will stand.  It does make obvious that some of these should be spun out as \"pg_get_typedef\"..pg_get_typedefpg_get_domaindefpg_get_sequencedef  Finally, since you started this a while back, part of me is \"leaning\" towards a function:pg_get_columndef  Which returns a properly formatted column for a table, type, or domain? (one of the reasons for this, is that this isthe function with the highest probability to change, and potentially the easiest to share reusability).  Finally, I am curious about your opinion.  I noticed you used the internal pg_ tables, versus the information_schema...I am *thinking* that the information_schema will be more stable over time... Thoughts?I think inside the core, the information schema is never used.  
And there was a performance issue (fixed in PostgreSQL 12), that blocked index usage.RegardsPavelThank you for sharing your thoughts...Kirk...", "msg_date": "Mon, 22 May 2023 11:24:21 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On 2023-05-22 Mo 05:24, Pavel Stehule wrote:\n>\n>\n> po 22. 5. 2023 v 7:19 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n>\n> On Fri, May 19, 2023 at 1:08 PM Andrew Dunstan\n> <andrew@dunslane.net> wrote:\n>\n> I think the ONLY place we should have this is in server side\n> functions. More than ten years ago I did some work in this\n> area (see below), but it's one of those things that have been\n> on my ever growing personal TODO list\n>\n> See <https://bitbucket.org/adunstan/retailddl/src/master/>\n> <https://bitbucket.org/adunstan/retailddl/src/master/> and\n> <https://www.youtube.com/watch?v=fBarFKOL3SI>\n> <https://www.youtube.com/watch?v=fBarFKOL3SI>\n>\n> Andrew,\n>   Thanks for sharing that.  I reviewed your code. 10yrs, clearly\n> it's not working (as-is, but close), something interesting about the\n> structure you ended up in.  You check the type of the object and\n> redirect accordingly at the top level. Hmmm...\n> What I liked was that each type gets handled (I was focused on\n> \"table\"), but I realized similarities.\n>\n>   I don't know what the group would think, but I like the thought\n> of calling this, and having it \"Correct\" to call the appropriate\n> function.\n> But not sure it will stand.  It does make obvious that some of\n> these should be spun out as \"pg_get_typedef\"..\n> pg_get_typedef\n> pg_get_domaindef\n> pg_get_sequencedef\n>\n>   Finally, since you started this a while back, part of me is\n> \"leaning\" towards a function:\n> pg_get_columndef\n>\n>   Which returns a properly formatted column for a table, type, or\n> domain? 
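Kirk's pg_get_columndef idea — one shared routine that renders a single column for a table, type, or domain — is easy to prototype client-side to settle the output format before writing any C. A hypothetical sketch (the function name and argument list are invented for illustration):

```python
def format_column(name, type_name, not_null=False, collation=None, default=None):
    """Hypothetical pg_get_columndef sketch: render one column clause.
    Clause order (NOT NULL before COLLATE) follows the pg_get_tabledef
    output discussed in this thread; CREATE TABLE accepts either order."""
    parts = [name, type_name]
    if not_null:
        parts.append("NOT NULL")
    if collation:
        parts.append(f'COLLATE "{collation}"')
    if default is not None:
        parts.append(f"DEFAULT {default}")
    return " ".join(parts)
```

A shared helper like this would give the table, type, and domain deparsers one place to change when column syntax grows.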
(one of the reasons for this, is that this is\n> the function with the highest probability to change, and\n> potentially the easiest to share reusability).\n>\n>   Finally, I am curious about your opinion.  I noticed you used\n> the internal pg_ tables, versus the information_schema...\n> I am *thinking* that the information_schema will be more stable\n> over time... Thoughts?\n>\n>\n> I think inside the core, the information schema is never used.  And \n> there was a performance issue (fixed in PostgreSQL 12), that blocked \n> index usage.\n>\n>\n\nA performant server side set of functions would be written in C and \nfollow the patterns in ruleutils.c.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-22 Mo 05:24, Pavel Stehule\n wrote:\n\n\n\n\n\n\n\n\npo 22. 5. 2023 v 7:19\n odesílatel Kirk Wolak <wolakk@gmail.com>\n napsal:\n\n\n\nOn Fri, May 19, 2023 at 1:08 PM Andrew\n Dunstan <andrew@dunslane.net>\n wrote:\n\n\n\n\nI think the ONLY place we should have this is in\n server side functions. More than ten years ago I\n did some work in this area (see below), but it's\n one of those things that have been on my ever\n growing personal TODO list\n\nSee <https://bitbucket.org/adunstan/retailddl/src/master/>\n and <https://www.youtube.com/watch?v=fBarFKOL3SI>\n\n\nAndrew,\n  Thanks for sharing that.  I reviewed your code. \n 10yrs, clearly it's not working (as-is, but close),\n something interesting about the\nstructure you ended up in.  You check the type of\n the object and redirect accordingly at the top level. \n Hmmm...\nWhat I liked was that each type gets handled (I was\n focused on \"table\"), but I realized similarities.\n\n\n  I don't know what the group would think, but I\n like the thought of calling this, and having it\n \"Correct\" to call the appropriate function.\nBut not sure it will stand.  
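Whatever shape the server-side functions take, following the ruleutils.c patterns means getting details like identifier quoting right everywhere DDL text is assembled. A minimal re-implementation of the quote_identifier lexical rule; this sketch deliberately omits the keyword check the real C function also performs:

```python
import re

def quote_ident(name: str) -> str:
    """Quote an SQL identifier the way ruleutils.c's quote_identifier does,
    minus its keyword check: a plain lower-case identifier passes through,
    anything else is double-quoted with embedded quotes doubled."""
    if re.fullmatch(r"[a-z_][a-z0-9_]*", name):
        return name
    return '"' + name.replace('"', '""') + '"'
```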
It does make obvious\n that some of these should be spun out as\n \"pg_get_typedef\"..\npg_get_typedef\npg_get_domaindef\npg_get_sequencedef\n\n\n  Finally, since you started this a while back,\n part of me is \"leaning\" towards a function:\n pg_get_columndef\n\n\n  Which returns a properly formatted column for a\n table, type, or domain? (one of the reasons for this,\n is that this is\nthe function with the highest probability to\n change, and potentially the easiest to share\n reusability).\n\n\n  Finally, I am curious about your opinion.  I\n noticed you used the internal pg_ tables, versus the\n information_schema...\nI am *thinking* that the information_schema will be\n more stable over time... Thoughts?\n\n\n\n\n\nI think inside the core, the information schema is never\n used.  And there was a performance issue (fixed in\n PostgreSQL 12), that blocked index usage.\n\n\n\n\n\n\n\n\n\nA performant server side set of functions would be written in C\n and follow the patterns in ruleutils.c.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 22 May 2023 07:52:24 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Mon, 22 May 2023 at 13:52, Andrew Dunstan <andrew@dunslane.net> wrote:\n> A performant server side set of functions would be written in C and follow the patterns in ruleutils.c.\n\nWe have lots of DDL ruleutils in our Citus codebase:\nhttps://github.com/citusdata/citus/blob/main/src/backend/distributed/deparser/citus_ruleutils.c\n\nI'm pretty sure we'd be happy to upstream those if that meant, we\nwouldn't have to update them for every postgres release.\n\nWe also have the master_get_table_ddl_events UDF, which does what SHOW\nCREATE TABLE would do.\n\n\n", "msg_date": "Thu, 25 May 2023 15:23:40 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Adding 
SHOW CREATE TABLE" }, { "msg_contents": "On 2023-05-25 Th 09:23, Jelte Fennema wrote:\n> On Mon, 22 May 2023 at 13:52, Andrew Dunstan<andrew@dunslane.net> wrote:\n>> A performant server side set of functions would be written in C and follow the patterns in ruleutils.c.\n> We have lots of DDL ruleutils in our Citus codebase:\n> https://github.com/citusdata/citus/blob/main/src/backend/distributed/deparser/citus_ruleutils.c\n>\n> I'm pretty sure we'd be happy to upstream those if that meant, we\n> wouldn't have to update them for every postgres release.\n>\n> We also have the master_get_table_ddl_events UDF, which does what SHOW\n> CREATE TABLE would do.\n\n\nSounds promising. I'd love to see a patch.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-25 Th 09:23, Jelte Fennema\n wrote:\n\n\nOn Mon, 22 May 2023 at 13:52, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\nA performant server side set of functions would be written in C and follow the patterns in ruleutils.c.\n\n\n\nWe have lots of DDL ruleutils in our Citus codebase:\nhttps://github.com/citusdata/citus/blob/main/src/backend/distributed/deparser/citus_ruleutils.c\n\nI'm pretty sure we'd be happy to upstream those if that meant, we\nwouldn't have to update them for every postgres release.\n\nWe also have the master_get_table_ddl_events UDF, which does what SHOW\nCREATE TABLE would do.\n\n\n\n\nSounds promising. 
I'd love to see a patch.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 26 May 2023 08:33:19 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Thu, May 25, 2023 at 9:23 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n\n> On Mon, 22 May 2023 at 13:52, Andrew Dunstan <andrew@dunslane.net> wrote:\n> > A performant server side set of functions would be written in C and\n> follow the patterns in ruleutils.c.\n>\n> We have lots of DDL ruleutils in our Citus codebase:\n>\n> https://github.com/citusdata/citus/blob/main/src/backend/distributed/deparser/citus_ruleutils.c\n>\n> I'm pretty sure we'd be happy to upstream those if that meant, we\n> wouldn't have to update them for every postgres release.\n>\n> We also have the master_get_table_ddl_events UDF, which does what SHOW\n> CREATE TABLE would do.\n>\n\nJelte, this looks promising, although it is a radically different approach\n(Querying from it to get the details).\n\nI was just getting ready to write up a bit of an RFC... On the following\napproach...\n\nI have been trying to determine how to \"focus\" this effort to move it\nforward. Here is where I am at:\n1) It should be 100% server side (and psql \\st would only work by calling\nthe server side code, if it was there)\n In reviewing... 
This simplifies the implementation to the current\nversion of PG DDL being generated.\n Also, as others have mentioned, it should be C based code, and use\nonly the internal tables.\n2) Since pg_get_{ triggerdef | indexdef | constraintdef } already exists, I\nwas strongly recommending to not include those.\n -- Although including the inlined constraints would be fine by me\n(potentially a boolean to turn it off?)\n3) Then focusing the reloptions WITH (%s)\n\nIt appears CITUS code handles ALL of this on a cursory review!\n\nThe ONLY thing I did not see was \"CREATE TEMPORARY \" syntax? If you did\nthis on a TEMP table,\ndoes it generate normal table syntax or TEMPORARY TABLE syntax???\n\nSo, from my take... This is a great example of solving the problem with\nexisting \"Production Quality\" Code...\nI like it...\n\nCan this get turned into a Patch? Were you offering this code up for\nothers (me?) to pull, and work into a patch?\n[If I do the patch, I am not sure it gives you the value of reducing what\nCITUS has to maintain. But it dawns on\nme that you might be pushing a much bigger patch... 
But I would take that,\nas I think there is other value in there]\n\nOthers???\n\nThanks,\n\nKirk...\nPS: It dawned on me that if pg_dump had used server side code to generate\nits DDL, its complexity would drop.\n\nOn Thu, May 25, 2023 at 9:23 AM Jelte Fennema <postgres@jeltef.nl> wrote:On Mon, 22 May 2023 at 13:52, Andrew Dunstan <andrew@dunslane.net> wrote:\n> A performant server side set of functions would be written in C and follow the patterns in ruleutils.c.\n\nWe have lots of DDL ruleutils in our Citus codebase:\nhttps://github.com/citusdata/citus/blob/main/src/backend/distributed/deparser/citus_ruleutils.c\n\nI'm pretty sure we'd be happy to upstream those if that meant, we\nwouldn't have to update them for every postgres release.\n\nWe also have the master_get_table_ddl_events UDF, which does what SHOW\nCREATE TABLE would do.Jelte, this looks promising, although it is a radically different approach (Querying from it to get the details).I was just getting ready to write up a bit of  an RFC... On the following approach...I have been trying to determine how to \"focus\" this effort to move it forward.  Here is where I am at:1) It should be 100% server side (and psql \\st would only work by calling the server side code, if it was there)     In reviewing... This simplifies the implementation to the current version of PG DDL being generated.     Also, as others have mentioned, it should be C based code, and use only the internal tables.2) Since pg_get_{ triggerdef | indexdef | constraintdef } already exists, I was strongly recommending to not include those.    -- Although including the inlined constraints would be fine by me (potentially a boolean to turn it off?)3) Then focusing the reloptions WITH (%s)It appears CITUS code handles ALL of this on a cursory review!The ONLY thing I did not see was \"CREATE TEMPORARY \" syntax?  If you did this on a  TEMP table,does it generate normal table syntax or TEMPORARY TABLE syntax???So, from my take... 
This is a great example of solving the problem with existing \"Production Quality\" Code...I like it...Can this get turned into a Patch?  Were you offering this code up for others (me?) to pull, and work into a patch?[If I do the patch, I am not sure it gives you the value of reducing what CITUS has to maintain.  But it dawns onme that you might be pushing a much bigger patch...  But I would take that, as I think there is other value in there]Others???Thanks,Kirk...PS: It dawned on me that if pg_dump had used server side code to generate its DDL, its complexity would drop.", "msg_date": "Thu, 1 Jun 2023 12:57:25 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On 2023-06-01 Th 12:57, Kirk Wolak wrote:\n>\n> PS: It dawned on me that if pg_dump had used server side code to \n> generate its DDL, its complexity would drop.\n\n\nMaybe, that remains to be seen. pg_dump needs to produce SQL that is \nsuitable for the target database, not the source database.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-01 Th 12:57, Kirk Wolak\n wrote:\n\n\n\n\n\nPS: It dawned on me that if pg_dump had used server side\n code to generate its DDL, its complexity would drop.\n\n\n\n\n\nMaybe, that remains to be seen. pg_dump needs to produce SQL that\n is suitable for the target database, not the source database.  
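Targeting a version amounts to a gate around each piece of syntax the tool knows how to emit: a dump tool can only produce clauses it was built to understand. A hedged sketch of that shape (the feature table below is illustrative, not pg_dump's actual gating logic):

```python
# Hypothetical feature table: the server version (in PG_VERSION_NUM form)
# at which a given piece of syntax appeared, and hence the minimum version
# a dump tool must have been built against to emit it.
FEATURES = {
    "partitioned_tables": 100000,  # declarative partitioning arrived in PG 10
    "generated_columns": 120000,   # GENERATED ALWAYS AS ... arrived in PG 12
}

def emit_feature(feature: str, dump_version_num: int) -> bool:
    """True only if the tool's own build version knows the syntax."""
    introduced = FEATURES.get(feature)
    return introduced is not None and dump_version_num >= introduced
```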
\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 1 Jun 2023 15:13:10 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Thu, Jun 1, 2023 at 3:13 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> On 2023-06-01 Th 12:57, Kirk Wolak wrote:\n>\n> PS: It dawned on me that if pg_dump had used server side code to generate\n> its DDL, its complexity would drop.\n>\n> Maybe, that remains to be seen. pg_dump needs to produce SQL that is\n> suitable for the target database, not the source database.\n>\n\nFirst, pg_dump has some special needs in addressing how it creates tables,\nto be able to load the data BEFORE indexing, and constraining (lest you\nhave to struggle with dependencies of FKs, etc etc)...\n\nBut version checking is interesting... Because I found no way to tell\npg_dump what DB to target. The code I did see was checking what server\noptions were available. (I clearly could have missed something)... But\nexactly how would that work? All it knows is who it is (pg_dump V14, for\nexample), so the best case is that it ignores new things that would be in\nV15 or V16 because it doesn't even know what to check for or what to do.\nWhich, to me, makes it more of a side-effect than a feature, no?\n\nAnd if it used a server side solution, it would get an interesting \"super\npower\": even pg_dump V20 could dump a V21 db with V21 syntax\nenhancements. (I cannot say if that is desirable or not as I lack 20yrs of\nhistory here). Finally, implementing this change at this\npoint would impact anyone using the NEW version to dump an OLD\nversion without the server side support... so that migration may never\nhappen.
Or only after the only supported DBs have the required server side\nfunctions...\n\nThe feedback is welcome...\n\nOn Thu, Jun 1, 2023 at 3:13 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\nOn 2023-06-01 Th 12:57, Kirk Wolak\n wrote:\n\n\nPS: It dawned on me that if pg_dump had used server side\n code to generate its DDL, its complexity would drop.\n\n\n\nMaybe, that remains to be seen. pg_dump needs to produce SQL that\n is suitable for the target database, not the source database. First, pg_dump has some special needs in addressing how it creates tables, to be able to load the data BEFORE indexing, and constraining (lest you have to struggle with dependencies of FKs, etc etc)...But version checking is interesting... Because I found no way to tell pg_dump what DB to target.  The code I did see was checking what server options were available. (I clearly could have missed something)...  But exactly how would that work?  All it knows is who it is (pg_dump V14, for example.  So the best case is that it ignores new things that would be in V15, V16 because it doesn't even know what to check for or what to do).  Which, to me, makes it more of a side-effect than a feature, no? And that if it used a server side solution, it gets an interesting \"super power\".  In that even pg_dump V20 could dump a V21 db with V21 syntax enhancements.  (I cannot say if that is desirable or not as I lack 20yrs of history here).  Finally, the thoughts of implementing this change at this point, when it would impact anyone using the NEW version to dump an OLD version without the server side support...  So that migration may never happen.  
Or only after the only supported DBs have the required server side functions...The feedback is welcome...", "msg_date": "Thu, 1 Jun 2023 16:39:14 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On 2023-06-01 Th 16:39, Kirk Wolak wrote:\n> On Thu, Jun 1, 2023 at 3:13 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 2023-06-01 Th 12:57, Kirk Wolak wrote:\n>\n>> PS: It dawned on me that if pg_dump had used server side code to\n>> generate its DDL, its complexity would drop.\n>\n> Maybe, that remains to be seen. pg_dump needs to produce SQL that\n> is suitable for the target database, not the source database.\n>\n>\n> First, pg_dump has some special needs in addressing how it creates \n> tables, to be able to load the data BEFORE indexing, and constraining \n> (lest you have to struggle with dependencies of FKs, etc etc)...\n>\n> But version checking is interesting... Because I found no way to tell \n> pg_dump what DB to target.\n>\n\nThe target version is implicitly the version it's built from.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-01 Th 16:39, Kirk Wolak\n wrote:\n\n\n\n\nOn Thu, Jun 1, 2023 at 3:13 PM Andrew Dunstan\n <andrew@dunslane.net>\n wrote:\n\n\n\n\nOn 2023-06-01 Th 12:57, Kirk Wolak wrote:\n\n\n\n\nPS: It dawned on me that if pg_dump had used\n server side code to generate its DDL, its\n complexity would drop.\n\n\n\nMaybe, that remains to be seen. pg_dump needs to\n produce SQL that is suitable for the target database,\n not the source database. \n\n\n\n\n First, pg_dump has some special needs in addressing\n how it creates tables, to be able to load the data BEFORE\n indexing, and constraining (lest you have to struggle with\n dependencies of FKs, etc etc)...\n\n\nBut version checking is interesting... Because I found no\n way to tell pg_dump what DB to target.  
\n\n\n\n\n\n\n\nThe target version is implicitly the version it's built from.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 1 Jun 2023 16:46:07 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Thu, Jun 1, 2023 at 4:46 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> On 2023-06-01 Th 16:39, Kirk Wolak wrote:\n>\n> On Thu, Jun 1, 2023 at 3:13 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>> On 2023-06-01 Th 12:57, Kirk Wolak wrote:\n>>\n>> PS: It dawned on me that if pg_dump had used server side code to generate\n>> its DDL, its complexity would drop.\n>>\n>> Maybe, that remains to be seen. pg_dump needs to produce SQL that is\n>> suitable for the target database, not the source database.\n>>\n>\n> First, pg_dump has some special needs in addressing how it creates tables,\n> to be able to load the data BEFORE indexing, and constraining (lest you\n> have to struggle with dependencies of FKs, etc etc)...\n>\n> But version checking is interesting... Because I found no way to tell\n> pg_dump what DB to target.\n>\n> The target version is implicitly the version it's built from.\n>\nAndrew,\n Thank you. Someone else confirmed for me that the code is designed to\ncreate accurate DDL for the pg_dump version.\nSo, for example, WITH OIDs are not included with later versions of pg_dump,\neven though they are hitting a server with them.\nGreat to know.\n\n I like that CITUS offered up something. I think that might be the\ncurrent best approach. It's a win-win. 
They get something,\nwe get something.\n\nKirk...\n\nOn Thu, Jun 1, 2023 at 4:46 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\nOn 2023-06-01 Th 16:39, Kirk Wolak\n wrote:\n\n\nOn Thu, Jun 1, 2023 at 3:13 PM Andrew Dunstan\n <andrew@dunslane.net>\n wrote:\n\n\n\n\nOn 2023-06-01 Th 12:57, Kirk Wolak wrote:\n\n\n\n\nPS: It dawned on me that if pg_dump had used\n server side code to generate its DDL, its\n complexity would drop.\n\n\n\nMaybe, that remains to be seen. pg_dump needs to\n produce SQL that is suitable for the target database,\n not the source database. \n\n\n\n\n First, pg_dump has some special needs in addressing\n how it creates tables, to be able to load the data BEFORE\n indexing, and constraining (lest you have to struggle with\n dependencies of FKs, etc etc)...\n\n\nBut version checking is interesting... Because I found no\n way to tell pg_dump what DB to target.  \n\nThe target version is implicitly the version it's built from.Andrew,  Thank you.  Someone else confirmed for me that the code is designed to create accurate DDL for the pg_dump version.So, for example, WITH OIDs are not included with later versions of pg_dump, even though they are hitting a server with them.Great to know.  I like that CITUS offered up something.  I think that might be the current best approach.  It's a win-win.  They get something,we get something.Kirk...", "msg_date": "Fri, 2 Jun 2023 10:57:06 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Thu, 1 Jun 2023 at 18:57, Kirk Wolak <wolakk@gmail.com> wrote:\n> Can this get turned into a Patch? Were you offering this code up for others (me?) to pull, and work into a patch?\n> [If I do the patch, I am not sure it gives you the value of reducing what CITUS has to maintain. But it dawns on\n> me that you might be pushing a much bigger patch... 
But I would take that, as I think there is other value in there]\n\nAttached is a patch which adds the relevant functions from Citus to\nPostgres (it compiles without warnings on my machine). I checked with\na few others on the Citus team at Microsoft and everyone thought that\nupstreaming this was a good idea, because it's quite a bit of work to\nupdate with every postgres release.\n\nTo set expectations though, I don't really have time to work on this\npatch. So if you can take it over from here that would be great.\n\nThe patch only contains the C functions which generate SQL based on\nsome oids. The wrappers such as the master_get_table_ddl_events\nfunction were too hard for me to pull out of Citus code, because they\nintegrated a lot with other pieces. But the bulk of the logic is in\nthe functions in this patch. Essentially all that\nmaster_get_table_ddl_events does is call the functions in this patch\nin the right order.\n\n> The ONLY thing I did not see was \"CREATE TEMPORARY \" syntax? If you did this on a TEMP table,\n> does it generate normal table syntax or TEMPORARY TABLE syntax???\n\nYeah, the Citus code only handles things that Citus supports in\ndistributed tables. Which is quite a lot, but indeed not everything\nyet. Temporary and inherited tables are not supported in this code\nafaik. Possibly more. See the commented out\nEnsureRelationKindSupported for what should be supported (normal\ntables and partitioned tables afaik).", "msg_date": "Mon, 5 Jun 2023 13:43:42 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Mon, Jun 5, 2023 at 7:43 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n\n> On Thu, 1 Jun 2023 at 18:57, Kirk Wolak <wolakk@gmail.com> wrote:\n> > Can this get turned into a Patch? Were you offering this code up for\n> others (me?) 
to pull, and work into a patch?\n> > [If I do the patch, I am not sure it gives you the value of reducing\n> what CITUS has to maintain. But it dawns on\n> > me that you might be pushing a much bigger patch... But I would take\n> that, as I think there is other value in there]\n>\n> Attached is a patch which adds the relevant functions from Citus to\n> Postgres (it compiles without warnings on my machine). I checked with\n> a few others on the Citus team at Microsoft and everyone thought that\n> upstreaming this was a good idea, because it's quite a bit of work to\n> update with every postgres release.\n>\n> To set expectations though, I don't really have time to work on this\n> patch. So if you can take it over from here that would be great.\n>\n> The patch only contains the C functions which generate SQL based on\n> some oids. The wrappers such as the master_get_table_ddl_events\n> function were too hard for me to pull out of Citus code, because they\n> integrated a lot with other pieces. But the bulk of the logic is in\n> the functions in this patch. Essentially all that\n> master_get_table_ddl_events does is call the functions in this patch\n> in the right order.\n>\n> > The ONLY thing I did not see was \"CREATE TEMPORARY \" syntax? If you did\n> this on a TEMP table,\n> > does it generate normal table syntax or TEMPORARY TABLE syntax???\n>\n> Yeah, the Citus code only handles things that Citus supports in\n> distributed tables. Which is quite a lot, but indeed not everything\n> yet. Temporary and inherited tables are not supported in this code\n> afaik. Possibly more. See the commented out\n> EnsureRelationKindSupported for what should be supported (normal\n> tables and partitioned tables afaik).\n>\n\nJelte,\n Thank you for this.\n Let me see what I can do with this, it seems like a solid starting point.\n At this point, based on previous feedback, finding a way to make\nget_tabledef() etc. 
to work as functions is my goal.\nI will see how inherited tables and temporary tables will be dealt with.\n\n Hopefully, this transfer works to please anyone concerned with\nintegrating this code into our project from the Citus code.\n\nKirk...\n\nOn Mon, Jun 5, 2023 at 7:43 AM Jelte Fennema <postgres@jeltef.nl> wrote:On Thu, 1 Jun 2023 at 18:57, Kirk Wolak <wolakk@gmail.com> wrote:\n> Can this get turned into a Patch?  Were you offering this code up for others (me?) to pull, and work into a patch?\n> [If I do the patch, I am not sure it gives you the value of reducing what CITUS has to maintain.  But it dawns on\n> me that you might be pushing a much bigger patch...  But I would take that, as I think there is other value in there]\n\nAttached is a patch which adds the relevant functions from Citus to\nPostgres (it compiles without warnings on my machine). I checked with\na few others on the Citus team at Microsoft and everyone thought that\nupstreaming this was a good idea, because it's quite a bit of work to\nupdate with every postgres release.\n\nTo set expectations though, I don't really have time to work on this\npatch. So if you can take it over from here that would be great.\n\nThe patch only contains the C functions which generate SQL based on\nsome oids. The wrappers such as the master_get_table_ddl_events\nfunction were too hard for me to pull out of Citus code, because they\nintegrated a lot with other pieces. But the bulk of the logic is in\nthe functions in this patch. Essentially all that\nmaster_get_table_ddl_events does is call the functions in this patch\nin the right order.\n\n> The ONLY thing I did not see was \"CREATE TEMPORARY \" syntax?  If you did this on a  TEMP table,\n> does it generate normal table syntax or TEMPORARY TABLE syntax???\n\nYeah, the Citus code only handles things that Citus supports in\ndistributed tables. Which is quite a lot, but indeed not everything\nyet. 
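Jelte noted that master_get_table_ddl_events essentially just calls the deparse functions in the right order, which means the ordering itself is the simple part. A sketch of that dependency ordering (the phase numbers are my reading of typical DDL dependencies, not Citus's exact list):

```python
# Hypothetical phase numbers: an object must be emitted after everything it
# depends on (a column DEFAULT nextval(...) needs its sequence first, a
# constraint or index needs the table first, and so on).
DDL_PHASE = {
    "sequence": 0,
    "table": 1,
    "constraint": 2,
    "index": 3,
    "trigger": 4,
}

def order_ddl_events(events):
    """events: iterable of (kind, sql) pairs; returns the SQL in emit order.
    sorted() is stable, so events of the same kind keep their input order."""
    return [sql for _, sql in sorted(events, key=lambda e: DDL_PHASE[e[0]])]
```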
Temporary and inherited tables are not supported in this code\nafaik. Possibly more. See the commented out\nEnsureRelationKindSupported for what should be supported (normal\ntables and partitioned tables afaik). Jelte,  Thank you for this.    Let me see what I can do with this, it seems like a solid starting point.  At this point, based on previous feedback, finding a way to make get_tabledef() etc. to work as functions is my goal.I will see how inherited tables and temporary tables will be dealt with.  Hopefully, this transfer works to please anyone concerned with integrating this code into our project from the Citus code.Kirk...", "msg_date": "Wed, 21 Jun 2023 20:52:32 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Wed, Jun 21, 2023 at 8:52 PM Kirk Wolak <wolakk@gmail.com> wrote:\n\n> On Mon, Jun 5, 2023 at 7:43 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n>> On Thu, 1 Jun 2023 at 18:57, Kirk Wolak <wolakk@gmail.com> wrote:\n>> > Can this get turned into a Patch? Were you offering this code up for\n>> others (me?) to pull, and work into a patch?\n>> > [If I do the patch, I am not sure it gives you the value of reducing\n>> what CITUS has to maintain. But it dawns on\n>> > me that you might be pushing a much bigger patch... But I would take\n>> that, as I think there is other value in there]\n>>\n>\n\n> Yeah, the Citus code only handles things that Citus supports in\n>> distributed tables. Which is quite a lot, but indeed not everything\n>> yet. Temporary and inherited tables are not supported in this code\n>> afaik. Possibly more. See the commented out\n>> EnsureRelationKindSupported for what should be supported (normal\n>> tables and partitioned tables afaik).\n>>\n>\n>\nOkay, apologies for the long delay on this. I have the code Jelte\nsubmitted working. And I have (almost) figured out how to add the function\nso it shows up in the pg_catalog... 
(I edited files I should not have, I\nneed to know the proper process... Anyone...)\n\nNot sure if it is customary to attach the code when asking about stuff.\nFor the most part, it was what Jelte Gave us with a pg_get_tabledef()\nwrapper to call...\n\nHere is the output it produces for *select\npg_get_tabledef('pg_class'::regclass); * (Feedback Welcome)\n\nCREATE TABLE pg_class (oid oid NOT NULL, relname name NOT NULL COLLATE \"C\",\nrelnamespace oid NOT NULL, reltype oid NOT NULL, reloftype oid NOT NULL,\nrelowner oid NOT NULL, relam oid NOT NULL, relfilenode oid NOT NULL,\nreltablespace oid NOT NULL, relpages integer NOT NULL, reltuples real NOT\nNULL, relallvisible integer NOT NULL, reltoastrelid oid NOT NULL,\nrelhasindex boolean NOT NULL, relisshared boolean NOT NULL, relpersistence\n\"char\" NOT NULL, relkind \"char\" NOT NULL, relnatts smallint NOT NULL,\nrelchecks smallint NOT NULL, relhasrules boolean NOT NULL, relhastriggers\nboolean NOT NULL, relhassubclass boolean NOT NULL, relrowsecurity boolean\nNOT NULL, relforcerowsecurity boolean NOT NULL, relispopulated boolean NOT\nNULL, relreplident \"char\" NOT NULL, relispartition boolean NOT NULL,\nrelrewrite oid NOT NULL, relfrozenxid xid NOT NULL, relminmxid xid NOT\nNULL, relacl aclitem[], reloptions text[] COLLATE \"C\", relpartbound\npg_node_tree COLLATE \"C\") USING heap\n\n==\nMy Comments/Questions:\n1) I would prefer Legible output, like below\n2) I would prefer to leave off COLLATE \"C\" IFF that is the DB Default\n3) The USING heap... I want to pull UNLESS the value is NOT the default\n(That's a theme in my comments)\n4) I *THINK* including the schema would be nice?\n5) This version will work with a TEMP table, but NOT EMIT \"TEMPORARY\"...\nThoughts? Is emitting [pg_temp.] good enough?\n6) This version enumerates sequence values (Drop always, or Drop if they\nare the default values?)\n7) Should I enable the pg_get_seqdef() code\n8) It does NOT handle Inheritance (Yet... Is this important? 
Is it okay to\njust give the table structure for this table?)\n9) I have not tested against Partitions, etc... I SIMPLY want initial\nfeedback on Formatting\n\n-- Legible:\nCREATE TABLE pg_class (oid oid NOT NULL,\n relname name NOT NULL COLLATE \"C\",\n relnamespace oid NOT NULL,\n reltype oid NOT NULL,\n ...\n reloptions text[] COLLATE \"C\",\n relpartbound pg_node_tree COLLATE \"C\"\n)\n\n-- Too verbose with \"*DEFAULT*\" Sequence Values:\nCREATE TABLE t1 (id bigint GENERATED BY DEFAULT AS IDENTITY *(INCREMENT BY\n1 MINVALUE 1 MAXVALUE 9223372036854775807 START WITH 1 CACHE 1 NO CYCLE)*\nNOT NULL,\n f1 text\n) WITH (autovacuum_vacuum_cost_delay='0', fillfactor='80',\nautovacuum_vacuum_insert_threshold='-1',\nautovacuum_analyze_threshold='500000000',\nautovacuum_vacuum_threshold='500000000',\nautovacuum_vacuum_scale_factor='1.5')\n\nThanks,\n\nKirk...\nPS: After I get feedback on Formatting the output, etc. I will gladly\ngenerate a new .patch file and send it along. Otherwise Jelte gets 100% of\nthe credit, and I don't want to look like I am changing that.\n\nOn Wed, Jun 21, 2023 at 8:52 PM Kirk Wolak <wolakk@gmail.com> wrote:On Mon, Jun 5, 2023 at 7:43 AM Jelte Fennema <postgres@jeltef.nl> wrote:On Thu, 1 Jun 2023 at 18:57, Kirk Wolak <wolakk@gmail.com> wrote:\n> Can this get turned into a Patch?  Were you offering this code up for others (me?) to pull, and work into a patch?\n> [If I do the patch, I am not sure it gives you the value of reducing what CITUS has to maintain.  But it dawns on\n> me that you might be pushing a much bigger patch...  But I would take that, as I think there is other value in there] Yeah, the Citus code only handles things that Citus supports in\ndistributed tables. Which is quite a lot, but indeed not everything\nyet. Temporary and inherited tables are not supported in this code\nafaik. Possibly more. 
See the commented out\nEnsureRelationKindSupported for what should be supported (normal\ntables and partitioned tables afaik).", "msg_date": "Fri, 30 Jun 2023 13:56:53 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" }, { "msg_contents": "On Fri, Jun 30, 2023 at 1:56 PM Kirk Wolak <wolakk@gmail.com> wrote:\n\n> On Wed, Jun 21, 2023 at 8:52 PM Kirk Wolak <wolakk@gmail.com> wrote:\n>\n>> On Mon, Jun 5, 2023 at 7:43 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n>>\n>>> On Thu, 1 Jun 2023 at 18:57, Kirk Wolak <wolakk@gmail.com> wrote:\n>>>\n>>\nDefinitely have the questions from the previous email, but I CERTAINLY\nappreciate this output.\n(Don't like the +, but pg_get_viewdef() creates the view the same way)...\nWill psql doing \\st pg_class be able to just call/output this so that the\noutput is nice and clean?\n\nAt this point... I will keep pressing forward, cleaning things up. And\nthen send a patch for others to play with....\n(Probably bad timing with wrapping up V16)\n\n\n\n*select pg_get_tabledef('pg_class'::regclass);*\n pg_get_tabledef\n----------------------------------------\nCREATE TABLE pg_class (oid oid NOT NULL,+\n relname name NOT NULL COLLATE \"C\", +\n relnamespace oid NOT NULL, +\n reltype oid NOT NULL, +\n reloftype oid NOT NULL, +\n relowner oid NOT NULL, +\n relam oid NOT NULL, +\n relfilenode oid NOT NULL, +\n reltablespace oid NOT NULL, +\n relpages integer NOT NULL, +\n reltuples real NOT NULL, +\n relallvisible integer NOT NULL, +\n reltoastrelid oid NOT NULL, +\n relhasindex boolean NOT NULL, +\n relisshared boolean NOT NULL, +\n relpersistence \"char\" NOT NULL, +\n relkind \"char\" NOT NULL, +\n relnatts smallint NOT NULL, +\n relchecks smallint NOT NULL, +\n relhasrules boolean NOT NULL, +\n relhastriggers boolean NOT NULL, +\n relhassubclass boolean NOT NULL, +\n relrowsecurity boolean NOT NULL, +\n relforcerowsecurity boolean NOT NULL, +\n relispopulated boolean NOT NULL, +\n relreplident \"char\" 
NOT NULL, +\n relispartition boolean NOT NULL, +\n relrewrite oid NOT NULL, +\n relfrozenxid xid NOT NULL, +\n relminmxid xid NOT NULL, +\n relacl aclitem[], +\n reloptions text[] COLLATE \"C\", +\n relpartbound pg_node_tree COLLATE \"C\" +\n) USING heap", "msg_date": "Sat, 1 Jul 2023 17:41:25 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding SHOW CREATE TABLE" } ]
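The "-- Legible" layout Kirk asks for in the thread above (one attribute definition per line, indented, with a comma after every line but the last and the closing parenthesis on its own line) is mostly a string-building exercise. Below is a minimal standalone C sketch of just that formatting; `format_table_def()` is a hypothetical helper for illustration, not the `pg_get_tabledef()` implementation from the patch.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * format_table_def: build a CREATE TABLE statement in the "legible"
 * one-column-per-line layout discussed in the thread.  Caller frees
 * the returned buffer.  Illustration only, not patch code.
 */
static char *format_table_def(const char *table, const char **cols, int ncols)
{
    size_t cap = strlen(table) + 32;    /* header, parens, NUL */
    char *buf, *p;
    int i;

    for (i = 0; i < ncols; i++)
        cap += strlen(cols[i]) + 8;     /* indent, comma, newline */

    buf = malloc(cap);
    p = buf + sprintf(buf, "CREATE TABLE %s (\n", table);
    for (i = 0; i < ncols; i++)
        p += sprintf(p, "    %s%s\n", cols[i], (i < ncols - 1) ? "," : "");
    sprintf(p, ")");
    return buf;
}
```

Compared with the single-line output quoted above, this keeps each attribute on its own line, which also makes diffs of successive table definitions readable.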
[ { "msg_contents": "Hi,\n\nCurrently the example uses the following order of calls:\n\n StartTransactionCommand();\n SPI_connect();\n PushActiveSnapshot(...);\n\n ...\n\n SPI_finish();\n PopActiveSnapshot();\n CommitTransactionCommand();\n\nThis could be somewhat misleading. Typically one expects something to be freed\nin a reverse order compared to initialization. This creates a false impression\nthat PushActiveSnapshot(...) _should_ be called after SPI_connect().\n\nThe patch changes the order to:\n\n StartTransactionCommand();\n PushActiveSnapshot(...);\n SPI_connect();\n\n ...\n\n SPI_finish();\n PopActiveSnapshot();\n CommitTransactionCommand();\n\n... and also clarifies that the order of PushActiveSnapshot(...) and\nSPI_connect() is not important.\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 12 May 2023 18:39:04 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "[PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "Hi,\n\n> The patch changes the order to:\n>\n> StartTransactionCommand();\n> PushActiveSnapshot(...);\n> SPI_connect();\n>\n> ...\n>\n> SPI_finish();\n> PopActiveSnapshot();\n> CommitTransactionCommand();\n>\n> ... and also clarifies that the order of PushActiveSnapshot(...) and\n> SPI_connect() is not important.\n\nAdditionally I noticed that the check:\n\n```\n if (!process_shared_preload_libraries_in_progress)\n return;\n```\n\n... was misplaced in _PG_init(). 
Here is the patch v2 which fixes this too.\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Sat, 3 Jun 2023 14:09:26 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "On Sat, Jun 03, 2023 at 02:09:26PM +0300, Aleksander Alekseev wrote:\n>\n> Additionally I noticed that the check:\n>\n> ```\n> if (!process_shared_preload_libraries_in_progress)\n> return;\n> ```\n>\n> ... was misplaced in _PG_init(). Here is the patch v2 which fixes this too.\n\nI'm pretty sure that this is intentional. The worker can be launched\ndynamically and in that case it still needs a GUC for the naptime.\n\n\n", "msg_date": "Sat, 3 Jun 2023 19:15:47 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "Hi Julien,\n\n> I'm pretty sure that this is intentional. The worker can be launched\n> dynamically and in that case it still needs a GUC for the naptime.\n\nThe dynamic worker also is going to need worker_spi_database, however\nthe corresponding GUC declaration is placed below the check.\n\nPerhaps we should just say that the extension shouldn't be used\nwithout shared_preload_libraies. We are not testing whether it works\nin such a case anyway.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sat, 3 Jun 2023 14:38:26 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "On Sat, Jun 03, 2023 at 02:38:26PM +0300, Aleksander Alekseev wrote:\n> Hi Julien,\n>\n> > I'm pretty sure that this is intentional. 
The worker can be launched\n> > dynamically and in that case it still needs a GUC for the naptime.\n>\n> The dynamic worker also is going to need worker_spi_database, however\n> the corresponding GUC declaration is placed below the check.\n\nYes, and that's because that GUC is declared as PGC_POSTMASTER so you can't\ndeclare such a GUC dynamically.\n>\n> Perhaps we should just say that the extension shouldn't be used\n> without shared_preload_libraies. We are not testing whether it works\n> in such a case anyway.\n\nThe patch that added the database name clearly was committed without bothering\ntesting that scenario, but it would be better to fix it and add some coverage\nrather than remove some example of how to use dynamic bgworkers. Maybe with a\nassign hook to make sure it's never changed once assigned or something along\nthose lines.\n\nThat being said this module is really naive and has so many problems that I\ndon't think it's actually helpful as coding guidelines for anyone who wants to\ncreate a non toy extension using bgworkers.\n\n\n", "msg_date": "Sat, 3 Jun 2023 20:28:03 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "Hi,\n\n> That being said this module is really naive and has so many problems that I\n> don't think it's actually helpful as coding guidelines for anyone who wants to\n> create a non toy extension using bgworkers.\n\nAgree. It is a simple example and I don't think it's going to be\nuseful to make a complicated one out of it.\n\nThe order of the calls it currently uses however may be extremely\nconfusing for newcomers. 
It creates an impression that this particular\norder is extremely important while in fact it's not and it takes time\nto figure this out.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sat, 3 Jun 2023 15:34:30 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "On Sat, Jun 03, 2023 at 03:34:30PM +0300, Aleksander Alekseev wrote:\n> Agree. It is a simple example and I don't think it's going to be\n> useful to make a complicated one out of it.\n\nIt does not have to be complicated, but I definitely agree that we'd\nbetter spend some efforts in improving it as a whole especially\nknowing that this is mentioned on the docs as an example that one\ncould rely on.\n\n> The order of the calls it currently uses however may be extremely\n> confusing for newcomers. It creates an impression that this particular\n> order is extremely important while in fact it's not and it takes time\n> to figure this out.\n\n+ * The order of PushActiveSnapshot() and SPI_connect() is not really\n+ * important.\n\nFWIW, looking at the patch, I don't think that this is particularly\nuseful.\n--\nMichael", "msg_date": "Sat, 3 Jun 2023 18:35:00 -0400", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "Hi Michael,\n\nThanks for your feedback.\n\n> + * The order of PushActiveSnapshot() and SPI_connect() is not really\n> + * important.\n>\n> FWIW, looking at the patch, I don't think that this is particularly\n> useful.\n\nFair enough, here is the corrected patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 5 Jun 2023 11:15:39 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "On Sat, Jun 
03, 2023 at 06:35:00PM -0400, Michael Paquier wrote:\n> It does not have to be complicated, but I definitely agree that we'd\n> better spend some efforts in improving it as a whole especially\n> knowing that this is mentioned on the docs as an example that one\n> could rely on.\n\n+1. I know I've used worker_spi as a reference for writing background\nworkers before.\n\nIMHO it'd be better if the patch documented the places where the ordering\nreally does matter instead of hoping extension authors will understand the\nreasoning behind the proposed reordering. I agree that the current code\ncould lead folks to think that PushActiveSnapshot must go after\nSPI_connect, but wouldn't the reverse ordering just give folks the opposite\nimpression?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 12 Jun 2023 16:30:15 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "Hi,\n\n> On Sat, Jun 03, 2023 at 06:35:00PM -0400, Michael Paquier wrote:\n> > It does not have to be complicated, but I definitely agree that we'd\n> > better spend some efforts in improving it as a whole especially\n> > knowing that this is mentioned on the docs as an example that one\n> > could rely on.\n>\n> +1. I know I've used worker_spi as a reference for writing background\n> workers before.\n\nThanks for the feedback.\n\n> I agree that the current code\n> could lead folks to think that PushActiveSnapshot must go after\n> SPI_connect, but wouldn't the reverse ordering just give folks the opposite\n> impression?\n\nThis is the exact reason why the original patch had an explicit\ncomment that the ordering is not important in this case. 
It was argued\nhowever that the comment is redundant and thus it was removed.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 13 Jun 2023 12:34:09 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "On Tue, Jun 13, 2023 at 12:34:09PM +0300, Aleksander Alekseev wrote:\n>\n> > I agree that the current code\n> > could lead folks to think that PushActiveSnapshot must go after\n> > SPI_connect, but wouldn't the reverse ordering just give folks the opposite\n> > impression?\n>\n> This is the exact reason why the original patch had an explicit\n> comment that the ordering is not important in this case. It was argued\n> however that the comment is redundant and thus it was removed.\n\nI also don't think that a comment is really worthwhile. If there were any hard\ndependency, it should be mentioned in the various functions comments as that's\nthe first place one should look at when using a function they're not familiar\nwith.\n\nThat being said, I still don't understand why you focus on this tiny and not\nreally important detail while the module itself is actually broken (for dynamic\nbgworker without s_p_l) and also has some broken behaviors with regards to the\nnaptime that are way more likely to hurt third party code that was written\nusing this module as an example.\n\n\n", "msg_date": "Tue, 13 Jun 2023 19:58:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "On Tue, Jun 13, 2023 at 07:58:02PM +0800, Julien Rouhaud wrote:\n> That being said, I still don't understand why you focus on this tiny and not\n> really important detail while the module itself is actually broken (for dynamic\n> bgworker without s_p_l) and also has some broken behaviors with regards to the\n> naptime that are way more 
likely to hurt third party code that was written\n> using this module as an example.\n\nAre you or Aleksander interested in helping improve this module? I'm happy\nto help review and/or write patches.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 13 Jun 2023 11:15:45 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "Hi Nathan,\n\n> > That being said, I still don't understand why you focus on this tiny and not\n> > really important detail while the module itself is actually broken (for dynamic\n> > bgworker without s_p_l) and also has some broken behaviors with regards to the\n> > naptime that are way more likely to hurt third party code that was written\n> > using this module as an example.\n>\n> Are you or Aleksander interested in helping improve this module? I'm happy\n> to help review and/or write patches.\n\nUnfortunately I'm not familiar with the problem in respect of naptime\nJulien is referring to. If you know what this problem is and how to\nfix it, go for it. I'll review and test the code then. I can write the\npart of the patch that fixes the part regarding dynamic workers if\nnecessary.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 14 Jun 2023 14:08:03 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "On Wed, Jun 14, 2023 at 02:08:03PM +0300, Aleksander Alekseev wrote:\n>\n> Unfortunately I'm not familiar with the problem in respect of naptime\n> Julien is referring to. If you know what this problem is and how to\n> fix it, go for it. I'll review and test the code then. 
I can write the\n> part of the patch that fixes the part regarding dynamic workers if\n> necessary.\n\nOh, sorry I thought it was somewhat evident.\n\nThe naptime GUC description says:\n\n> Duration between each check (in seconds).\n\nand the associated code does a single\n\nWaitLatch(..., WL_LATCH_SET | WL_TIMEOUT, ...)\n\nSo unless I'm missing something nothing prevents the check being run way more\noften than expected if the latch keeps being set.\n\nSimilarly, my understanding of \"duration between checks\" is that a naptime of 1\nmin means that the check should be run a minute apart, assuming it's possible.\nAs is, the checks are run naptime + query execution time apart, which doesn't\nseem right. Obviously there's isn't much you can do if the query execution\nlasts for more than naptime, apart from detecting it and raising a warning to\nlet users know that their configuration isn't adequate (or that there's some\nother problem like some lock contention or something), similarly to e.g.\ncheckpoint_warning.\n\nNote I haven't looked closely at this module otherwise, so I can't say if there\nare some other problems around.\n\n\n", "msg_date": "Fri, 16 Jun 2023 10:17:00 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "> On 14 Jun 2023, at 13:08, Aleksander Alekseev <aleksander@timescale.com> wrote:\n\n>> Are you or Aleksander interested in helping improve this module? I'm happy\n>> to help review and/or write patches.\n> \n> Unfortunately I'm not familiar with the problem in respect of naptime\n> Julien is referring to. If you know what this problem is and how to\n> fix it, go for it. I'll review and test the code then. 
I can write the\n> part of the patch that fixes the part regarding dynamic workers if\n> necessary.\n\nHave you had a chance to look at these suggestions, and Juliens reply\ndownthread, in order to produce a new version of the patch?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 10 Jul 2023 10:18:09 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "Hi,\n\n> Have you had a chance to look at these suggestions, and Juliens reply\n> downthread, in order to produce a new version of the patch?\n\nThanks for the reminder. No I haven't. Please feel free marking this\nCF entry as RwF if this will be helpful. We may reopen it if and when\nnecessary.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 10 Jul 2023 15:40:44 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" }, { "msg_contents": "> On 10 Jul 2023, at 14:40, Aleksander Alekseev <aleksander@timescale.com> wrote:\n\n>> Have you had a chance to look at these suggestions, and Juliens reply\n>> downthread, in order to produce a new version of the patch?\n> \n> Thanks for the reminder. No I haven't. Please feel free marking this\n> CF entry as RwF if this will be helpful. We may reopen it if and when\n> necessary.\n\nOk, will do. Please feel free to resubmit to a future CF when there is a new\nversion for consideration.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 10 Jul 2023 14:44:31 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Slight improvement of worker_spi.c example" } ]
[ { "msg_contents": "This is intended as a follow-up to de4d456 [0]. I noticed that c3afe8c\nintroduced another \"must have privileges\" error message that I think should\nbe adjusted to use the new style introduced in de4d456. I've attached a\nsmall patch for this.\n\nWhile looking around for other such error messages, I found a few dozen\n\"must be superuser\" errors that might be improved with the new style. If\nfolks feel this is worthwhile, I'll put together a patch.\n\n[0] https://postgr.es/m/20230126002251.GA1506128%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 12 May 2023 13:37:21 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "improve more permissions-related error messages" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> This is intended as a follow-up to de4d456 [0]. I noticed that c3afe8c\n> introduced another \"must have privileges\" error message that I think should\n> be adjusted to use the new style introduced in de4d456. I've attached a\n> small patch for this.\n\n+1\n\n> While looking around for other such error messages, I found a few dozen\n> \"must be superuser\" errors that might be improved with the new style. If\n> folks feel this is worthwhile, I'll put together a patch.\n\nThe new style is better for cases where we've broken out a predefined role\nthat has the necessary privilege. I'm not sure it's worth troubling\nwith cases that are still just \"must be superuser\". 
It seems like\nyou'd mostly just be creating work for the translation team.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 May 2023 16:43:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: improve more permissions-related error messages" }, { "msg_contents": "On Fri, May 12, 2023 at 04:43:08PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> While looking around for other such error messages, I found a few dozen\n>> \"must be superuser\" errors that might be improved with the new style. If\n>> folks feel this is worthwhile, I'll put together a patch.\n> \n> The new style is better for cases where we've broken out a predefined role\n> that has the necessary privilege. I'm not sure it's worth troubling\n> with cases that are still just \"must be superuser\". It seems like\n> you'd mostly just be creating work for the translation team.\n\nMakes sense, thanks.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 May 2023 13:46:18 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: improve more permissions-related error messages" } ]
[ { "msg_contents": "Hello hackers,\n\nI found \"...\" confusing in some comments, so this patch changes it to\n\"cstring\". Which seems to be the intention after all.\n\nBest regards,\nSteve", "msg_date": "Sun, 14 May 2023 18:07:23 -0300", "msg_from": "Steve Chavez <steve@supabase.io>", "msg_from_op": true, "msg_subject": "'converts internal representation to \"...\"' comment is confusing" }, { "msg_contents": "Steve Chavez <steve@supabase.io> writes:\n> I found \"...\" confusing in some comments, so this patch changes it to\n> \"cstring\". Which seems to be the intention after all.\n\nThose comments are Berkeley-era, making them probably a decade older\nthan the \"cstring\" pseudotype (invented in b663f3443). Perhaps what\nyou suggest is an improvement, but I'm not sure that appealing to\noriginal intent can make the case.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 14 May 2023 21:36:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 'converts internal representation to \"...\"' comment is confusing" }, { "msg_contents": "Thanks a lot for the clarification!\n\nThe \"...\" looks enigmatic right now. I think cstring would save newcomers\nsome head-scratching.\n\nOpen to suggestions though.\n\nBest regards,\nSteve\n\nOn Sun, 14 May 2023 at 22:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Steve Chavez <steve@supabase.io> writes:\n> > I found \"...\" confusing in some comments, so this patch changes it to\n> > \"cstring\". Which seems to be the intention after all.\n>\n> Those comments are Berkeley-era, making them probably a decade older\n> than the \"cstring\" pseudotype (invented in b663f3443). Perhaps what\n> you suggest is an improvement, but I'm not sure that appealing to\n> original intent can make the case.\n>\n> regards, tom lane\n>", "msg_date": "Sun, 14 May 2023 23:50:45 -0300", "msg_from": "Steve Chavez <steve@supabase.io>", "msg_from_op": true, "msg_subject": "Re: 'converts internal representation to \"...\"' comment is confusing" }, { "msg_contents": "On Sun, May 14, 2023 at 9:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Steve Chavez <steve@supabase.io> writes:\n> > I found \"...\" confusing in some comments, so this patch changes it to\n> > \"cstring\". Which seems to be the intention after all.\n>\n> Those comments are Berkeley-era, making them probably a decade older\n> than the \"cstring\" pseudotype (invented in b663f3443). 
Perhaps what\n> you suggest is an improvement, but I'm not sure that appealing to\n> original intent can make the case.\n\nFWIW, it does seem like an improvement to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 16 May 2023 08:49:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 'converts internal representation to \"...\"' comment is confusing" }, { "msg_contents": "Hello hackers,\n\nTom, could we apply this patch since Robert agrees it's an improvement?\n\nBest regards,\nSteve\n\nOn Tue, 16 May 2023 at 07:49, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, May 14, 2023 at 9:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Steve Chavez <steve@supabase.io> writes:\n> > > I found \"...\" confusing in some comments, so this patch changes it to\n> > > \"cstring\". Which seems to be the intention after all.\n> >\n> > Those comments are Berkeley-era, making them probably a decade older\n> > than the \"cstring\" pseudotype (invented in b663f3443). Perhaps what\n> > you suggest is an improvement, but I'm not sure that appealing to\n> > original intent can make the case.\n>\n> FWIW, it does seem like an improvement to me.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Sat, 24 Jun 2023 15:52:11 -0500", "msg_from": "Steve Chavez <steve@supabase.io>", "msg_from_op": true, "msg_subject": "Re: 'converts internal representation to \"...\"' comment is confusing" }, { "msg_contents": "On 24/06/2023 23:52, Steve Chavez wrote:\n> On Tue, 16 May 2023 at 07:49, Robert Haas <robertmhaas@gmail.com \n> <mailto:robertmhaas@gmail.com>> wrote:\n> \n> On Sun, May 14, 2023 at 9:37 PM Tom Lane <tgl@sss.pgh.pa.us\n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> > Steve Chavez <steve@supabase.io <mailto:steve@supabase.io>> writes:\n> > > I found \"...\" confusing in some comments, so this patch changes\n> it to\n> > > \"cstring\". Which seems to be the intention after all.\n> >\n> > Those comments are Berkeley-era, making them probably a decade older\n> > than the \"cstring\" pseudotype (invented in b663f3443).  Perhaps what\n> > you suggest is an improvement, but I'm not sure that appealing to\n> > original intent can make the case.\n> \n> FWIW, it does seem like an improvement to me.\n> \n> Tom, could we apply this patch since Robert agrees it's an improvement?\n\nLooking around at other input/output functions, we're not very \nconsistent, there are many variants of \"converts string to [datatype]\", \n\"converts C string to [datatype]\", and \"input routine for [datatype]\". \nThey are all fine, even though they're inconsistent. Doesn't seem worth \nthe code churn to change them.\n\nAnyway, I agree this patch is an improvement, so applied. Thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 26 Jun 2023 11:59:06 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: 'converts internal representation to \"...\"' comment is confusing" } ]
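For readers new to type I/O functions, the wording the thread settles on means: the input function converts a cstring to the type's internal representation, and the output function converts the internal representation back to a cstring. The standalone toy pair below is in the spirit of the documentation's classic complex-type example; real I/O functions exchange Datum values through the fmgr interface rather than plain C pointers, so treat this purely as an illustration of the terminology.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

typedef struct Complex
{
    double x;
    double y;
} Complex;

/* complex_in: converts a cstring such as "(1.5,2.5)" to the internal
 * representation; returns 0 on parse failure, nonzero on success. */
static int complex_in(const char *str, Complex *result)
{
    return sscanf(str, " ( %lf , %lf )", &result->x, &result->y) == 2;
}

/* complex_out: converts the internal representation to a cstring. */
static void complex_out(const Complex *c, char *buf, size_t buflen)
{
    snprintf(buf, buflen, "(%g,%g)", c->x, c->y);
}
```

The round trip (cstring in, internal struct, cstring out) is exactly the contract the reworded comments describe.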
[ { "msg_contents": "Hello hackers,\n\nI use postgres/src/tools/make_ctags and it works great. But it leaves the\ntags files ready to be committed in git. So, I've added 'tags' to\n.gitignore.\n\nBest regards,\nSteve", "msg_date": "Sun, 14 May 2023 18:13:21 -0300", "msg_from": "Steve Chavez <steve@supabase.io>", "msg_from_op": true, "msg_subject": "Using make_ctags leaves tags files in git" }, { "msg_contents": "On Sun, May 14, 2023 at 06:13:21PM -0300, Steve Chavez wrote:\n> I use postgres/src/tools/make_ctags and it works great. But it leaves the\n> tags files ready to be committed in git. So, I've added 'tags' to\n> .gitignore.\n\nThis has been proposed in the past, where no conclusion was reached\nabout whether this would be the best move (backup files from vim or\nemacs are not ignored, either):\nhttps://www.postgresql.org/message-id/CAFcNs+rG-DASXzHcecYKvAj+rmxi8CpMAgbpGpEK-mjC96F=Lg@mail.gmail.com\n\nAnd there are global rules, as well.\n--\nMichael", "msg_date": "Mon, 15 May 2023 08:44:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Using make_ctags leaves tags files in git" }, { "msg_contents": "Hello Michael,\n\nOn the previous patch:\n\n- `.*.swp` was added, which I entirely agree shouldn't be in .gitignore as\nit's editor specific(despite me using vim).\n- The discussion dabbled too much around the *.swp addition.\n\nIn this case I just propose adding 'tags'. I believe it's reasonable to\nignore these as they're produced by make_ctags.\n\nBest regards,\nSteve\n\nOn Sun, 14 May 2023 at 20:44, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, May 14, 2023 at 06:13:21PM -0300, Steve Chavez wrote:\n> > I use postgres/src/tools/make_ctags and it works great. 
But it leaves the\n> > tags files ready to be committed in git. So, I've added 'tags' to\n> > .gitignore.\n>\n> This has been proposed in the past, where no conclusion was reached\n> about whether this would be the best move (backup files from vim or\n> emacs are not ignored, either):\n>\n> https://www.postgresql.org/message-id/CAFcNs+rG-DASXzHcecYKvAj+rmxi8CpMAgbpGpEK-mjC96F=Lg@mail.gmail.com\n>\n> And there are global rules, as well.\n> --\n> Michael\n>\n", "msg_date": "Sun, 14 May 2023 21:00:40 -0300", "msg_from": "Steve Chavez <steve@supabase.io>", "msg_from_op": true, "msg_subject": "Re: Using make_ctags leaves tags files in git" }, { "msg_contents": "Steve Chavez <steve@supabase.io> writes:\n> In this case I just propose adding 'tags'. I believe it's reasonable to\n> ignore these as they're produced by make_ctags.\n\nOur policy on this is that the project's .gitignore files should ignore\nfiles that are produced by our standard build scripts. 
Anything else\nyou should put in your personal ignore patterns (one way is to set\nthe core.excludesFile property in ~/.gitconfig). Otherwise it's very\nvery hard to argue which tools are privileged to get a project-wide\nignore entry. Personally, for example, I use emacs but not ctags,\nso I'd put \"*~\" way ahead of \"tags\". But it's my responsibility to\nignore \"*~\", and I do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 14 May 2023 21:25:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using make_ctags leaves tags files in git" }, { "msg_contents": "On 2023-May-14, Tom Lane wrote:\n\n> Steve Chavez <steve@supabase.io> writes:\n> > In this case I just propose adding 'tags'. I believe it's reasonable to\n> > ignore these as they're produced by make_ctags.\n> \n> Our policy on this is that the project's .gitignore files should ignore\n> files that are produced by our standard build scripts.\n\nBut make_ctags is *our* script, so I think this rule applies to them as\nwell. (In any case, what can be hurt? We're not going to add any files\nto git named \"tags\" anyway.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 15 May 2023 12:33:17 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Using make_ctags leaves tags files in git" }, { "msg_contents": "On Mon, May 15, 2023 at 12:33:17PM +0200, Alvaro Herrera wrote:\n> But make_ctags is *our* script, so I think this rule applies to them as\n> well. (In any case, what can be hurt? We're not going to add any files\n> to git named \"tags\" anyway.)\n\nYes, you have a point about the origin of the script generating the\ntags. One thing is that one can still add a file even if listed in\nwhat to ignore, as long as it is done with git-add -f. 
Okay, that's\nnot going to happen.\n\n(FWIW, looking at my stuff, I have just set up that globally in 2018\nafter seeing the other thread.)\n--\nMichael", "msg_date": "Tue, 16 May 2023 08:58:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Using make_ctags leaves tags files in git" }, { "msg_contents": "> On 15 May 2023, at 12:33, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2023-May-14, Tom Lane wrote:\n> \n>> Steve Chavez <steve@supabase.io> writes:\n>>> In this case I just propose adding 'tags'. I believe it's reasonable to\n>>> ignore these as they're produced by make_ctags.\n>> \n>> Our policy on this is that the project's .gitignore files should ignore\n>> files that are produced by our standard build scripts.\n> \n> But make_ctags is *our* script, so I think this rule applies to them as\n> well.\n\nIt is our script, but \"tags\" is not an artifact created by the build scripts\nsince make_{c|e}tags isn't part of the build so I'm not convinced we should\nbother. If we do I'm sure there are many other scripts in src/tools which\ngenerate files we don't track (like make_mkid, which admittedly probably hasn't\nbeen executed by anyone in a decade or so).\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 16 May 2023 10:19:26 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Using make_ctags leaves tags files in git" } ]
[ { "msg_contents": "Personally I would appreciate it if \\sv actually showed you the DDL.\nOftentimes I will \\ev something to review it, with syntax highlighting.\n\nObviously this won't go in until V17, but looking at other tab-completion\nfixes.\n\nThis should not be that difficult. Just looking for feedback.\nAdmittedly \\e is questionable, because you cannot really apply the changes.\nALTHOUGH, I would consider that I could\nBEGIN;\nDROP MATERIALIZED VIEW ...;\nCREATE MATERIALIZED VIEW ...;\n\nWhich I had to do to change the WITH DATA so it creates with data when we\nreload our object.s\n\nKirk...", "msg_date": "Mon, 15 May 2023 00:32:54 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "PSQL Should \\sv & \\ev work with materialized views?" }, { "msg_contents": "On 2023-05-15 06:32 +0200, Kirk Wolak wrote:\n> Personally I would appreciate it if \\sv actually showed you the DDL.\n> Oftentimes I will \\ev something to review it, with syntax highlighting.\n\n+1. I was just reviewing some matviews and was surprised that psql\nlacks commands to show their definitions.\n\nBut I think that it should be separate commands \\sm and \\em because we\nalready have commands \\dm and \\dv that distinguish between matviews and\nviews.\n\n> This should not be that difficult. 
Just looking for feedback.\n> Admittedly \\e is questionable, because you cannot really apply the changes.\n> ALTHOUGH, I would consider that I could\n> BEGIN;\n> DROP MATERIALIZED VIEW ...;\n> CREATE MATERIALIZED VIEW ...;\n> \n> Which I had to do to change the WITH DATA so it creates with data when we\n> reload our object.s\n\nI think this could even be handled by optional modifiers, e.g. \\em emits\nCREATE MATERIALIZED VIEW ... WITH NO DATA and \\emD emits WITH DATA.\nAlthough I wouldn't mind manually changing WITH NO DATA to WITH DATA.\n\n-- \nErik\n\n\n", "msg_date": "Fri, 29 Mar 2024 00:02:11 +0100", "msg_from": "Erik Wienhold <ewie@ewie.name>", "msg_from_op": false, "msg_subject": "Re: PSQL Should \\sv & \\ev work with materialized views?" }, { "msg_contents": "I wrote:\n> On 2023-05-15 06:32 +0200, Kirk Wolak wrote:\n> > Personally I would appreciate it if \\sv actually showed you the DDL.\n> > Oftentimes I will \\ev something to review it, with syntax highlighting.\n> \n> +1. I was just reviewing some matviews and was surprised that psql\n> lacks commands to show their definitions.\n> \n> But I think that it should be separate commands \\sm and \\em because we\n> already have commands \\dm and \\dv that distinguish between matviews and\n> views.\n\nSeparate commands are not necessary because \\ev and \\sv already have a\n(disabled) provision in get_create_object_cmd for when CREATE OR REPLACE\nMATERIALIZED VIEW is available. So I guess both commands should also\napply to matview. The attached patch replaces that provision with a\ntransaction that drops and creates the matview. 
This uses meta command\n\\; to put multiple statements into the query buffer without prematurely\nsending those statements to the server.\n\nDemo:\n\n\t=> DROP MATERIALIZED VIEW IF EXISTS test;\n\tDROP MATERIALIZED VIEW\n\t=> CREATE MATERIALIZED VIEW test AS SELECT s FROM generate_series(1, 10) s;\n\tSELECT 10\n\t=> \\sv test\n\tBEGIN \\;\n\tDROP MATERIALIZED VIEW public.test \\;\n\tCREATE MATERIALIZED VIEW public.test AS\n\t SELECT s\n\t FROM generate_series(1, 10) s(s)\n\t WITH DATA \\;\n\tCOMMIT\n\t=>\n\nAnd \\ev test works as well.\n\nOf course the problem with using DROP and CREATE is that indexes and\nprivileges (anything else?) must also be restored. I haven't bothered\nwith that yet.\n\n-- \nErik", "msg_date": "Fri, 29 Mar 2024 01:38:17 +0100", "msg_from": "Erik Wienhold <ewie@ewie.name>", "msg_from_op": false, "msg_subject": "Re: PSQL Should \\sv & \\ev work with materialized views?" }, { "msg_contents": "On Thu, 28 Mar 2024 at 20:38, Erik Wienhold <ewie@ewie.name> wrote:\n\n\n> Of course the problem with using DROP and CREATE is that indexes and\n> privileges (anything else?) must also be restored. I haven't bothered\n> with that yet.\n>\n\nNot just those — also anything that depends on the matview, such as views\nand other matviews.\n", "msg_date": "Thu, 28 Mar 2024 23:27:48 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PSQL Should \\sv & \\ev work with materialized views?" 
}, { "msg_contents": "On 2024-03-29 04:27 +0100, Isaac Morland wrote:\n> On Thu, 28 Mar 2024 at 20:38, Erik Wienhold <ewie@ewie.name> wrote:\n> \n> \n> > Of course the problem with using DROP and CREATE is that indexes and\n> > privileges (anything else?) must also be restored. I haven't bothered\n> > with that yet.\n> >\n> \n> Not just those — also anything that depends on the matview, such as views\n> and other matviews.\n\nRight. But you'd run into the same issue for a regular view if you use\n\\ev and add DROP VIEW myview CASCADE which may be necessary if you\nwant to change columns names and/or types. Likewise, you'd have to\nmanually change DROP MATERIALIZED VIEW and add the CASCADE option to\nlose dependent objects.\n\nI think implementing CREATE OR REPLACE MATERIALIZED VIEW has more\nvalue. But the semantics have to be defined first. I guess it has to\nbehave like CREATE OR REPLACE VIEW in that it only allows changing the\nquery without altering column names and types.\n\nWe could also implement \\sv so that it only prints CREATE MATERIALIZED\nVIEW and change \\ev to not work with matviews. Both commands use\nget_create_object_cmd to populate the query buffer, so you get \\ev for\nfree when changing \\sv.\n\n-- \nErik\n\n\n", "msg_date": "Fri, 29 Mar 2024 05:27:17 +0100", "msg_from": "Erik Wienhold <ewie@ewie.name>", "msg_from_op": false, "msg_subject": "Re: PSQL Should \\sv & \\ev work with materialized views?" } ]
[ { "msg_contents": "This would be a trivial change. Willing to do it, and push it.\n\nIn effect, we have this GREAT feature:\n\\set ECHO_HIDDON on\n\nWhich outputs a bunch of queries (as you all know).\nBut somehow nobody thought that a user might want to paste ALL of the\nqueries into their query editor, or even into another psql session, via (\\e)\nand NOT get a ton of syntax errors?\n\nAs an example: (added -- and a space)\n\n-- ********* QUERY **********\nSELECT c2.relname, i.indisprimary, i.indisunique, i.indisclustered,\ni.indisvalid, pg_catalog.pg_get_indexdef(i.indexrelid, 0, true),\n pg_catalog.pg_get_constraintdef(con.oid, true), contype, condeferrable,\ncondeferred, i.indisreplident, c2.reltablespace\nFROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i\n LEFT JOIN pg_catalog.pg_constraint con ON (conrelid = i.indrelid AND\nconindid = i.indexrelid AND contype IN ('p','u','x'))\nWHERE c.oid = '21949943' AND c.oid = i.indrelid AND i.indexrelid = c2.oid\nORDER BY i.indisprimary DESC, c2.relname;\n-- **************************\n\n-- ********* QUERY **********\nSELECT pol.polname, pol.polpermissive,\n CASE WHEN pol.polroles = '{0}' THEN NULL ELSE\npg_catalog.array_to_string(array(select rolname from pg_catalog.pg_roles\nwhere oid = any (pol.polroles) order by 1),',') END,\n pg_catalog.pg_get_expr(pol.polqual, pol.polrelid),\n pg_catalog.pg_get_expr(pol.polwithcheck, pol.polrelid),\n CASE pol.polcmd\n WHEN 'r' THEN 'SELECT'\n WHEN 'a' THEN 'INSERT'\n WHEN 'w' THEN 'UPDATE'\n WHEN 'd' THEN 'DELETE'\n END AS cmd\nFROM pg_catalog.pg_policy pol\nWHERE pol.polrelid = '21949943' ORDER BY 1;\n-- **************************\n\nKirk...\n\nThis would be a trivial change.  
Willing to do it, and push it.In effect, we have this GREAT feature:\\set ECHO_HIDDON onWhich outputs a bunch of queries (as you all know).But somehow nobody thought that a user might want to paste ALL of the queries into their query editor, or even into another psql session, via (\\e)and NOT get a ton of syntax errors?As an example: (added -- and a space)-- ********* QUERY **********SELECT c2.relname, i.indisprimary, i.indisunique, i.indisclustered, i.indisvalid, pg_catalog.pg_get_indexdef(i.indexrelid, 0, true),  pg_catalog.pg_get_constraintdef(con.oid, true), contype, condeferrable, condeferred, i.indisreplident, c2.reltablespaceFROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i  LEFT JOIN pg_catalog.pg_constraint con ON (conrelid = i.indrelid AND conindid = i.indexrelid AND contype IN ('p','u','x'))WHERE c.oid = '21949943' AND c.oid = i.indrelid AND i.indexrelid = c2.oidORDER BY i.indisprimary DESC, c2.relname;-- **************************-- ********* QUERY **********SELECT pol.polname, pol.polpermissive,  CASE WHEN pol.polroles = '{0}' THEN NULL ELSE pg_catalog.array_to_string(array(select rolname from pg_catalog.pg_roles where oid = any (pol.polroles) order by 1),',') END,  pg_catalog.pg_get_expr(pol.polqual, pol.polrelid),  pg_catalog.pg_get_expr(pol.polwithcheck, pol.polrelid),  CASE pol.polcmd    WHEN 'r' THEN 'SELECT'    WHEN 'a' THEN 'INSERT'    WHEN 'w' THEN 'UPDATE'    WHEN 'd' THEN 'DELETE'    END AS cmdFROM pg_catalog.pg_policy polWHERE pol.polrelid = '21949943' ORDER BY 1;-- **************************Kirk...", "msg_date": "Mon, 15 May 2023 02:00:56 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "psql: Could we get \"-- \" prefixing on the **** QUERY **** outputs?\n (ECHO_HIDDEN)" }, { "msg_contents": "Hi\n\nDne po 15. 5. 2023 8:01 uživatel Kirk Wolak <wolakk@gmail.com> napsal:\n\n> This would be a trivial change. 
Willing to do it, and push it.\n>\n> In effect, we have this GREAT feature:\n> \\set ECHO_HIDDON on\n>\n> Which outputs a bunch of queries (as you all know).\n> But somehow nobody thought that a user might want to paste ALL of the\n> queries into their query editor, or even into another psql session, via (\\e)\n> and NOT get a ton of syntax errors?\n>\n> As an example: (added -- and a space)\n>\n> -- ********* QUERY **********\n> SELECT c2.relname, i.indisprimary, i.indisunique, i.indisclustered,\n> i.indisvalid, pg_catalog.pg_get_indexdef(i.indexrelid, 0, true),\n> pg_catalog.pg_get_constraintdef(con.oid, true), contype, condeferrable,\n> condeferred, i.indisreplident, c2.reltablespace\n> FROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i\n> LEFT JOIN pg_catalog.pg_constraint con ON (conrelid = i.indrelid AND\n> conindid = i.indexrelid AND contype IN ('p','u','x'))\n> WHERE c.oid = '21949943' AND c.oid = i.indrelid AND i.indexrelid = c2.oid\n> ORDER BY i.indisprimary DESC, c2.relname;\n> -- **************************\n>\n> -- ********* QUERY **********\n> SELECT pol.polname, pol.polpermissive,\n> CASE WHEN pol.polroles = '{0}' THEN NULL ELSE\n> pg_catalog.array_to_string(array(select rolname from pg_catalog.pg_roles\n> where oid = any (pol.polroles) order by 1),',') END,\n> pg_catalog.pg_get_expr(pol.polqual, pol.polrelid),\n> pg_catalog.pg_get_expr(pol.polwithcheck, pol.polrelid),\n> CASE pol.polcmd\n> WHEN 'r' THEN 'SELECT'\n> WHEN 'a' THEN 'INSERT'\n> WHEN 'w' THEN 'UPDATE'\n> WHEN 'd' THEN 'DELETE'\n> END AS cmd\n> FROM pg_catalog.pg_policy pol\n> WHERE pol.polrelid = '21949943' ORDER BY 1;\n> -- **************************\n>\n> Kirk...\n>\n\nThis looks little bit strange\n\nWhat about /* comments\n\nLike\n\n/******* Query ********/\n\nOr just\n\n-------- Query --------\n\nRegards\n\nPavel\n\n>\n\nHiDne po 15. 5. 2023 8:01 uživatel Kirk Wolak <wolakk@gmail.com> napsal:This would be a trivial change.  
Willing to do it, and push it.In effect, we have this GREAT feature:\\set ECHO_HIDDON onWhich outputs a bunch of queries (as you all know).But somehow nobody thought that a user might want to paste ALL of the queries into their query editor, or even into another psql session, via (\\e)and NOT get a ton of syntax errors?As an example: (added -- and a space)-- ********* QUERY **********SELECT c2.relname, i.indisprimary, i.indisunique, i.indisclustered, i.indisvalid, pg_catalog.pg_get_indexdef(i.indexrelid, 0, true),  pg_catalog.pg_get_constraintdef(con.oid, true), contype, condeferrable, condeferred, i.indisreplident, c2.reltablespaceFROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i  LEFT JOIN pg_catalog.pg_constraint con ON (conrelid = i.indrelid AND conindid = i.indexrelid AND contype IN ('p','u','x'))WHERE c.oid = '21949943' AND c.oid = i.indrelid AND i.indexrelid = c2.oidORDER BY i.indisprimary DESC, c2.relname;-- **************************-- ********* QUERY **********SELECT pol.polname, pol.polpermissive,  CASE WHEN pol.polroles = '{0}' THEN NULL ELSE pg_catalog.array_to_string(array(select rolname from pg_catalog.pg_roles where oid = any (pol.polroles) order by 1),',') END,  pg_catalog.pg_get_expr(pol.polqual, pol.polrelid),  pg_catalog.pg_get_expr(pol.polwithcheck, pol.polrelid),  CASE pol.polcmd    WHEN 'r' THEN 'SELECT'    WHEN 'a' THEN 'INSERT'    WHEN 'w' THEN 'UPDATE'    WHEN 'd' THEN 'DELETE'    END AS cmdFROM pg_catalog.pg_policy polWHERE pol.polrelid = '21949943' ORDER BY 1;-- **************************Kirk...This looks little bit strangeWhat about /* commentsLike/******* Query ********/Or just -------- Query --------RegardsPavel", "msg_date": "Mon, 15 May 2023 08:37:24 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? 
(ECHO_HIDDEN)" }, { "msg_contents": "On Mon, May 15, 2023 at 2:37 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> Dne po 15. 5. 2023 8:01 uživatel Kirk Wolak <wolakk@gmail.com> napsal:\n>\n>> This would be a trivial change. Willing to do it, and push it.\n>>\n>> In effect, we have this GREAT feature:\n>> \\set ECHO_HIDDON on\n>> -- **************************\n>>\n>> Kirk...\n>>\n>\n> This looks little bit strange\n>\n> What about /* comments\n>\n> Like\n>\n> /******* Query ********/\n>\n> Or just\n>\n> -------- Query --------\n>\n> Regards\n>\n> Pavel\n>\n\nActually, I am open to suggestions.\n/* */\nAre good comments, usually safer!\n\nOn Mon, May 15, 2023 at 2:37 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:HiDne po 15. 5. 2023 8:01 uživatel Kirk Wolak <wolakk@gmail.com> napsal:This would be a trivial change.  Willing to do it, and push it.In effect, we have this GREAT feature:\\set ECHO_HIDDON on-- **************************Kirk...This looks little bit strangeWhat about /* commentsLike/******* Query ********/Or just -------- Query --------RegardsPavelActually, I am open to suggestions./* */Are good comments, usually safer!", "msg_date": "Mon, 15 May 2023 03:25:39 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "On Mon, 2023-05-15 at 08:37 +0200, Pavel Stehule wrote:\n> Dne po 15. 5. 2023 8:01 uživatel Kirk Wolak <wolakk@gmail.com> napsal:\n> > This would be a trivial change.  
Willing to do it, and push it.\n> > \n> > In effect, we have this GREAT feature:\n> > \\set ECHO_HIDDON on\n> > \n> > Which outputs a bunch of queries (as you all know).\n> > But somehow nobody thought that a user might want to paste ALL of the queries into their query editor, or even into another psql session, via (\\e)\n> > and NOT get a ton of syntax errors?\n> > \n> > As an example: (added -- and a space)\n> > \n> > -- ********* QUERY **********\n> > SELECT c2.relname, i.indisprimary, i.indisunique, i.indisclustered, i.indisvalid, pg_catalog.pg_get_indexdef(i.indexrelid, 0, true),\n> >   pg_catalog.pg_get_constraintdef(con.oid, true), contype, condeferrable, condeferred, i.indisreplident, c2.reltablespace\n> > FROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i\n> >   LEFT JOIN pg_catalog.pg_constraint con ON (conrelid = i.indrelid AND conindid = i.indexrelid AND contype IN ('p','u','x'))\n> > WHERE c.oid = '21949943' AND c.oid = i.indrelid AND i.indexrelid = c2.oid\n> > ORDER BY i.indisprimary DESC, c2.relname;\n> > -- **************************\n> \n> This looks little bit strange\n> \n> What about /* comments\n> \n> Like\n> \n> /******* Query ********/\n> \n> Or just \n> \n> -------- Query --------\n\n+1 for either of Pavel's suggestions.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 15 May 2023 10:00:29 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Mon, 2023-05-15 at 08:37 +0200, Pavel Stehule wrote:\n>> This looks little bit strange\n>> \n>> What about /* comments\n>> Like\n>> /******* Query ********/\n>> Or just\n>> -------- Query --------\n\n> +1 for either of Pavel's suggestions.\n\n+1. 
Probably the slash-star way would be less visually surprising\nto people who are used to the current output.\n\nChecking the psql source code for \"****\", I see that the single-step\nfeature could use the same treatment.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 May 2023 10:02:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "On 2023-May-15, Tom Lane wrote:\n\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Mon, 2023-05-15 at 08:37 +0200, Pavel Stehule wrote:\n> >> This looks little bit strange\n> >> \n> >> What about /* comments\n> >> Like\n> >> /******* Query ********/\n> >> Or just\n> >> -------- Query --------\n> \n> > +1 for either of Pavel's suggestions.\n> \n> +1. Probably the slash-star way would be less visually surprising\n> to people who are used to the current output.\n\nIt's worth considering what will readline history do with the comment.\nAs I recall, we keep /* comments */ together with the query that\nfollows, but the -- comments are keep in a separate history entry.\nSo that's one more reason to prefer /* */\n\n(To me, that also suggests to remove the asterisk line after each query,\nand to keep just the one before.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Sallah, I said NO camels! That's FIVE camels; can't you count?\"\n(Indiana Jones)\n\n\n", "msg_date": "Mon, 15 May 2023 16:14:57 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? 
(ECHO_HIDDEN)" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> It's worth considering what will readline history do with the comment.\n> As I recall, we keep /* comments */ together with the query that\n> follows, but the -- comments are keep in a separate history entry.\n> So that's one more reason to prefer /* */\n\nGood point.\n\n> (To me, that also suggests to remove the asterisk line after each query,\n> and to keep just the one before.)\n\nMeh ... the one after serves to separate a query from its output.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 May 2023 10:28:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "On Mon, May 15, 2023 at 10:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > It's worth considering what will readline history do with the comment.\n> > As I recall, we keep /* comments */ together with the query that\n> > follows, but the -- comments are keep in a separate history entry.\n> > So that's one more reason to prefer /* */\n>\n> Good point.\n>\n> > (To me, that also suggests to remove the asterisk line after each query,\n> > and to keep just the one before.)\n>\n> Meh ... the one after serves to separate a query from its output.\n>\n> regards, tom lane\n>\n\nActually, I love the feedback!\n\nI just tested whether or not you see the trailing comment line. And I ONLY\nsee it in the windows version of PSQL.\nAnd ONLY if you paste it directly in at the command line.\n[Because it sends the text line by line, I assume]\n\nFurther Testing:\n\ncalling with: psql -f -- no output of the comments (or the query is seen)\n-- Windows/Linux\n\nwith \\e editing... In Linux nothing is displayed from the query!\n\nwith \\e editing in Windows... I found it buggy when I tossed in (\\pset\npager 0) as the first line. 
It blew everything up (LOL)\n\\pset: extra argument \"attcollation,\" ignored\n\\pset: extra argument \"a.attidentity,\" ignored\n\\pset: extra argument \"a.attgenerated\" ignored\n\\pset: extra argument \"FROM\" ignored\n\\pset: extra argument \"pg_catalog.pg_attribute\" ignored\n\n\nWith that said, I DEFINITELY Move to Remove the secondary comment. It's\njust noise.\nand /* */ comments it will be for the topside.\n\nAlso, I will take a quick peek at the parse failure that is in windows \\e\n[Which always does this weird doubling of lines]. But no promises here.\nIt will be good enough to identify the problem.\n\nKirk...\n\nOn Mon, May 15, 2023 at 10:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> It's worth considering what will readline history do with the comment.\n> As I recall, we keep /* comments */ together with the query that\n> follows, but the -- comments are keep in a separate history entry.\n> So that's one more reason to prefer /*  */\n\nGood point.\n\n> (To me, that also suggests to remove the asterisk line after each query,\n> and to keep just the one before.)\n\nMeh ... the one after serves to separate a query from its output.\n\n                        regards, tom laneActually, I love the feedback!I just tested whether or not you see the trailing comment line.  And I ONLY see it in the windows version of PSQL.And ONLY if you paste it directly in at the command line.[Because it sends the text line by line, I assume]Further Testing:calling with: psql -f  -- no output of the comments (or the query is seen)  -- Windows/Linuxwith \\e editing... In Linux nothing is displayed from the query!with \\e editing in Windows... I found it buggy when I tossed in (\\pset pager 0) as the first line.  
It blew everything up (LOL)\\pset: extra argument \"attcollation,\" ignored\\pset: extra argument \"a.attidentity,\" ignored\\pset: extra argument \"a.attgenerated\" ignored\\pset: extra argument \"FROM\" ignored\\pset: extra argument \"pg_catalog.pg_attribute\" ignoredWith that said, I DEFINITELY Move to Remove the secondary comment.  It's just noise.and /* */ comments it will be for the topside.Also, I will take a quick peek at the parse failure that is in windows \\e[Which always does this weird doubling of lines].  But no promises here.  It will be good enough to identify the problem.Kirk...", "msg_date": "Mon, 15 May 2023 21:05:19 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "On Mon, May 15, 2023 at 9:05 PM Kirk Wolak <wolakk@gmail.com> wrote:\n\n> On Mon, May 15, 2023 at 10:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> > It's worth considering what will readline history do with the comment.\n>>\n>\nHmmm... We could put a SPACE before the comment, that usually stops\nreadline from saving it?\n\n>\n>> Meh ... the one after serves to separate a query from its output.\n>>\n>> regards, tom lane\n>>\n>\n>\n> I just tested whether or not you see the trailing comment line. And I\n> ONLY see it in the windows version of PSQL.\n> And ONLY if you paste it directly in at the command line.\n> [Because it sends the text line by line, I assume]\n>\n> ,,,With that said, I DEFINITELY Move to Remove the secondary comment.\n> It's just noise.\n> and /* */ comments it will be for the topside.\n>\n>\nHere's the patch. I removed touching on .po files.\nI made the change apply to the logging (fair warning) for consistency.\n\nAll feedback is welcome. 
These small patches help me work through the\nprocess.\n\nKirk...\nOUTPUT:\n/********* QUERY **********/\nSELECT c.oid::pg_catalog.regclass\nFROM pg_catalog.pg_class c, pg_catalog.pg_inherits i\nWHERE c.oid = i.inhparent AND i.inhrelid = '24577'\n AND c.relkind != 'p' AND c.relkind != 'I'\nORDER BY inhseqno;\n\n/********* QUERY **********/\nSELECT c.oid::pg_catalog.regclass, c.relkind, inhdetachpending,\npg_catalog.pg_get_expr(c.relpartbound, c.oid)\nFROM pg_catalog.pg_class c, pg_catalog.pg_inherits i\nWHERE c.oid = i.inhrelid AND i.inhparent = '24577'\nORDER BY pg_catalog.pg_get_expr(c.relpartbound, c.oid) = 'DEFAULT',\nc.oid::pg_catalog.regclass::pg_catalog.text;\n\n Table \"public.t1\"\n Column | Type | Collation | Nullable | Default\n--------+--------+-----------+----------+------------------------------\n id | bigint | | not null | generated always as identity\nIndexes:\n \"t1_pkey\" PRIMARY KEY, btree (id)", "msg_date": "Wed, 17 May 2023 13:39:12 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "On Wed, 2023-05-17 at 13:39 -0400, Kirk Wolak wrote:\n> Here's the patch.\n\nYou removed the ******** QUERY ******** at the end of the query.\nI think we should keep that (as comments, of course). People\nare used to the current output, and it is nice to have a clear\nvisual marker at the end of what isn't normally part of \"psql\"\noutput.\n\n\"okbob\" should be \"Pavel Stehule\".\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 17 May 2023 19:45:06 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? 
(ECHO_HIDDEN)" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> You removed the ******** QUERY ******** at the end of the query.\n> I think we should keep that (as comments, of course). People\n> are used to the current output, and it is nice to have a clear\n> visual marker at the end of what isn't normally part of \"psql\"\n> output.\n\nI agree. These considerations of what shows up in the readline\nlog if you choose to copy-and-paste seem far secondary to the\nreadability of the terminal output in the first place.\n\nAlso, you'd have to avoid copying-and-pasting the query output\nanyway, so I'm not entirely sold that there's much of\na usability gain here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 May 2023 14:13:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "On Wed, May 17, 2023 at 2:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > You removed the ******** QUERY ******** at the end of the query.\n>\n> Fixed\nAlso Fixed Pavel's name.\nAlso Added Laurenze as a Reviewed By: (not sure, never want to NOT ack\nsomeone)\n\n>\n> Also, you'd have to avoid copying-and-pasting the query output\n> anyway, so I'm not entirely sold that there's much of\n> a usability gain here.\n>\n\nMy output never contains query output results intermixed. I get a handful\nof queries.\nThen I get the output of the \"\\d t1\" (Which makes me wonder if I am doing\nsomething wrong,\nor there is another use case I should be testing).\n\nI labelled this v2. I also edited the Thread: (I realized I can find the\nthread, go to the Whole Thread,\nand then include the link to the first item in the thread. 
I assume that\nis what's expected).\n\nKirk...\n\npsql>\ncreate table t1(id bigint not null primary key generated always as\nidentity);\n\\set ECHO_HIDDEN on\n\\d t1\n\nGenerates:\n/********* QUERY **********/\n... Clipped ...\nFROM pg_catalog.pg_publication p\nWHERE p.puballtables AND pg_catalog.pg_relation_is_publishable('24577')\nORDER BY 1;\n/**************************/\n\n/********* QUERY **********/\nSELECT c.oid::pg_catalog.regclass\nFROM pg_catalog.pg_class c, pg_catalog.pg_inherits i\nWHERE c.oid = i.inhparent AND i.inhrelid = '24577'\n AND c.relkind != 'p' AND c.relkind != 'I'\nORDER BY inhseqno;\n/**************************/\n\n/********* QUERY **********/\nSELECT c.oid::pg_catalog.regclass, c.relkind, inhdetachpending,\npg_catalog.pg_get_expr(c.relpartbound, c.oid)\nFROM pg_catalog.pg_class c, pg_catalog.pg_inherits i\nWHERE c.oid = i.inhrelid AND i.inhparent = '24577'\nORDER BY pg_catalog.pg_get_expr(c.relpartbound, c.oid) = 'DEFAULT',\nc.oid::pg_catalog.regclass::pg_catalog.text;\n/**************************/\n\n Table \"public.t1\"\n ... End Clip...\n-- NOTICE: there is no output between queries using ECHO_HIDDEN", "msg_date": "Wed, 17 May 2023 17:23:00 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "\n\n> On 18 May 2023, at 02:23, Kirk Wolak <wolakk@gmail.com> wrote:\n> \n> I labelled this v2. \n\n+1 to the feature and the patch looks good to me.\n\nI have a question, but mostly for my own knowledge. Translation changes seem trivial for all languages, do we typically fix .po files in such cases? Or do we leave it to translators to revise the stuff?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n\n", "msg_date": "Fri, 19 May 2023 10:15:56 +0500", "msg_from": "\"Andrey M. 
Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "On 2023-May-19, Andrey M. Borodin wrote:\n\n> I have a question, but mostly for my own knowledge. Translation\n> changes seem trivial for all languages, do we typically fix .po files\n> in such cases? Or do we leave it to translators to revise the stuff?\n\nThe translations use a completely separate source repository, so even if\nsomebody were to patch them in postgresql.git, their changes would be\noverwritten next time they are copied from the translation repo anyway.\nJust leave it to the translators.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"In Europe they call me Niklaus Wirth; in the US they call me Nickel's worth.\n That's because in Europe they call me by name, and in the US by value!\"\n\n\n", "msg_date": "Fri, 19 May 2023 19:16:16 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "I took a look at this patch and changed a couple things:\n\n * I made a similar adjustment to a few lines that seem to have been\n missed.\n * I removed a couple of asterisks from the adjusted lines in order to\n maintain the existing line lengths.\n\nBarring additional feedback, I think this is ready for commit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 25 Jul 2023 21:22:22 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "st 26. 7. 
2023 v 6:22 odesílatel Nathan Bossart <nathandbossart@gmail.com>\nnapsal:\n\n> I took a look at this patch and changed a couple things:\n>\n> * I made a similar adjustment to a few lines that seem to have been\n> missed.\n> * I removed a couple of asterisks from the adjusted lines in order to\n> maintain the existing line lengths.\n>\n> Barring additional feedback, I think this is ready for commit.\n>\n>\n+1\n\nPavel\n\n-- \n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>\n", "msg_date": "Wed, 26 Jul 2023 08:06:37 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "On Wed, Jul 26, 2023 at 08:06:37AM +0200, Pavel Stehule wrote:\n> st 26. 7. 2023 v 6:22 odesílatel Nathan Bossart <nathandbossart@gmail.com>\n> napsal:\n>> Barring additional feedback, I think this is ready for commit.\n>>\n>>\n> +1\n\nGreat. I spent some time on the commit message in v4. I plan to commit\nthis shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 26 Jul 2023 14:39:25 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "On Wed, Jul 26, 2023 at 02:39:25PM -0700, Nathan Bossart wrote:\n> Great. 
I plan to commit\n> this shortly.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 26 Jul 2023 17:05:05 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "On Wed, Jul 26, 2023 at 5:39 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Wed, Jul 26, 2023 at 08:06:37AM +0200, Pavel Stehule wrote:\n> > st 26. 7. 2023 v 6:22 odesílatel Nathan Bossart <\n> nathandbossart@gmail.com>\n> > napsal:\n> >> Barring additional feedback, I think this is ready for commit.\n> >>\n> >>\n> > +1\n>\n> Great. I spent some time on the commit message in v4. I plan to commit\n> this shortly.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\nCurious about this. I expected to see the comments? (is there a chance\nthat the translation piece is kicking in reverting them)?\n(expecting / ********* QUERY **********/)\n\n01:05:47 devuser@nctest= > \\echo :VERSION_NAME :VERSION_NUM\n16.0 (Ubuntu 16.0-1.pgdg22.04+1) 160000\n01:05:57 devuser@nctest= > \\dn public\n********* QUERY **********\nSELECT n.nspname AS \"Name\",\n pg_catalog.pg_get_userbyid(n.nspowner) AS \"Owner\"\nFROM pg_catalog.pg_namespace n\nWHERE n.nspname OPERATOR(pg_catalog.~) '^(public)$' COLLATE\npg_catalog.default\nORDER BY 1;\n**************************\n\n********* QUERY **********\nSELECT pubname\nFROM pg_catalog.pg_publication p\n JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n JOIN pg_catalog.pg_namespace n ON n.oid = pn.pnnspid\nWHERE n.nspname = 'public'\nORDER BY 1\n**************************\n\n List of schemas\n Name | Owner\n--------+-------------------\n public | pg_database_owner\n(1 row)", "msg_date": "Tue, 24 Oct 2023 01:09:48 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" }, { "msg_contents": "Hi\n\nút 24. 10. 2023 v 7:10 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n\n> On Wed, Jul 26, 2023 at 5:39 PM Nathan Bossart <nathandbossart@gmail.com>\n> wrote:\n>\n>> On Wed, Jul 26, 2023 at 08:06:37AM +0200, Pavel Stehule wrote:\n>> > st 26. 7. 2023 v 6:22 odesílatel Nathan Bossart <\n>> nathandbossart@gmail.com>\n>> > napsal:\n>> >> Barring additional feedback, I think this is ready for commit.\n>> >>\n>> >>\n>> > +1\n>>\n>> Great. 
I plan to commit\n>> this shortly.\n>>\n>> --\n>> Nathan Bossart\n>> Amazon Web Services: https://aws.amazon.com\n>\n>\n> Curious about this. I expected to see the comments? (is there a chance\n> that the translation piece is kicking in reverting them)?\n> (expecting / ********* QUERY **********/)\n>\n> 01:05:47 devuser@nctest= > \\echo :VERSION_NAME :VERSION_NUM\n> 16.0 (Ubuntu 16.0-1.pgdg22.04+1) 160000\n> 01:05:57 devuser@nctest= > \\dn public\n> ********* QUERY **********\n> SELECT n.nspname AS \"Name\",\n> pg_catalog.pg_get_userbyid(n.nspowner) AS \"Owner\"\n> FROM pg_catalog.pg_namespace n\n> WHERE n.nspname OPERATOR(pg_catalog.~) '^(public)$' COLLATE\n> pg_catalog.default\n> ORDER BY 1;\n> **************************\n>\n> ********* QUERY **********\n> SELECT pubname\n> FROM pg_catalog.pg_publication p\n> JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n> JOIN pg_catalog.pg_namespace n ON n.oid = pn.pnnspid\n> WHERE n.nspname = 'public'\n> ORDER BY 1\n> **************************\n>\n> List of schemas\n> Name | Owner\n> --------+-------------------\n> public | pg_database_owner\n> (1 row)\n>\n>\nIt is working in psql 17, not in psql 16\n\n(2023-10-24 07:14:35) postgres=# \\echo :VERSION_NAME :VERSION_NUM\n17devel 170000\n(2023-10-24 07:14:37) postgres=# \\l+\n/******** QUERY *********/\nSELECT\n d.datname as \"Name\",\n pg_catalog.pg_get_userbyid(d.datdba) as \"Owner\",\n pg_catalog.pg_encoding_to_char(d.encoding) as \"Encoding\",\n CASE d.datlocprovider WHEN 'c' THEN 'libc' WHEN 'i' THEN 'icu' END AS\n\"Locale Provider\",\n...\n\n\n\n\n>\n>\n>\n", "msg_date": "Tue, 24 Oct 2023 07:15:29 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Could we get \"-- \" prefixing on the **** QUERY ****\n outputs? (ECHO_HIDDEN)" } ]
[ { "msg_contents": "Hi hackers,\n\nPlease find attached a patch to $SUBJECT.\n\nThis is preliminary work to autogenerate some wait events\ncode and documentation done in [1].\n\nThe patch introduces 2 new \"wait events\" (WAIT_EVENT_EXTENSION\nand WAIT_EVENT_BUFFER_PIN) and their associated functions\n(pgstat_get_wait_extension() and pgstat_get_wait_bufferpin() resp.)\n\nPlease note that that would not break extensions outside contrib\nthat make use of the existing PG_WAIT_EXTENSION.\n\n[1]: https://www.postgresql.org/message-id/flat/77a86b3a-c4a8-5f5d-69b9-d70bbf2e9b98@gmail.com\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 15 May 2023 10:07:04 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "Hi,\n\nOn 2023-05-15 10:07:04 +0200, Drouvot, Bertrand wrote:\n> Please find attached a patch to $SUBJECT.\n> \n> This is preliminary work to autogenerate some wait events\n> code and documentation done in [1].\n> \n> The patch introduces 2 new \"wait events\" (WAIT_EVENT_EXTENSION\n> and WAIT_EVENT_BUFFER_PIN) and their associated functions\n> (pgstat_get_wait_extension() and pgstat_get_wait_bufferpin() resp.)\n> \n> Please note that that would not break extensions outside contrib\n> that make use of the existing PG_WAIT_EXTENSION.\n> \n> [1]: https://www.postgresql.org/message-id/flat/77a86b3a-c4a8-5f5d-69b9-d70bbf2e9b98@gmail.com\n> \n> Looking forward to your feedback,\n\nWithout an explanation for why this change is needed for [1], it's hard to\ngive useful feedback...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 May 2023 11:29:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and 
WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Mon, May 15, 2023 at 11:29:56AM -0700, Andres Freund wrote:\n> Without an explanation for why this change is needed for [1], it's hard to\n> give useful feedback...\n\nThe point is to integrate the wait event classes for buffer pin and\nextension into the txt file that automates the creation of the SGML\nand C code associated to them. Doing the refactoring of this patch\nhas two advantages:\n- Being able to easily organize the tables for each wait event type\nalphabetically, the same way as HEAD, for all wait event classes.\n- Minimizing the number of exception rules needed in the perl script\nthat transforms the txt file into SGML and the C code to list all the\nevents associated in a class, where one function is used for each wait\nevent class. Currently the wait event class extension does not use\nthat.\n\nThis impacts only the internal object names, not the wait event\nstrings or the class associated to them. So this does not change the\ncontents of pg_stat_activity.\n--\nMichael", "msg_date": "Tue, 16 May 2023 07:30:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "Hi,\n\nOn 2023-05-16 07:30:54 +0900, Michael Paquier wrote:\n> On Mon, May 15, 2023 at 11:29:56AM -0700, Andres Freund wrote:\n> > Without an explanation for why this change is needed for [1], it's hard to\n> > give useful feedback...\n> \n> The point is to integrate the wait event classes for buffer pin and\n> extension into the txt file that automates the creation of the SGML\n> and C code associated to them.\n\nIMO the submission should include why automating requires these changes (yours\ndoesn't really either). 
I can probably figure it out if I stare a bit at the\ncode and read the other thread - but I shouldn't need to.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 May 2023 17:17:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Mon, May 15, 2023 at 05:17:16PM -0700, Andres Freund wrote:\n> IMO the submission should include why automating requires these changes (yours\n> doesn't really either). I can probably figure it out if I stare a bit at the\n> code and read the other thread - but I shouldn't need to.\n\nHm? My previous message includes two reasons.. Anyway, I assume that\nmy previous message is also missing the explanation from the other\nthread that this is to translate a .txt file shaped similarly to\nerrcodes.txt for the wait events (sections as wait event classes,\nlisting the elements) into automatically-generated SGML and C code :)\n\nThe extensions and buffer pin parts need a few internal tweaks to make\nthe other changes much easier to do, which is what the patch of this\nthread is doing.\n--\nMichael", "msg_date": "Tue, 16 May 2023 09:38:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "Hi,\n\nOn 2023-05-16 09:38:54 +0900, Michael Paquier wrote:\n> On Mon, May 15, 2023 at 05:17:16PM -0700, Andres Freund wrote:\n> > IMO the submission should include why automating requires these changes (yours\n> > doesn't really either). I can probably figure it out if I stare a bit at the\n> > code and read the other thread - but I shouldn't need to.\n> \n> Hm? My previous message includes two reasons..\n\nIt explained the motivation, but not why that requires the specific\nchanges. 
At least not in a way that I could quickly undestand.\n\n\n> The extensions and buffer pin parts need a few internal tweaks to make\n> the other changes much easier to do, which is what the patch of this\n> thread is doing.\n\nWhy those tweaks are necessary is precisely what I am asking for.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 May 2023 18:01:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Mon, May 15, 2023 at 06:01:02PM -0700, Andres Freund wrote:\n> On 2023-05-16 09:38:54 +0900, Michael Paquier wrote:\n>> On Mon, May 15, 2023 at 05:17:16PM -0700, Andres Freund wrote:\n>>> IMO the submission should include why automating requires these changes (yours\n>>> doesn't really either). I can probably figure it out if I stare a bit at the\n>>> code and read the other thread - but I shouldn't need to.\n>>\n>> Hm? My previous message includes two reasons..\n>\n> It explained the motivation, but not why that requires the specific\n> changes. At least not in a way that I could quickly undestand.\n>\n>> The extensions and buffer pin parts need a few internal tweaks to make\n>> the other changes much easier to do, which is what the patch of this\n>> thread is doing.\n>\n> Why those tweaks are necessary is precisely what I am asking for.\n\nNot sure how to answer to that without copy-pasting some code, and\nthat's what is written in the patch. Anyway, I am used to the context\nof what's wanted, so here you go with more details. As a whole, the\npatch reshapes the code to be more consistent for all the wait event \nclasses, so as it is possible to generate the code refactored by the\npatch of this thread in a single way for all the classes.\n\nThe first change is related to the functions associated to each class,\nused to fetch a specific wait event. 
On HEAD, all the wait event classes\nuse a dedicated function (activity, IPC, etc.) like that:\n case PG_WAIT_BLAH:\n\t{\n WaitEventBlah w = (WaitEventBlah) wait_event_info;\n event_name = pgstat_get_wait_blah(w);\n break;\n\t}\n\nThere are two exceptions to that, the wait event classes for extension\nand buffer pin, that just do that because these classes have only one\nwait event currently: \n case PG_WAIT_EXTENSION:\n event_name = \"Extension\";\n break\n [...]\n case PG_WAIT_BUFFER_PIN:\n event_name = \"BufferPin\";\n break;\nThe first thing changed is to introduce two new functions for these\ntwo classes, to work the same way as the other classes. The script in\ncharge of generating the code from the wait event .txt file will just\nbuild the internals of these functions.\n\nThe second change is to rework the enum structures for extension and\nbuffer pin, to be consistent with the other classes, so as all the\nclasses have structures shaped like that:\ntypedef enum\n{\n WAIT_EVENT_1 = PG_WAIT_BLAH,\n [...]\n WAIT_EVENT_N\n} WaitEventBlah;\n\nThen the perl script generates the same structures for all the wait\nevent classes, with all the events associated to that. There may be a\npoint in keeping extension and buffer pin out of this refactoring, but\nreworking these like that moves in a single place all the wait event\ndefinitions, making future additions easier. Buffer pin actually\nneeded a small rename to stick with the naming rules of the other\nclasses.\n\nThese are the two things refactored in the patch, explaining the what.\nThe reason behind the why is to make the script in charge of\ngenerating all these structures and functions consistent for all the\nwait event classes, simply. Treating all the wait event classes \ntogether eases greatly the generation of the documentation, so that it\nis possible to enforce an ordering of the tables of each class used to\nlist each wait event type attached to them. 
Does this explanation\nmake sense?\n\nLock and LWLock are funkier because of the way they grab wait events\nfor the inner function, still these had better have their\ndocumentation generated so as all the SGML tables created for all the\nwait event tables are ordered alphabetically, in the same way as\nHEAD.\n--\nMichael", "msg_date": "Tue, 16 May 2023 14:14:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Tue, May 16, 2023 at 1:14 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, May 15, 2023 at 06:01:02PM -0700, Andres Freund wrote:\n> > Why those tweaks are necessary is precisely what I am asking for.\n>\n> Then the perl script generates the same structures for all the wait\n> event classes, with all the events associated to that. There may be a\n> point in keeping extension and buffer pin out of this refactoring, but\n> reworking these like that moves in a single place all the wait event\n> definitions, making future additions easier. Buffer pin actually\n> needed a small rename to stick with the naming rules of the other\n> classes.\n>\n+1 (Nice explanation, Improving things, Consistency. Always good)\n\n>\n> These are the two things refactored in the patch, explaining the what.\n> The reason behind the why is to make the script in charge of\n> generating all these structures and functions consistent for all the\n> wait event classes, simply. Treating all the wait event classes\n> together eases greatly the generation of the documentation, so that it\n> is possible to enforce an ordering of the tables of each class used to\n> list each wait event type attached to them. Does this explanation\n> make sense?\n>\n+1 (To me, but I am not important. 
But having this saved in the thread!!!)\n\n>\n> Lock and LWLock are funkier because of the way they grab wait events\n> for the inner function, still these had better have their\n> documentation generated so as all the SGML tables created for all the\n> wait event tables are ordered alphabetically, in the same way as\n> HEAD.\n> --\n> Michael\n>\n+1 (Whatever helps automate/generate the docs)\n\nKirk...", "msg_date": "Tue, 16 May 2023 01:38:32 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "Hi,\n\nOn 5/16/23 7:14 AM, Michael Paquier wrote:\n> On Mon, May 15, 2023 at 06:01:02PM -0700, Andres Freund wrote:\n>> On 2023-05-16 09:38:54 +0900, Michael Paquier wrote:\n>>> On Mon, May 15, 2023 at 05:17:16PM -0700, Andres Freund wrote:\n\n> These are the two things refactored in the patch, explaining the what.\n> The reason behind the why is to make the script in charge of\n> generating all these structures and functions consistent for all the\n> wait event classes, simply. 
Treating all the wait event classes\n> together eases greatly the generation of the documentation, so that it\n> is possible to enforce an ordering of the tables of each class used to\n> list each wait event type attached to them.\n\nRight, it does \"fix\" the ordering issue (for BufferPin and Extension)\nthat I've described in the patch introduction in [1]:\n\n\"\n so that PG_WAIT_LWLOCK, PG_WAIT_LOCK, PG_WAIT_BUFFER_PIN and PG_WAIT_EXTENSION are not autogenerated.\n\n\nThis result to having the wait event part of the documentation \"monitoring-stats\" not ordered as compared to the \"Wait Event Types\" Table.\n.\n.\n.\n\"\n\nThanks Michael for having provided this detailed explanation (my patch\nintroduction clearly was missing some context as Andres pointed out).\n\n[1]: https://www.postgresql.org/message-id/77a86b3a-c4a8-5f5d-69b9-d70bbf2e9b98%40gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 May 2023 08:10:20 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Mon, May 15, 2023 at 10:07:04AM +0200, Drouvot, Bertrand wrote:\n> This is preliminary work to autogenerate some wait events\n> code and documentation done in [1].\n> \n> The patch introduces 2 new \"wait events\" (WAIT_EVENT_EXTENSION\n> and WAIT_EVENT_BUFFER_PIN) and their associated functions\n> (pgstat_get_wait_extension() and pgstat_get_wait_bufferpin() resp.)\n> \n> Please note that that would not break extensions outside contrib\n> that make use of the existing PG_WAIT_EXTENSION.\n\nI have looked at this one, and I think that's OK for what you are\naiming at here (in addition to my previous message that I hope\nprovides enough context about the whys and the hows).\n--\nMichael", "msg_date": "Tue, 16 May 2023 15:16:51 +0900", 
"msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "Hi,\n\nOn 5/16/23 8:16 AM, Michael Paquier wrote:\n> On Mon, May 15, 2023 at 10:07:04AM +0200, Drouvot, Bertrand wrote:\n>> This is preliminary work to autogenerate some wait events\n>> code and documentation done in [1].\n>>\n>> The patch introduces 2 new \"wait events\" (WAIT_EVENT_EXTENSION\n>> and WAIT_EVENT_BUFFER_PIN) and their associated functions\n>> (pgstat_get_wait_extension() and pgstat_get_wait_bufferpin() resp.)\n>>\n>> Please note that that would not break extensions outside contrib\n>> that make use of the existing PG_WAIT_EXTENSION.\n> \n> I have looked at this one, and I think that's OK for what you are\n> aiming at here (in addition to my previous message that I hope\n> provides enough context about the whys and the hows).\n\nThanks!\n\nPlease find V2 attached, it adds WaitEventBufferPin and WaitEventExtension to\nsrc/tools/pgindent/typedefs.list (that was done in [1]...).\n\n[1]: https://www.postgresql.org/message-id/flat/77a86b3a-c4a8-5f5d-69b9-d70bbf2e9b98%40gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 17 May 2023 08:29:42 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Tue, May 16, 2023 at 1:14 AM Michael Paquier <michael@paquier.xyz> wrote:\n> These are the two things refactored in the patch, explaining the what.\n> The reason behind the why is to make the script in charge of\n> generating all these structures and functions consistent for all the\n> wait event classes, simply. 
Treating all the wait event classes\n> together eases greatly the generation of the documentation, so that it\n> is possible to enforce an ordering of the tables of each class used to\n> list each wait event type attached to them. Does this explanation\n> make sense?\n\nNot really. At least not to me. Changing the code in dblink to use\nWAIT_EVENT_EXTENSION instead of PG_WAIT_EXTENSION doesn't help you\nautomatically generate documentation in any way.\n\nIt seems to me that your automatic generation code might need a\nspecial case for wait event types that contain only a single wait\nevent. But that doesn't seem like a bad thing to have. Adding\npgstat_get_wait_extension adds runtime cost for no corresponding\nbenefit. Having a special case in the code to avoid that seems\nworthwhile.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 May 2023 09:22:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Wed, May 17, 2023 at 09:22:19AM -0400, Robert Haas wrote:\n> It seems to me that your automatic generation code might need a\n> special case for wait event types that contain only a single wait\n> event. But that doesn't seem like a bad thing to have. Adding\n> pgstat_get_wait_extension adds runtime cost for no corresponding\n> benefit. Having a special case in the code to avoid that seems\n> worthwhile.\n\nOkay. We are going to need an approach similar to what's done for\nsrc/backend/nodes where two things are generated in order to be able\nto have some of the wait event classes be treated as exceptions in the\nswitch calling each function (pgstat_get_wait_event). I'd assume: \n- Create the code calling the functions automatically, say in a\nwait_event_type.switch.c or something like that. 
If a class has one\nsingle element, generate the code from it.\n- Create a second file with the functions and their internals, as the\npatch does now (like wait_event_type.funcs.c?), discarding classes\nwith single elements.\n- Skip the creation of the enum structures for single-element classes,\nas well.\n\nStill it looks like the renaming of BufferPin would need to remain\naround to ease a bit the work of the script. Bertrand, what do you\nthink?\n--\nMichael", "msg_date": "Thu, 18 May 2023 07:48:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Thu, May 18, 2023 at 07:48:26AM +0900, Michael Paquier wrote:\n> Okay. We are going to need an approach similar to what's done for\n> src/backend/nodes where two things are generated in order to be able\n> to have some of the wait event classes be treated as exceptions in the\n> switch calling each function (pgstat_get_wait_event). I'd assume: \n> - Create the code calling the functions automatically, say in a\n> wait_event_type.switch.c or something like that. If a class has one\n> single element, generate the code from it.\n> - Create a second file with the functions and their internals, as the\n> patch does now (like wait_event_type.funcs.c?), discarding classes\n> with single elements.\n> - Skip the creation of the enum structures for single-element classes,\n> as well.\n\nOn top of that, why don't we just apply some inlining to all the\npgstat_get_wait_*() functions? If we do that, even the existing\nfunctions could see a benefit on top of the ones associated to classes\nwith single elements. 
Inlining may not be always applied depending on\nwhat the compiler wants, of course..\n--\nMichael", "msg_date": "Thu, 18 May 2023 08:28:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On 2023-05-17 09:22:19 -0400, Robert Haas wrote:\n> Adding pgstat_get_wait_extension adds runtime cost for no corresponding\n> benefit. Having a special case in the code to avoid that seems worthwhile.\n\nI don't think that should ever be used in a path where performance is\nrelevant?\n\n\n", "msg_date": "Wed, 17 May 2023 16:38:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Wed, May 17, 2023 at 7:38 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-05-17 09:22:19 -0400, Robert Haas wrote:\n> > Adding pgstat_get_wait_extension adds runtime cost for no corresponding\n> > benefit. Having a special case in the code to avoid that seems worthwhile.\n>\n> I don't think that should ever be used in a path where performance is\n> relevant?\n\nI mean, I agree that it would probably be hard to measure any real\nperformance difference. But I'm not sure that's a good reason to add\ncycles to a path where we don't really need to.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 May 2023 12:28:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Thu, May 18, 2023 at 12:28:20PM -0400, Robert Haas wrote:\n> I mean, I agree that it would probably be hard to measure any real\n> performance difference. 
But I'm not sure that's a good reason to add\n> cycles to a path where we don't really need to.\n\nHonestly, I am not sure that it's worth worrying about performance\nhere, or perhaps you know of some external stuff that could set the\nextension class type in a code path hot enough that it would matter.\nAnyway, why couldn't we make all these functions static inline\ninstead, then?\n--\nMichael", "msg_date": "Fri, 19 May 2023 07:36:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "Hi,\n\nOn 5/19/23 12:36 AM, Michael Paquier wrote:\n> On Thu, May 18, 2023 at 12:28:20PM -0400, Robert Haas wrote:\n>> I mean, I agree that it would probably be hard to measure any real\n>> performance difference. But I'm not sure that's a good reason to add\n>> cycles to a path where we don't really need to.\n> \n> Honestly, I am not sure that it's worth worrying about performance\n> here,\n\nSame feeling here and as those new functions will be used \"only\" from\npg_stat_get_activity() / pg_stat_get_backend_wait_event().\n\n> or perhaps you know of some external stuff that could set the\n> extension class type in a code path hot enough that it would matter.\n\nAnd that would matter, only when making use of pg_stat_get_activity()\n/ pg_stat_get_backend_wait_event() at the time the \"extension is waiting\"\non this wait event.\n\nWhile at it, I think that making use of an enum might also be an open door\n(need to think more about it) to allow extensions to set their own wait event.\nSomething like RequestNamedLWLockTranche()/GetNamedLWLockTranche() are doing.\n\nCurrently we have \"only\" the \"extension\" wait event which is not that useful when\nthere is multiples extensions installed in a database.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", 
"msg_date": "Fri, 19 May 2023 09:48:10 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "Hi,\n\nOn 2023-05-19 16:48, Drouvot, Bertrand wrote:\n> While at it, I think that making use of an enum might also be an open \n> door\n> (need to think more about it) to allow extensions to set their own wait \n> event.\n> Something like RequestNamedLWLockTranche()/GetNamedLWLockTranche() are \n> doing.\n> \n> Currently we have \"only\" the \"extension\" wait event which is not that\n> useful when\n> there is multiples extensions installed in a database.\n\n(Excuse me for cutting in, and this is not directly related to the \nthread.)\n+1. I'm interested in the feature.\n\nRecently, I encountered a case where it would be nice if\ndifferent wait events were output for each extension.\n\nI tested a combination of two extensions, postgres_fdw and neon[1],\nand they output the \"Extension\" wait event, but it wasn't immediately \nclear\nwhich one was the bottleneck.\n\nThis is just a example and it probable be useful for other users. IMO, \nat least,\nit's better to improve the specification that \"Extension\" wait event \ntype has\nonly the \"Extension\" wait event.\n\n[1] https://github.com/neondatabase/neon\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 08 Jun 2023 10:57:55 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Thu, Jun 08, 2023 at 10:57:55AM +0900, Masahiro Ikeda wrote:\n> (Excuse me for cutting in, and this is not directly related to the thread.)\n> +1. I'm interested in the feature.\n> \n> This is just a example and it probable be useful for other users. 
IMO, at\n> least, it's better to improve the specification that \"Extension\"\n> wait event type has only the \"Extension\" wait event.\n\nI hope that nobody would counter-argue you here. In my opinion, we\nshould just introduce an API that allows extensions to retrieve wait\nevent numbers that are allocated by the backend under\nPG_WAIT_EXTENSION, in a fashion similar to GetNamedLWLockTranche().\nSay something like:\nint GetExtensionWaitEvent(const char *wait_event_name);\n\nI don't quite see a design where extensions could rely on their own\nnumbers statically assigned by the extension maintainers, as this is\nmost likely going to cause conflicts. And I would guess that a lot of\nexternal code would want to get more data pushed to pg_stat_activity,\nmeaning a lot of conflicts, potentially.\n--\nMichael", "msg_date": "Fri, 9 Jun 2023 08:15:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "Hi,\n\nOn 6/9/23 1:15 AM, Michael Paquier wrote:\n> On Thu, Jun 08, 2023 at 10:57:55AM +0900, Masahiro Ikeda wrote:\n>> (Excuse me for cutting in, and this is not directly related to the thread.)\n>> +1. I'm interested in the feature.\n>>\n>> This is just a example and it probable be useful for other users. IMO, at\n>> least, it's better to improve the specification that \"Extension\"\n>> wait event type has only the \"Extension\" wait event.\n> \n> I hope that nobody would counter-argue you here. 
In my opinion, we\n> should just introduce an API that allows extensions to retrieve wait\n> event numbers that are allocated by the backend under\n> PG_WAIT_EXTENSION, in a fashion similar to GetNamedLWLockTranche().\n> Say something like:\n> int GetExtensionWaitEvent(const char *wait_event_name);\n\n+1, that's something I've in mind to work on once/if this patch is/get\ncommitted.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 9 Jun 2023 06:26:07 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Fri, Jun 09, 2023 at 06:26:07AM +0200, Drouvot, Bertrand wrote:\n> +1, that's something I've in mind to work on once/if this patch is/get\n> committed.\n\nFWIW, I'm OK with the patch, without the need to worry about the\nperformance. Even if that's the case, we could just mark all these as\ninline and move on..\n--\nMichael", "msg_date": "Fri, 9 Jun 2023 14:58:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "Hi,\n\nOn 2023-06-09 13:26, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 6/9/23 1:15 AM, Michael Paquier wrote:\n>> On Thu, Jun 08, 2023 at 10:57:55AM +0900, Masahiro Ikeda wrote:\n>>> (Excuse me for cutting in, and this is not directly related to the \n>>> thread.)\n>>> +1. I'm interested in the feature.\n>>> \n>>> This is just a example and it probable be useful for other users. \n>>> IMO, at\n>>> least, it's better to improve the specification that \"Extension\"\n>>> wait event type has only the \"Extension\" wait event.\n>> \n>> I hope that nobody would counter-argue you here. 
In my opinion, we\n>> should just introduce an API that allows extensions to retrieve wait\n>> event numbers that are allocated by the backend under\n>> PG_WAIT_EXTENSION, in a fashion similar to GetNamedLWLockTranche().\n>> Say something like:\n>> int GetExtensionWaitEvent(const char *wait_event_name);\n> \n> +1, that's something I've in mind to work on once/if this patch is/get\n> committed.\n\nThanks for replying. If you are ok, I'll try to make a patch\nto allow extensions to define custom wait events.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 09 Jun 2023 18:20:09 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "Hi,\n\nOn 6/9/23 11:20 AM, Masahiro Ikeda wrote:\n> Hi,\n> \n> On 2023-06-09 13:26, Drouvot, Bertrand wrote:\n>> Hi,\n>>\n>> On 6/9/23 1:15 AM, Michael Paquier wrote:\n>>> On Thu, Jun 08, 2023 at 10:57:55AM +0900, Masahiro Ikeda wrote:\n>>>> (Excuse me for cutting in, and this is not directly related to the thread.)\n>>>> +1. I'm interested in the feature.\n>>>>\n>>>> This is just a example and it probable be useful for other users. IMO, at\n>>>> least, it's better to improve the specification that \"Extension\"\n>>>> wait event type has only the \"Extension\" wait event.\n>>>\n>>> I hope that nobody would counter-argue you here.  In my opinion, we\n>>> should just introduce an API that allows extensions to retrieve wait\n>>> event numbers that are allocated by the backend under\n>>> PG_WAIT_EXTENSION, in a fashion similar to GetNamedLWLockTranche().\n>>> Say something like:\n>>> int GetExtensionWaitEvent(const char *wait_event_name);\n>>\n>> +1, that's something I've in mind to work on once/if this patch is/get\n>> committed.\n> \n> Thanks for replying. 
If you are ok, I'll try to make a patch\n> to allow extensions to define custom wait events.\n\nGreat, thank you!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 9 Jun 2023 12:11:30 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" }, { "msg_contents": "On Fri, Jun 09, 2023 at 02:58:42PM +0900, Michael Paquier wrote:\n> FWIW, I'm OK with the patch, without the need to worry about the\n> performance. Even if that's the case, we could just mark all these as\n> inline and move on..\n\nI am attempting to get all that done for the beginning of the\ndevelopment cycle, so the first refactoring patch has been applied.\nNow into the second, main one..\n--\nMichael", "msg_date": "Mon, 3 Jul 2023 13:18:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN" } ]
[ { "msg_contents": "Hello, hackers!\n\nWhen running tests for version 15, we found a conflict between \nregression tests namespace & transactions due to recent changes [1].\n\ndiff -w -U3 .../src/test/regress/expected/transactions.out \n.../src/bin/pg_upgrade/tmp_check/results/transactions.out\n--- .../src/test/regress/expected/transactions.out ...\n+++ .../src/bin/pg_upgrade/tmp_check/results/transactions.out ...\n@@ -899,6 +899,9 @@\n\n RESET default_transaction_read_only;\n DROP TABLE abc;\n+ERROR: cannot drop table abc because other objects depend on it\n+DETAIL: view test_ns_schema_2.abc_view depends on table abc\n+HINT: Use DROP ... CASCADE to drop the dependent objects too.\n -- Test assorted behaviors around the implicit transaction block \ncreated\n -- when multiple SQL commands are sent in a single Query message. \nThese\n -- tests rely on the fact that psql will not break SQL commands apart \nat a\n...\n\nIIUC the conflict was caused by\n\n+SET search_path to public, test_ns_schema_1;\n+CREATE SCHEMA test_ns_schema_2\n+ CREATE VIEW abc_view AS SELECT a FROM abc;\n\nbecause the parallel regression test transactions had already created \nthe table abc and was trying to drop it.\n\nISTM the patch diff.patch fixes this problem...\n\n[1] \nhttps://github.com/postgres/postgres/commit/dbd5795e7539ec9e15c0d4ed2d05b1b18d2a3b09\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 15 May 2023 18:27:29 +0300", "msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Conflict between regression tests namespace & transactions due to\n recent changes" }, { "msg_contents": "Marina Polyakova <m.polyakova@postgrespro.ru> writes:\n> IIUC the conflict was caused by\n\n> +SET search_path to public, test_ns_schema_1;\n> +CREATE SCHEMA test_ns_schema_2\n> + CREATE VIEW abc_view AS SELECT a FROM abc;\n\n> because the parallel regression test transactions had 
already created \n> the table abc and was trying to drop it.\n\nHmm. I'd actually fix the blame on transactions.sql here. Creating\na table named as generically as \"abc\" is horribly bad practice in\na set of concurrent tests. namespace.sql is arguably okay, since\nit's creating that table name in a private schema.\n\nI'd be inclined to fix this by doing s/abc/something-else/g in\ntransactions.sql.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 May 2023 12:16:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Conflict between regression tests namespace & transactions due to\n recent changes" }, { "msg_contents": "On 2023-05-15 19:16, Tom Lane wrote:\n> Marina Polyakova <m.polyakova@postgrespro.ru> writes:\n>> IIUC the conflict was caused by\n> \n>> +SET search_path to public, test_ns_schema_1;\n>> +CREATE SCHEMA test_ns_schema_2\n>> + CREATE VIEW abc_view AS SELECT a FROM abc;\n> \n>> because the parallel regression test transactions had already created\n>> the table abc and was trying to drop it.\n> \n> Hmm. I'd actually fix the blame on transactions.sql here. Creating\n> a table named as generically as \"abc\" is horribly bad practice in\n> a set of concurrent tests. namespace.sql is arguably okay, since\n> it's creating that table name in a private schema.\n> \n> I'd be inclined to fix this by doing s/abc/something-else/g in\n> transactions.sql.\n> \n> \t\t\tregards, tom lane\n\nMaybe use a separate schema for all new objects in the transaction \ntest?.. 
See diff_set_tx_schema.patch.\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 15 May 2023 23:23:18 +0300", "msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Conflict between regression tests namespace & transactions due to\n recent changes" }, { "msg_contents": "On Mon, May 15, 2023 at 11:23:18PM +0300, Marina Polyakova wrote:\n> On 2023-05-15 19:16, Tom Lane wrote:\n>> Hmm. I'd actually fix the blame on transactions.sql here. Creating\n>> a table named as generically as \"abc\" is horribly bad practice in\n>> a set of concurrent tests. namespace.sql is arguably okay, since\n>> it's creating that table name in a private schema.\n>> \n>> I'd be inclined to fix this by doing s/abc/something-else/g in\n>> transactions.sql.\n>\n> Maybe use a separate schema for all new objects in the transaction test?..\n> See diff_set_tx_schema.patch.\n\nSure, you could do that to bypass the failure (without the \"public\"\nactually?), leaving non-generic names around. Still I'd agree with\nTom here and just rename the objects to something more in line with\nthe context of the test to make things a bit more greppable. 
These\ncould be renamed as transaction_tab or transaction_view, for example.\n--\nMichael", "msg_date": "Tue, 16 May 2023 08:19:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Conflict between regression tests namespace & transactions due\n to recent changes" }, { "msg_contents": "On Mon, May 15, 2023 at 12:16:08PM -0400, Tom Lane wrote:\n> Marina Polyakova <m.polyakova@postgrespro.ru> writes:\n> > IIUC the conflict was caused by\n> \n> > +SET search_path to public, test_ns_schema_1;\n> > +CREATE SCHEMA test_ns_schema_2\n> > + CREATE VIEW abc_view AS SELECT a FROM abc;\n> \n> > because the parallel regression test transactions had already created \n> > the table abc and was trying to drop it.\n> \n> Hmm. I'd actually fix the blame on transactions.sql here. Creating\n> a table named as generically as \"abc\" is horribly bad practice in\n> a set of concurrent tests. namespace.sql is arguably okay, since\n> it's creating that table name in a private schema.\n> \n> I'd be inclined to fix this by doing s/abc/something-else/g in\n> transactions.sql.\n\nFor the record, I'm fairly sure s/public, test_ns_schema_1/test_ns_schema_1/\non the new namespace tests would also solve things. Those tests don't need\n\"public\" in the picture. Nonetheless, +1 for your proposal.\n\n\n", "msg_date": "Mon, 15 May 2023 22:03:11 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Conflict between regression tests namespace & transactions due\n to recent changes" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> For the record, I'm fairly sure s/public, test_ns_schema_1/test_ns_schema_1/\n> on the new namespace tests would also solve things. Those tests don't need\n> \"public\" in the picture. Nonetheless, +1 for your proposal.\n\nHmm, I'd not read the test case all that closely, but I did think\nthat including \"public\" in the search path was an important part\nof it. 
If it is not, maybe the comments could use adjustment.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 May 2023 01:19:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Conflict between regression tests namespace & transactions due to\n recent changes" }, { "msg_contents": "On 2023-05-16 02:19, Michael Paquier wrote:\n> On Mon, May 15, 2023 at 11:23:18PM +0300, Marina Polyakova wrote:\n>> Maybe use a separate schema for all new objects in the transaction \n>> test?..\n>> See diff_set_tx_schema.patch.\n> \n> Sure, you could do that to bypass the failure (without the \"public\"\n> actually?), leaving non-generic names around. Still I'd agree with\n> Tom here and just rename the objects to something more in line with\n> the context of the test to make things a bit more greppable. These\n> could be renamed as transaction_tab or transaction_view, for example.\n> --\n> Michael\n\nIt confuses me a little that different methods are used for the same \npurpose. But the namespace test checks schemas. So see \ndiff_abc_to_txn_table.patch which replaces abc with txn_table in the \ntransaction test.\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 16 May 2023 11:02:45 +0300", "msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Conflict between regression tests namespace & transactions due to\n recent changes" }, { "msg_contents": "On Tue, May 16, 2023 at 11:02:45AM +0300, Marina Polyakova wrote:\n> It confuses me a little that different methods are used for the same\n> purpose. But the namespace test checks schemas. So see\n> diff_abc_to_txn_table.patch which replaces abc with txn_table in the\n> transaction test.\n\nLooks OK seen from here. 
Thanks!\n--\nMichael", "msg_date": "Wed, 17 May 2023 14:39:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Conflict between regression tests namespace & transactions due\n to recent changes" }, { "msg_contents": "On Wed, May 17, 2023 at 02:39:10PM +0900, Michael Paquier wrote:\n> On Tue, May 16, 2023 at 11:02:45AM +0300, Marina Polyakova wrote:\n>> It confuses me a little that different methods are used for the same\n>> purpose. But the namespace test checks schemas. So see\n>> diff_abc_to_txn_table.patch which replaces abc with txn_table in the\n>> transaction test.\n> \n> Looks OK seen from here. Thanks!\n\nFYI, the buildfarm is seeing some spurious failures as well:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=schnauzer&dt=2023-05-19 04%3A29%3A42\n--\nMichael", "msg_date": "Fri, 19 May 2023 15:03:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Conflict between regression tests namespace & transactions due\n to recent changes" }, { "msg_contents": "On 2023-05-19 09:03, Michael Paquier wrote:\n> On Wed, May 17, 2023 at 02:39:10PM +0900, Michael Paquier wrote:\n>> On Tue, May 16, 2023 at 11:02:45AM +0300, Marina Polyakova wrote:\n>>> It confuses me a little that different methods are used for the same\n>>> purpose. But the namespace test checks schemas. So see\n>>> diff_abc_to_txn_table.patch which replaces abc with txn_table in the\n>>> transaction test.\n>> \n>> Looks OK seen from here. Thanks!\n> \n> FYI, the buildfarm is seeing some spurious failures as well:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=schnauzer&dt=2023-05-19\n> 04%3A29%3A42\n> --\n> Michael\n\nYes, it is the same error. 
Here's another one in version 13:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=schnauzer&dt=2023-05-18%2022:37:49\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Fri, 19 May 2023 09:39:39 +0300", "msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Conflict between regression tests namespace & transactions due to\n recent changes" }, { "msg_contents": "Marina Polyakova <m.polyakova@postgrespro.ru> writes:\n> On 2023-05-19 09:03, Michael Paquier wrote:\n>> FYI, the buildfarm is seeing some spurious failures as well:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=schnauzer&dt=2023-05-1904%3A29%3A42\n\n> Yes, it is the same error. Here's another one in version 13:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=schnauzer&dt=2023-05-18%2022:37:49\n\nRight. 
I went ahead and pushed the fix in hopes of stabilizing things.\n> (I went with \"trans_abc\" as the new table name, for consistency with\n> some other nearby names.)\n> \n> \t\t\tregards, tom lane\n\nThank you! I missed the same changes in version 11 :(\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Fri, 19 May 2023 19:34:13 +0300", "msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Conflict between regression tests namespace & transactions due to\n recent changes" } ]
[ { "msg_contents": "The pginfra team is about to begin an upgrade of git.postgresql.org to\na new version of debian. During the operation there may be some\nintermittent outages -- we will let you know once the upgrade is\ncomplete.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 15 May 2023 21:37:47 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Upgrade of git.postgresql.org" }, { "msg_contents": "On Mon, May 15, 2023 at 9:37 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> The pginfra team is about to begin an upgrade of git.postgresql.org to\n> a new version of debian. During the operation there may be some\n> intermittent outages -- we will let you know once the upgrade is\n> complete.\n\nThis upgrade has now been completed. If you see any issues with it\nfrom now on, please let us know and we'll look into it!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 15 May 2023 22:04:53 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: Upgrade of git.postgresql.org" } ]
[ { "msg_contents": "Currently, I believe that we are frowning on writing things directly to\nPublic by default.\n\nAlso, we had already taken that step in our systems. Furthermore we force\nPublic to be the first Schema, so this restriction enforces that everything\ncreated must be assigned to the appropriate schema.\n\nThat could make us a bit more sensitive to how pgbench operates. But this\nbecomes an opportunity to sharpen a tool.\n\nBut logging in to our regular users it has no ability to create it's tables\nand do it's work. At the same time, we want exactly this login, because we\nwant to benchmark things in our application.\n\nOur lives would be simplified if there was a simple way to specify a schema\nfor pgbench to use. It would always reference that schema, and it would\nnot be in the path. Making it operate \"just like our apps\" while benching\nmarking our items.\n\nWhile we are here, SHOULD we consider having pgbench have a default schema\nname it creates and then cleans up? And then if someone wants to override\nthat, they can?\n\nKirk...", "msg_date": "Mon, 15 May 2023 21:35:54 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "pgbench: can we add a way to specify the schema to write to?" } ]
[ { "msg_contents": "Hi Hackers,\n\nI noticed some confusing behavior with REVOKE recently. Normally if\nREVOKE fails to revoke anything a warning is printed. For example, see\nthe following scenario:\n\n```\ntest=# SELECT current_role;\n current_role\n--------------\n joe\n(1 row)\n\ntest=# CREATE ROLE r1;\nCREATE ROLE\ntest=# CREATE TABLE t ();\nCREATE TABLE\ntest=# GRANT SELECT ON TABLE t TO r1;\nGRANT\ntest=# SET ROLE r1;\nSET\ntest=> REVOKE SELECT ON TABLE t FROM r1;\nWARNING: no privileges could be revoked for \"t\"\nWARNING: no privileges could be revoked for column \"tableoid\" of relation\n\"t\"\nWARNING: no privileges could be revoked for column \"cmax\" of relation \"t\"\nWARNING: no privileges could be revoked for column \"xmax\" of relation \"t\"\nWARNING: no privileges could be revoked for column \"cmin\" of relation \"t\"\nWARNING: no privileges could be revoked for column \"xmin\" of relation \"t\"\nWARNING: no privileges could be revoked for column \"ctid\" of relation \"t\"\nREVOKE\ntest=> SELECT relacl FROM pg_class WHERE relname = 't';\n relacl\n-----------------------------\n {joe=arwdDxtm/joe,r1=r/joe}\n(1 row)\n\n```\n\nHowever, if the REVOKE fails and the revoker has a grant option on the\nprivilege, then no warning is emitted. For example, see the following\nscenario:\n\n```\ntest=# SELECT current_role;\n current_role\n--------------\n joe\n(1 row)\n\ntest=# CREATE ROLE r1;\nCREATE ROLE\ntest=# CREATE TABLE t ();\nCREATE TABLE\ntest=# GRANT SELECT ON TABLE t TO r1 WITH GRANT OPTION;\nGRANT\ntest=# SET ROLE r1;\nSET\ntest=> REVOKE SELECT ON TABLE t FROM r1;\nREVOKE\ntest=> SELECT relacl FROM pg_class WHERE relname = 't';\n relacl\n------------------------------\n {joe=arwdDxtm/joe,r1=r*/joe}\n(1 row)\n\n```\nThe warnings come from restrict_and_check_grant() in aclchk.c. 
The\npseudo code is\n\n if (revoked_privileges & available_grant_options == 0)\n emit_warning()\n\nIn the second example, `r1` does have the proper grant options so no\nwarning is emitted. However, the revoke has no actual effect.\n\nReading through the docs [0], I'm not actually sure if the REVOKE\nin the second example should succeed or not. At first it says:\n\n> A user can only revoke privileges that were granted directly by that\n> user. If, for example, user A has granted a privilege with grant\n> option to user B, and user B has in turn granted it to user C, then\n> user A cannot revoke the privilege directly from C.\n\nWhich seems pretty clear that you can only revoke privileges that you\ndirectly granted. However later on it says:\n\n> As long as some privilege is available, the command will proceed, but\n> it will revoke only those privileges for which the user has grant\n> options.\n...\n> while the other forms will issue a warning if grant options for any\n> of the privileges specifically named in the command are not held.\n\nWhich seems to imply that you can revoke a privilege as long as you\nhave a grant option on that privilege.\n\nEither way I think the REVOKE should either fail and emit a warning\nOR succeed and emit no warning.\n\nI wasn't able to locate where the check for\n> A user can only revoke privileges that were granted directly by that\n> user.\nis in the code, but we should probably just add a warning there.\n\n- Joe Koshakow\n\n[0] https://www.postgresql.org/docs/15/sql-revoke.html\n\n", "msg_date": "Mon, 15 May 2023 23:23:22 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Missing warning on revokes with grant options" }, { "msg_contents": "On Mon, May 15, 2023 at 11:23:22PM -0400, Joseph Koshakow wrote:\n> Reading through the docs [0], I'm not actually sure if the REVOKE\n> in the second example should succeed or not. At first it says:\n> \n>> A user can only revoke privileges that were granted directly by that\n>> user. If, for example, user A has granted a privilege with grant\n>> option to user B, and user B has in turn granted it to user C, then\n>> user A cannot revoke the privilege directly from C.\n> \n> Which seems pretty clear that you can only revoke privileges that you\n> directly granted. 
However later on it says:\n> \n>> As long as some privilege is available, the command will proceed, but\n>> it will revoke only those privileges for which the user has grant\n>> options.\n> ...\n>> while the other forms will issue a warning if grant options for any\n>> of the privileges specifically named in the command are not held.\n> \n> Which seems to imply that you can revoke a privilege as long as you\n> have a grant option on that privilege.\n\nI believe the \"can only revoke privileges that were granted directly by\nthat user\" rule still applies. However, I can see how the section about\nnon-owners attempting to revoke privileges might cause confusion about\nthis. The text in question has been around since 2004 (4b2dafc) and might\nbe worth revisiting.\n\nIMO the most confusing part is that the warnings won't appear if you have\nthe grant option on the privilege in question but aren't the grantor. My\n(possibly naive) expectation would be that you'd see warnings when a\nprivilege cannot be revoked because you are not the grantor.\n\n> Either way I think the REVOKE should either fail and emit a warning\n> OR succeed and emit no warning.\n\nThe thread for the aforementioned change [0] mentions the standard quite a\nbit, which might explain the current behavior.\n\n> I wasn't able to locate where the check for\n>> A user can only revoke privileges that were granted directly by that\n>> user.\n> is in the code, but we should probably just add a warning there.\n\nI'm not certain, but I suspect the calls to aclupdate() in\nmerge_acl_with_grant() take care of this because the grantors will never\nmatch.\n\n[0] https://postgr.es/m/20040511091816.E9887CF519E%40www.postgresql.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 17 May 2023 20:48:44 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missing warning on revokes with grant options" }, { "msg_contents": 
"On Wed, May 17, 2023 at 11:48 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n>\n> The thread for the aforementioned change [0] mentions the standard\nquite a\n> bit, which might explain the current behavior.\n\nI went through that thread and the quoted parts of the SQL standard. It\nseems clear that if a user tries to REVOKE some privilege and they\ndon't have a grant option on that privilege, then a warning should be\nissued. There was some additional discussion on when there should be\nan error vs a warning, but I don't think it's that relevant to this\ndiscussion. However, I was not able to find any discussion about the\nrestriction that a revoker can only revoke privileges that they granted\nthemselves.\n\nThe restriction was added to PostgreSQL at the same time as GRANT\nOPTIONs were introduced. The commit [0] and mailing thread [1] don't\nprovide much details on this specific restriction.\n\nThe SQL99 standard for REVOKE is very dense and I may have\nmisunderstood parts, but here's my interpretation of how this\nrestriction might come from the standard and what it says about issuing\na warning (section 12.6).\n\nLet's start with the Syntax Rules:\n\n 1) Let O be the object identified by the <object name> contained in\n<privileges>\n\nIn my example O is the table t.\n\n 3) Let U be the current user identifier and R be the current role name.\n 4) Case:\n a) If GRANTED BY <grantor> is not specified, then\n Case:\n i) If U is not the null value, then let A be U.\n ii) Otherwise, let A be R.\n\nIn my example A is the role r1.\n\n 9) Case:\n a) If the <revoke statement> is a <revoke privileges statement>, then\nfor every <grantee>\n specified, a set of privilege descriptors is identified. 
A\nprivilege descriptor P is said to be\n identified if it belongs to the set of privilege descriptors that\ndefined, for any <action>\n explicitly or implicitly in <privileges>, that <action> on O, or\nany of the objects in S, granted\n by A to <grantee>\n\nIn my example, <grantee> is the role r1, <privileges> is the list of\nprivileges that only contain SELECT, <action> is SELECT. Therefore the\nset of identified privilege descriptors would be a single privilege\ndescriptor on table t where the privileges contain SELECT, the grantor\nis r1, and the grantee is r1. Such a privilege does not exist, so the\nidentified privilege set is empty.\n\nNow onto the General Rules:\n\n 1) Case:\n a) If the <revoke statement> is a <revoke privilege statement>, then\n Case:\n i) If neither WITH HIERARCHY OPTION nor GRANT OPTION FOR is\nspecified, then:\n 2) The identified privilege descriptors are destroyed.\n\nIn my example, the identified set of privileges is empty, so no\nprivileges are destroyed (which I'm interpreting to mean the same thing\nas revoked).\n\n 18) If the <revoke statement> is a <revoke privileges statement>, then:\n a) For every combination of <grantee> and <action> on O specified in\n<privileges>, if there\n is no corresponding privilege descriptor in the set of identified\nprivilege descriptors, then a\n completion condition is raised: warning — privilege not revoked.\n\nIn my example the identified privileges set is empty, therefore it\ncannot contain a corresponding privilege descriptor, therefore we\nshould be issuing a warning.\n\nSo I think our current behavior is not in spec. 
Would you agree with\nthis evaluation or do you think I've misunderstood something?\n\n> > I wasn't able to locate where the check for\n> >> A user can only revoke privileges that were granted directly by that\n> >> user.\n> > is in the code, but we should probably just add a warning there.\n>\n> I'm not certain, but I suspect the calls to aclupdate() in\n> merge_acl_with_grant() take care of this because the grantors will\nnever\n> match.\n\nI looked into this function and that is correct. We fail to find a\nmatch for the revoked privilege here:\n\n/*\n* Search the ACL for an existing entry for this grantee and grantor. If\n* one exists, just modify the entry in-place (well, in the same position,\n* since we actually return a copy); otherwise, insert the new entry at\n* the end.\n*/\n\nfor (dst = 0; dst < num; ++dst)\n{\nif (aclitem_match(mod_aip, old_aip + dst))\n{\n/* found a match, so modify existing item */\nnew_acl = allocacl(num);\nnew_aip = ACL_DAT(new_acl);\nmemcpy(new_acl, old_acl, ACL_SIZE(old_acl));\nbreak;\n}\n}\n\nSeeing that there was no match, we add a new empty privilege to the end\nof the existing ACL list here:\n\nif (dst == num)\n{\n/* need to append a new item */\nnew_acl = allocacl(num + 1);\nnew_aip = ACL_DAT(new_acl);\nmemcpy(new_aip, old_aip, num * sizeof(AclItem));\n\n/* initialize the new entry with no permissions */\nnew_aip[dst].ai_grantee = mod_aip->ai_grantee;\nnew_aip[dst].ai_grantor = mod_aip->ai_grantor;\nACLITEM_SET_PRIVS_GOPTIONS(new_aip[dst],\n ACL_NO_RIGHTS, ACL_NO_RIGHTS);\nnum++; /* set num to the size of new_acl */\n}\n\nWe then try and revoke the specified privileges from the new empty\nprivilege, leaving it empty (modechg will equal ACL_MODECHG_DEL here):\n\nold_rights = ACLITEM_GET_RIGHTS(new_aip[dst]);\nold_goptions = ACLITEM_GET_GOPTIONS(new_aip[dst]);\n\n/* apply the specified permissions change */\nswitch (modechg)\n{\ncase ACL_MODECHG_ADD:\nACLITEM_SET_RIGHTS(new_aip[dst],\n old_rights | 
ACLITEM_GET_RIGHTS(*mod_aip));\nbreak;\ncase ACL_MODECHG_DEL:\nACLITEM_SET_RIGHTS(new_aip[dst],\n old_rights & ~ACLITEM_GET_RIGHTS(*mod_aip));\nbreak;\ncase ACL_MODECHG_EQL:\nACLITEM_SET_RIGHTS(new_aip[dst],\n ACLITEM_GET_RIGHTS(*mod_aip));\nbreak;\n}\n\nThen since the new privilege remains empty, we remove it from the ACL\nlist:\n\nnew_rights = ACLITEM_GET_RIGHTS(new_aip[dst]);\nnew_goptions = ACLITEM_GET_GOPTIONS(new_aip[dst]);\n\n/*\n* If the adjusted entry has no permissions, delete it from the list.\n*/\nif (new_rights == ACL_NO_RIGHTS)\n{\nmemmove(new_aip + dst,\nnew_aip + dst + 1,\n(num - dst - 1) * sizeof(AclItem));\n/* Adjust array size to be 'num - 1' items */\nARR_DIMS(new_acl)[0] = num - 1;\nSET_VARSIZE(new_acl, ACL_N_SIZE(num - 1));\n}\n\nThanks,\nJoe Koshakow\n\n[0]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=ef7422510e93266e5aa9bb926d6747d5f2ae21f4\n[1]\nhttps://www.postgresql.org/message-id/Pine.LNX.4.44.0301191916160.789-100000%40localhost.localdomain\n\n", "msg_date": "Thu, 18 May 2023 19:17:47 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Missing warning on revokes with grant options" }, { "msg_contents": "On Thu, May 18, 2023 at 7:17 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> I looked into this function and that is correct. We fail to find a\n> match for the revoked privilege here:\n>\n> /*\n> * Search the ACL for an existing entry for this grantee and grantor. If\n> * one exists, just modify the entry in-place (well, in the same\nposition,\n> * since we actually return a copy); otherwise, insert the new entry at\n> * the end.\n> */\n>\n> for (dst = 0; dst < num; ++dst)\n> {\n> if (aclitem_match(mod_aip, old_aip + dst))\n> {\n> /* found a match, so modify existing item */\n> new_acl = allocacl(num);\n> new_aip = ACL_DAT(new_acl);\n> memcpy(new_acl, old_acl, ACL_SIZE(old_acl));\n> break;\n> }\n> }\n>\n> Seeing that there was no match, we add a new empty privilege to the end\n> of the existing ACL list here:\n>\n> if (dst == num)\n> {\n> /* need to append a new item */\n> new_acl = allocacl(num + 1);\n> new_aip = ACL_DAT(new_acl);\n> memcpy(new_aip, old_aip, num * sizeof(AclItem));\n>\n> /* initialize the new entry with no permissions */\n> new_aip[dst].ai_grantee = mod_aip->ai_grantee;\n> new_aip[dst].ai_grantor = mod_aip->ai_grantor;\n> ACLITEM_SET_PRIVS_GOPTIONS(new_aip[dst],\n> ACL_NO_RIGHTS, ACL_NO_RIGHTS);\n> num++; /* set num to the size of new_acl */\n> }\n>\n> We then try and 
revoke the specified privileges from the new empty\n> privilege, leaving it empty (modechg will equal ACL_MODECHG_DEL here):\n>\n> old_rights = ACLITEM_GET_RIGHTS(new_aip[dst]);\n> old_goptions = ACLITEM_GET_GOPTIONS(new_aip[dst]);\n>\n> /* apply the specified permissions change */\n> switch (modechg)\n> {\n> case ACL_MODECHG_ADD:\n> ACLITEM_SET_RIGHTS(new_aip[dst],\n> old_rights | ACLITEM_GET_RIGHTS(*mod_aip));\n> break;\n> case ACL_MODECHG_DEL:\n> ACLITEM_SET_RIGHTS(new_aip[dst],\n> old_rights & ~ACLITEM_GET_RIGHTS(*mod_aip));\n> break;\n> case ACL_MODECHG_EQL:\n> ACLITEM_SET_RIGHTS(new_aip[dst],\n> ACLITEM_GET_RIGHTS(*mod_aip));\n> break;\n> }\n>\n> Then since the new privilege remains empty, we remove it from the ACL\n> list:\n>\n> new_rights = ACLITEM_GET_RIGHTS(new_aip[dst]);\n> new_goptions = ACLITEM_GET_GOPTIONS(new_aip[dst]);\n>\n> /*\n> * If the adjusted entry has no permissions, delete it from the list.\n> */\n> if (new_rights == ACL_NO_RIGHTS)\n> {\n> memmove(new_aip + dst,\n> new_aip + dst + 1,\n> (num - dst - 1) * sizeof(AclItem));\n> /* Adjust array size to be 'num - 1' items */\n> ARR_DIMS(new_acl)[0] = num - 1;\n> SET_VARSIZE(new_acl, ACL_N_SIZE(num - 1));\n> }\n\nSorry about the unformatted code, here's the entire quoted section\nagain with proper formatting:\n\nI looked into this function and that is correct. We fail to find a\nmatch for the revoked privilege here:\n\n /*\n * Search the ACL for an existing entry for this grantee and grantor. 
If\n * one exists, just modify the entry in-place (well, in the same\nposition,\n * since we actually return a copy); otherwise, insert the new entry at\n * the end.\n */\n\n for (dst = 0; dst < num; ++dst)\n {\n if (aclitem_match(mod_aip, old_aip + dst))\n {\n /* found a match, so modify existing item */\n new_acl = allocacl(num);\n new_aip = ACL_DAT(new_acl);\n memcpy(new_acl, old_acl, ACL_SIZE(old_acl));\n break;\n }\n }\n\nSeeing that there was no match, we add a new empty privilege to the end\nof the existing ACL list here:\n\n if (dst == num)\n {\n /* need to append a new item */\n new_acl = allocacl(num + 1);\n new_aip = ACL_DAT(new_acl);\n memcpy(new_aip, old_aip, num * sizeof(AclItem));\n\n /* initialize the new entry with no permissions */\n new_aip[dst].ai_grantee = mod_aip->ai_grantee;\n new_aip[dst].ai_grantor = mod_aip->ai_grantor;\n ACLITEM_SET_PRIVS_GOPTIONS(new_aip[dst],\n ACL_NO_RIGHTS, ACL_NO_RIGHTS);\n num++; /* set num to the size of new_acl */\n }\n\nWe then try and revoke the specified privileges from the new empty\nprivilege, leaving it empty (modechg will equal ACL_MODECHG_DEL here):\n\n old_rights = ACLITEM_GET_RIGHTS(new_aip[dst]);\n old_goptions = ACLITEM_GET_GOPTIONS(new_aip[dst]);\n\n /* apply the specified permissions change */\n switch (modechg)\n {\n case ACL_MODECHG_ADD:\n ACLITEM_SET_RIGHTS(new_aip[dst],\n old_rights | ACLITEM_GET_RIGHTS(*mod_aip));\n break;\n case ACL_MODECHG_DEL:\n ACLITEM_SET_RIGHTS(new_aip[dst],\n old_rights & ~ACLITEM_GET_RIGHTS(*mod_aip));\n break;\n case ACL_MODECHG_EQL:\n ACLITEM_SET_RIGHTS(new_aip[dst],\n ACLITEM_GET_RIGHTS(*mod_aip));\n break;\n }\n\nThen since the new privilege remains empty, we remove it from the ACL\nlist:\n\n new_rights = ACLITEM_GET_RIGHTS(new_aip[dst]);\n new_goptions = ACLITEM_GET_GOPTIONS(new_aip[dst]);\n\n /*\n * If the adjusted entry has no permissions, delete it from the list.\n */\n if (new_rights == ACL_NO_RIGHTS)\n {\n memmove(new_aip + dst,\n new_aip + dst + 1,\n (num - 
dst - 1) * sizeof(AclItem));\n /* Adjust array size to be 'num - 1' items */\n ARR_DIMS(new_acl)[0] = num - 1;\n SET_VARSIZE(new_acl, ACL_N_SIZE(num - 1));\n }\n\nThanks,\nJoe Koshakow\n\n", "msg_date": "Thu, 18 May 2023 19:22:17 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Missing warning on revokes with grant options" }, { "msg_contents": "I've been thinking about this some more and reading the SQL99 spec. In\nthe original thread that added these warnings [0], which was linked\nearlier in this thread by Nathan, the following assertion was made:\n\n> After that, you get to the General Rules, which pretty clearly say that\n> trying to grant privileges you don't have grant option for is just a\n> warning and not an error condition. (Such privileges will not be in the\n> set of \"identified privilege descriptors\".)\n>\n> AFAICS the specification for REVOKE is exactly parallel.\n\nI think it is true that for both GRANT and REVOKE, if a privilege was\nspecified in the statement and a corresponding privilege does not exist\nin the identified set then a warning should be issued. However, the\nmeaning of \"identified set\" is different between GRANT and REVOKE.\n\nIn GRANT the identified set is defined as\n\n 4) A set of privilege descriptors is identified. The privilege\ndescriptors identified are those defining,\n for each <action> explicitly or implicitly in <privileges>, that\n<action> on O held by A with\n grant option.\n\nEssentially it is all privileges specified in the GRANT statement on O\n**where by A is the grantee with a grant option**.\n\nIn REVOKE the identified set is defined as\n\n 1) Case:\n a) If the <revoke statement> is a <revoke privileges statement>, then\nfor every <grantee>\n specified, a set of privilege descriptors is identified. 
A\nprivilege descriptor P is said to be\n identified if it belongs to the set of privilege descriptors that\ndefined, for any <action>\n explicitly or implicitly in <privileges>, that <action> on O, or\nany of the objects in S, granted\n by A to <grantee>.\n\nEssentially it is all privileges specified in the REVOKE statement on O\n**where A is the grantor and the grantee is one of the grantees\nspecified in the REVOKE statement**.\n\nIn fact as far as I can tell, the ability to revoke a privilege does\nnot directly depend on having a grant option for that privilege, it\nonly depends on being the grantor of the specified privilege. However,\nour code in restrict_and_check_grant doesn't match this. It treats the\nrules for GRANTs and REVOKEs the same, in that you need a grant option\nto execute either. It's possible that due to the abandoned privilege\nrules that it is impossible for a privilege to exist where the grantor\ndoesn't also have a grant option on that privilege. I haven't read that\npart of the spec closely enough.\n\nAs a consequence of how the identified set is defined for REVOKE, not\nonly should a warning be issued in the example from my previous email,\nbut I think a warning should also be issued even if the grantee has no\nprivileges on O. For example,\n\n```\ntest=# SELECT current_role;\n current_role\n--------------\n joe\n(1 row)\n\ntest=# CREATE TABLE t ();\nCREATE TABLE\ntest=# CREATE ROLE r1;\nCREATE ROLE\ntest=# SELECT relacl FROM pg_class WHERE relname = 't';\n relacl\n--------\n\n(1 row)\n\ntest=# REVOKE SELECT ON t FROM r1;\nREVOKE\n```\n\nHere the identified set for the REVOKE statement is empty. So there is\nno corresponding privilege descriptor in the identified set for the\nSELECT privilege in the REVOKE statement. So a warning should be\nissued. 
Recall:\n\n 18) If the <revoke statement> is a <revoke privileges statement>, then:\n a) For every combination of <grantee> and <action> on O specified in\n<privileges>, if there\n is no corresponding privilege descriptor in the set of identified\nprivilege descriptors, then a\n completion condition is raised: warning — privilege not revoked\n\nEssentially the meaning of the warning for REVOKE does not mean \"you\ntried to revoke a privilege but you don't have a grant option\", it\nmeans \"you tried to revoke a privilege (where you are the grantor), but\nsuch a privilege does not exist\".\n\nThanks,\nJoe Koshakow\n\n[0] https://postgr.es/m/20040511091816.E9887CF519E%40www.postgresql.com", "msg_date": "Fri, 19 May 2023 12:54:33 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Missing warning on revokes with grant options" }, { "msg_contents": "Sorry for the multiple consecutive emails. I just came across this\ncomment that explains the current behavior in restrict_and_check_grant\n\n/*\n* Restrict the operation to what we can actually grant or revoke, and\n* issue a warning if appropriate. (For REVOKE this isn't quite what the\n* spec says to do: the spec seems to want a warning only if no privilege\n* bits actually change in the ACL.
In practice that behavior seems much\n* too noisy, as well as inconsistent with the GRANT case.)\n*/\n\nHowever, I still think the current behavior is a bit strange since\nholding a grant option is not directly required to issue a revoke.\nPerhaps for revoke the logic should be:\n - for each specified privilege:\n - if the set of acl items on the specified object that includes\n this privilege is non empty\n - and none of those acl items have the current role as the\n grantor\n - then issue a warning.\n\nThanks,\nJoe Koshakow", "msg_date": "Fri, 19 May 2023 13:22:12 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Missing warning on revokes with grant options" } ]
[ { "msg_contents": "Hi,\n\nCurrently, the main loop of apply worker looks like below[1]. Since there are\ntwo loops, the inner loop will keep receiving and applying message from\npublisher until no more message left. The worker only reloads the configuration in\nthe outer loop. This means if the publisher keeps sending messages (it could\nkeep sending multiple transactions), the apply worker won't get a chance to\nupdate the GUCs.\n\n[1]\nfor(;;) /* outer loop */\n{\n\tfor(;;) /* inner loop */\n\t{\n\t\tlen = walrcv_receive()\n\t\tif (len == 0)\n\t\t\tbreak;\n\t\t...\n\t\tapply change\n\t}\n\n\t...\n\tif (ConfigReloadPending)\n\t{\n\t\tConfigReloadPending = false;\n\t\tProcessConfigFile(PGC_SIGHUP);\n\t}\n\t...\n}\n\nI think it would be better that the apply worker can reflect user's\nconfiguration changes sooner. To achieve this, we can add one more\nProcessConfigFile() call in the inner loop. Attach the patch for the same. What\ndo you think ?\n\nBTW, I saw one BF failure[2] (it's very rare and only happened once in 4\nmonths) which I think is due to the low frequent reload in apply worker.\n\nThe attached tap test shows how the failure happened.\n\nThe test use streaming parallel mode and change logical_replication_mode to\nimmediate, we expect serialization to happen in the test. To reproduce the failure\neasier, we need to add a sleep(1s) in the inner loop of apply worker so\nthat the apply worker won't be able to consume all messages quickly and will be\nbusy in the inner loop. Then the attached test will fail because the leader\napply didn't reload the configuration, thus serialization didn't happen.\n\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2023-05-12%2008%3A05%3A41\n\nBest Regards,\nHou zj", "msg_date": "Wed, 17 May 2023 01:47:51 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Reload configuration more frequently in apply worker." 
}, { "msg_contents": "On Wed, May 17, 2023 at 7:18 AM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Currently, the main loop of apply worker looks like below[1]. Since there are\n> two loops, the inner loop will keep receiving and applying message from\n> publisher until no more message left. The worker only reloads the configuration in\n> the outer loop. This means if the publisher keeps sending messages (it could\n> keep sending multiple transactions), the apply worker won't get a chance to\n> update the GUCs.\n>\n\nApart from that, I think in rare cases, it seems possible that after\nthe apply worker has waited for the data and just before it receives\nthe new replication data/message, the reload happens, then it won't\nget a chance to process the reload before processing the new message.\nI think such a theory can explain the rare BF failure you pointed out\nlater in the thread. Does that make sense?\n\n> [1]\n> for(;;) /* outer loop */\n> {\n> for(;;) /* inner loop */\n> {\n> len = walrcv_receive()\n> if (len == 0)\n> break;\n> ...\n> apply change\n> }\n>\n> ...\n> if (ConfigReloadPending)\n> {\n> ConfigReloadPending = false;\n> ProcessConfigFile(PGC_SIGHUP);\n> }\n> ...\n> }\n>\n> I think it would be better that the apply worker can reflect user's\n> configuration changes sooner. To achieve this, we can add one more\n> ProcessConfigFile() call in the inner loop. Attach the patch for the same. 
What\n> do you think ?\n>\n\nI think it appears to somewhat match what Tom said in the third point\nin his email [1].\n\n> BTW, I saw one BF failure[2] (it's very rare and only happened once in 4\n> months) which I think is due to the low frequent reload in apply worker.\n>\n> The attached tap test shows how the failure happened.\n>\n\nI haven't yet tried to reproduce it but will try later sometime.\nThanks for your analysis.\n\n[1] - https://www.postgresql.org/message-id/2138662.1623460441%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 17 May 2023 08:34:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reload configuration more frequently in apply worker." }, { "msg_contents": "On Wednesday, May 17, 2023 11:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, May 17, 2023 at 7:18 AM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > Currently, the main loop of apply worker looks like below[1]. Since\r\n> > there are two loops, the inner loop will keep receiving and applying\r\n> > message from publisher until no more message left. The worker only\r\n> > reloads the configuration in the outer loop. This means if the\r\n> > publisher keeps sending messages (it could keep sending multiple\r\n> > transactions), the apply worker won't get a chance to update the GUCs.\r\n> >\r\n> \r\n> Apart from that, I think in rare cases, it seems possible that after the apply\r\n> worker has waited for the data and just before it receives the new replication\r\n> data/message, the reload happens, then it won't get a chance to process the\r\n> reload before processing the new message.\r\n> I think such a theory can explain the rare BF failure you pointed out later in the\r\n> thread. Does that make sense?\r\n\r\nYes, that makes sense. 
That's another case where we would miss the reload and I think\r\nis the reason for the failure because the apply worker has finished applying changes(which\r\nmeans it's idle) before the failed case.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Fri, 19 May 2023 03:50:04 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Reload configuration more frequently in apply worker." }, { "msg_contents": "On Wed, May 17, 2023 at 8:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 17, 2023 at 7:18 AM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> >\n> > The attached tap test shows how the failure happened.\n> >\n>\n> I haven't yet tried to reproduce it but will try later sometime.\n>\n\nI am able to reproduce the problem with the script and steps mentioned\nby you. The patch provided by you fixes the problem and looks good to\nme. I'll push this by Wednesday unless there are any\ncomments/suggestions. Though this problem exists in previous branches\nas well, but as there are no field reports and the problem also\ndoesn't appear to be a common problem, so I intend to push only for\nv16. What do you or others think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Jun 2023 05:32:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reload configuration more frequently in apply worker." 
}, { "msg_contents": "On Monday, June 5, 2023 8:03 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, May 17, 2023 at 8:34 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Wed, May 17, 2023 at 7:18 AM Zhijie Hou (Fujitsu)\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > >\r\n> > > The attached tap test shows how the failure happened.\r\n> > >\r\n> >\r\n> > I haven't yet tried to reproduce it but will try later sometime.\r\n> >\r\n> \r\n> I am able to reproduce the problem with the script and steps mentioned by\r\n> you. The patch provided by you fixes the problem and looks good to me. I'll\r\n> push this by Wednesday unless there are any comments/suggestions. Though\r\n> this problem exists in previous branches as well, but as there are no field\r\n> reports and the problem also doesn't appear to be a common problem, so I\r\n> intend to push only for v16. What do you or others think?\r\n\r\nThanks for the review. I agree that we can push only for V16.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Tue, 6 Jun 2023 02:15:31 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Reload configuration more frequently in apply worker." }, { "msg_contents": "On Tue, Jun 6, 2023 at 7:45 AM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, June 5, 2023 8:03 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > I am able to reproduce the problem with the script and steps mentioned by\n> > you. The patch provided by you fixes the problem and looks good to me. I'll\n> > push this by Wednesday unless there are any comments/suggestions. Though\n> > this problem exists in previous branches as well, but as there are no field\n> > reports and the problem also doesn't appear to be a common problem, so I\n> > intend to push only for v16. What do you or others think?\n>\n> Thanks for the review. 
I agree that we can push only for V16.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 7 Jun 2023 11:23:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reload configuration more frequently in apply worker." } ]
[ { "msg_contents": "Hi,\n\nThis patch set tries to add loongarch64 native spin lock to postgresql.\n\n- [PATCH 1/2] implements a loongarch64 native spin lock.\n- [PATCH 2/2] fixes s_lock_test to make it runnable via `make check'.\n\nThe patch set is tested on my Loongson 3A5000 machine with Loong Arch \nLinux and GCC 13.1.0 with default ./configure with no options.\n\nOutput of `make check' in src/backend/storage/lmgr is attached.\n\nSee:\n[1]: \nhttps://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html#atomic-memory-access-instructions\n[2]: \nhttps://github.com/torvalds/linux/blob/f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6/arch/loongarch/include/asm/cmpxchg.h#L12\n\n----\nYANG Xudong", "msg_date": "Wed, 17 May 2023 16:49:05 +0800", "msg_from": "YANG Xudong <yangxudong@ymatrix.cn>", "msg_from_op": true, "msg_subject": "[PATCH] Add loongarch64 native spin lock." }, { "msg_contents": "YANG Xudong <yangxudong@ymatrix.cn> writes:\n> This patch set tries to add loongarch64 native spin lock to postgresql.\n\nThis came up before, and our response was\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=1c72d82c2\n\nIn principle, at least, there is no longer any need for\nmachine-specific s_lock.h additions. Is there a strong reason\nwhy the __sync_lock_test_and_set solution isn't good enough?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 May 2023 08:37:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add loongarch64 native spin lock." }, { "msg_contents": "Thanks for the information.\n\nI checked the assembly code of __sync_lock_test_and_set generated by GCC \nfor loongarch64. 
It is exactly the same as this patch.\n\nI guess this patch is not necessary any more.\n\nRegards\n\nOn 2023/5/17 20:37, Tom Lane wrote:\n> YANG Xudong <yangxudong@ymatrix.cn> writes:\n>> This patch set tries to add loongarch64 native spin lock to postgresql.\n> \n> This came up before, and our response was\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=1c72d82c2\n> \n> In principle, at least, there is no longer any need for\n> machine-specific s_lock.h additions. Is there a strong reason\n> why the __sync_lock_test_and_set solution isn't good enough?\n> \n> \t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 May 2023 08:53:08 +0800", "msg_from": "YANG Xudong <yangxudong@ymatrix.cn>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add loongarch64 native spin lock." } ]
[ { "msg_contents": "Team,\n I made the /**** QUERY ****/ changes.\nAnd I found the .po files, and modified those to match.\n\n make checkworld -- worked\n\n Anyway, I have NO IDEA how I run psql in a different language.\nI tried gnome-language-selector, I installed Russian/Czech.\nI switched my entire system to Russian.\n\n git, make, all respond in russian.\n\n But not psql? (First, I assume that I should check that the translations\nworked),\nSecond... Does ANYONE do this? (I snapshotted my VM in case it gets so bad\nthat I have to revert to get English back, so at least I have a safety net).\n\nThanks in advance.\n\nKirk...\nPS: I would have posted the patch files, but I want to work on the tooling\nto generate proper patch headers.", "msg_date": "Wed, 17 May 2023 11:04:14 -0400", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "How do I set a different language to test psql? 
(/**** QUERY ****/)" }, { "msg_contents": "Kirk Wolak <wolakk@gmail.com> writes:\n> I made the /**** QUERY ****/ changes.\n> And I found the .po files, and modified those to match.\n\nIt's not your job to modify the .po files, at least not unless\nyou join the translation team --- and even then, it'd not happen\ntill after the core patch gets committed.\n\n> Anyway, I have NO IDEA how I run psql in a different language.\n> I tried gnome-language-selector, I installed Russian/Czech.\n> I switched my entire system to Russian.\n> git, make, all respond in russian.\n> But not psql?\n\nSounds like you did not configure with --enable-nls. If you\ndid, it should respond to LC_MESSAGES or the other usual locale\nenvironment variables.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 May 2023 11:31:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How do I set a different language to test psql? (/**** QUERY\n ****/)" } ]
[ { "msg_contents": "|1. Switch --load-via-partition-root is very useful.\nWould it be a big deal to add extra information to the |||dump |custom \nformat so that this switch can be specified from pg_restore level?\n\n2. Another common usability problem is a quick dump of the selected \nparent table.\nIt is important that it should includes all inherited tables or \nsubpartitions.\nNow you can't just specify -t root-table-name, because you'll usually \nget an empty data dump.\n\nThe list of necessary tables is sometimes very long and, when using \ntimescaledb, partition tables are called not humanly.\n\n|\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66", "msg_date": "Wed, 17 May 2023 17:46:31 +0200", "msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>", "msg_from_op": true, "msg_subject": "Easy dump of partitioned and inherited data" }, { "msg_contents": "Przemysław Sztoch wrote on 17.05.2023 17:46:||||\n||\n> |2. 
Another common usability problem is a quick dump of the selected \n> parent table.\n> It is important that it should includes all inherited tables or \n> subpartitions.\n> Now you can't just specify -t root-table-name, because you'll usually \n> get an empty data dump.\n>\n> The list of necessary tables is sometimes very long and, when using \n> timescaledb, partition tables are called not humanly.\n> |\nJust saw that the problem was solved in \"[Proposal] Allow pg_dump to \ninclude all child tables with the root table\"\nand \"pg_dump - read data for some options from external file \n<https://commitfest.postgresql.org/43/2573/>\".\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66", "msg_date": "Thu, 18 May 2023 23:44:32 +0200", "msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>", "msg_from_op": true, "msg_subject": "Re: Easy dump of partitioned and inherited data" } ]
[ { "msg_contents": "I'm sorry I was unable to respond right away.\n\nOn 09.05.2023 17:23, torikoshia wrote:\n> You may already understand it, but these variable names are given in \n> imitation of FREEZE and BINARY cases:\n>\n>   --- a/src/include/commands/copy.h\n>   +++ b/src/include/commands/copy.h\n>   @@ -42,6 +42,7 @@ typedef struct CopyFormatOptions\n>                                    * -1 if not specified */\n>       bool        binary;         /* binary format? */\n>       bool        freeze;         /* freeze rows on loading? */\n>   +   bool        ignore_datatype_errors;  /* ignore rows with \n> datatype errors */\n>\n>   --- a/src/backend/commands/copy.c\n>   +++ b/src/backend/commands/copy.c\n>   @@ -419,6 +419,7 @@ ProcessCopyOptions(ParseState *pstate,\n>       bool        format_specified = false;\n>       bool        freeze_specified = false;\n>       bool        header_specified = false;\n>   +   bool        ignore_datatype_errors_specified = false;\n>\n> I think it would be sane to align the names with the FREEZE and BINARY \n> options.\n>\n> I agree with the name is too long and we once used the name \n> 'ignore_errors'.\n> However, current implementation does not ignore all errors but just \n> data type error, so I renamed it.\n> There may be a better name, but I haven't come up with one.\n\nYes, you are right, I saw it.\n\n>\n> As far as I take a quick look at on PostgreSQL source code, there're \n> few variable name with \"_counter\". It seems to be used for function names.\n> Something like \"ignored_errors_count\" might be better.\nI noticed that many variables are named with the \"_counter\" postfix, and \nmost of them are used as a counter. For example, PgStat_StatTabEntry or \nJitInstrumentation structures consisted of many such variables. 
Despite \nthis, I agree with your suggested name, because I found many similar \nvariables that are used in the program as a counter, but it seems to me \nthat the most of them are still used by local variables in the function.\n\n\n", "msg_date": "Wed, 17 May 2023 20:10:16 +0300", "msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>", "msg_from_op": true, "msg_subject": "Fwd: POC PATCH: copy from ... exceptions to: (was Re: VLDB Features)" }, { "msg_contents": "I'm sorry I was unable to respond right away.\n\nOn 09.05.2023 17:23, torikoshia wrote:\n> You may already understand it, but these variable names are given in \n> imitation of FREEZE and BINARY cases:\n>\n>   --- a/src/include/commands/copy.h\n>   +++ b/src/include/commands/copy.h\n>   @@ -42,6 +42,7 @@ typedef struct CopyFormatOptions\n>                                    * -1 if not specified */\n>       bool        binary;         /* binary format? */\n>       bool        freeze;         /* freeze rows on loading? */\n>   +   bool        ignore_datatype_errors;  /* ignore rows with \n> datatype errors */\n>\n>   --- a/src/backend/commands/copy.c\n>   +++ b/src/backend/commands/copy.c\n>   @@ -419,6 +419,7 @@ ProcessCopyOptions(ParseState *pstate,\n>       bool        format_specified = false;\n>       bool        freeze_specified = false;\n>       bool        header_specified = false;\n>   +   bool        ignore_datatype_errors_specified = false;\n>\n> I think it would be sane to align the names with the FREEZE and BINARY \n> options.\n>\n> I agree with the name is too long and we once used the name \n> 'ignore_errors'.\n> However, current implementation does not ignore all errors but just \n> data type error, so I renamed it.\n> There may be a better name, but I haven't come up with one.\n\nYes, you are right, I saw it.\n\n>\n> As far as I take a quick look at on PostgreSQL source code, there're \n> few variable name with \"_counter\". 
It seems to be used for function names.\n> Something like \"ignored_errors_count\" might be better.\nI noticed that many variables are named with the \"_counter\" postfix, and \nmost of them are used as a counter. For example, PgStat_StatTabEntry or \nJitInstrumentation structures consisted of many such variables. Despite \nthis, I agree with your suggested name, because I found many similar \nvariables that are used in the program as a counter, but it seems to me \nthat the most of them are still used by local variables in the function.\n\n\n", "msg_date": "Sun, 21 May 2023 12:23:14 +0300", "msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: POC PATCH: copy from ... exceptions to: (was Re: VLDB Features)" } ]
[ { "msg_contents": "Hi,\n\nI was working on setting up a PG16.devel(375407f4) test database with\nUTF8 but C locale, and found the following strange behaviour:\n\nplain initdb:\n[...]\n Using default ICU locale \"en_US\".\n Using language tag \"en-US\" for ICU locale \"en_US\".\n The database cluster will be initialized with this locale configuration:\n provider: icu\n ICU locale: en-US\n LC_*: en_US.UTF-8\n The default database encoding has accordingly been set to \"UTF8\".\n The default text search configuration will be set to \"english\".\n\ninitdb --no-locale: (documented as \"equivalent to --locale=C\")\n[...]\n The database cluster will be initialized with locale \"C\".\n The default database encoding has accordingly been set to \"SQL_ASCII\".\n The default text search configuration will be set to \"english\".\n\ninitdb --locale=C\n[...]\n Using default ICU locale \"en_US\".\n Using language tag \"en-US\" for ICU locale \"en_US\".\n The database cluster will be initialized with this locale configuration:\n provider: icu\n ICU locale: en-US\n LC_*: C\n The default database encoding has accordingly been set to \"UTF8\".\n The default text search configuration will be set to \"english\".\n\nNotably, if initdb chooses the C locale from --no-locale, it uses\nSQL_ASCII through libc, but when the C locale is specified through\n--locale=C, it somehow defaults to the ICU locale en-US and uses UTF8\nas encoding.\n\nIn my view that's very unexpected behaviour.\n\nKind regards,\n\nMatthias van de Meent\nNeon (neon.tech)\n\n\n", "msg_date": "Wed, 17 May 2023 22:38:37 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Inconsistent behavior with locale definition in initdb/pg_ctl init" } ]
[ { "msg_contents": "I did a preliminary test run of pgindent today, and was dismayed\nto see a bunch of misformatting in backend/jit/llvm/, which\nevidently is because the typedefs list available from the\nbuildfarm no longer includes any LLVM typedefs. We apparently\nused to have at least one buildfarm animal that was configured\n--with-llvm and ran the \"typedefs\" task, but there aren't any\ntoday.\n\nI can manually prevent those typedefs from getting deleted from\ntypedefs.list, but it'd be better if the normal process was\ntaking care of this -- the more so because we're hoping to automate\nthat process some more.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 May 2023 18:14:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "No buildfarm animals are running both typedefs and --with-llvm" }, { "msg_contents": "On 2023-05-17 We 18:14, Tom Lane wrote:\n> I did a preliminary test run of pgindent today, and was dismayed\n> to see a bunch of misformatting in backend/jit/llvm/, which\n> evidently is because the typedefs list available from the\n> buildfarm no longer includes any LLVM typedefs. We apparently\n> used to have at least one buildfarm animal that was configured\n> --with-llvm and ran the \"typedefs\" task, but there aren't any\n> today.\n>\n> I can manually prevent those typedefs from getting deleted from\n> typedefs.list, but it'd be better if the normal process was\n> taking care of this -- the more so because we're hoping to automate\n> that process some more.\n>\n> \t\t\t\n\n\nDo you remember which animals that used to be? A lot of animals are not \nbuilding with LLVM. Maybe we should encourage it more, e.g. 
by adding \n--with-llvm to the default config set in the sample file.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 May 2023 08:20:13 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: No buildfarm animals are running both typedefs and --with-llvm" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-05-17 We 18:14, Tom Lane wrote:\n>> I did a preliminary test run of pgindent today, and was dismayed\n>> to see a bunch of misformatting in backend/jit/llvm/, which\n>> evidently is because the typedefs list available from the\n>> buildfarm no longer includes any LLVM typedefs. We apparently\n>> used to have at least one buildfarm animal that was configured\n>> --with-llvm and ran the \"typedefs\" task, but there aren't any\n>> today.\n\n> Do you remember which animals that used to be?\n\nNope, but last night I configured indri to do it, so we have some\ntypedefs coverage now. 
It'd be good to have more than one animal\ndoing it though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 May 2023 09:05:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: No buildfarm animals are running both typedefs and --with-llvm" } ]
[ { "msg_contents": "Hi hackers,\n\nA colleague of mine, Ante Krešić, got puzzled by the following behavior:\n\nSetup:\n\npostgres=# create table inh_test (id serial, value float);\nCREATE TABLE\npostgres=# create table inh_child_1 () INHERITS ( inh_test);\nCREATE TABLE\npostgres=# create table inh_child_2 () INHERITS ( inh_test);\nCREATE TABLE\npostgres=# insert into inh_child_1 values (1,1);\nINSERT 0 1\npostgres=# insert into inh_child_2 values (1,1);\nINSERT 0 1\n\nUpdate tuples in first transaction:\n\npostgres=# begin;\nBEGIN\npostgres=*# update inh_test set value = 2 where value = 1;\nUPDATE 2\n\nDelete in second transaction while the first is still active:\n\npostgres=# delete from inh_test where value = 1;\n\nCommit in the first transaction and we get a delete in the second one\neven though committed values do not qualify after update.\n\npostgres=# COMMIT;\n\npostgres=# delete from inh_test where value = 1;\nDELETE 1\n\nThe same happens for declarative partitioned tables as well. When\nworking on a table without inheritance / partitioning the result is\ndifferent, DELETE 0.\n\nSo what's the problem?\n\nAccording to the documentation [1]:\n\n\"\"\"\nUPDATE, DELETE [..] commands behave the same as SELECT in terms of\nsearching for target rows: they will only find target rows that were\ncommitted as of the command start time. However, such a target row\nmight have already been updated (or deleted or locked) by another\nconcurrent transaction by the time it is found. In this case, the\nwould-be updater will wait for the first updating transaction to\ncommit or roll back (if it is still in progress). If the first updater\nrolls back, then its effects are negated and the second updater can\nproceed with updating the originally found row. If the first updater\ncommits, the second updater will ignore the row if the first updater\ndeleted it, otherwise it will attempt to apply its operation to the\nupdated version of the row. 
The search condition of the command (the\nWHERE clause) is re-evaluated to see if the updated version of the row\nstill matches the search condition. If so, the second updater proceeds\nwith its operation using the updated version of the row.\n\"\"\"\n\nIt looks like the observed behaviour contradicts the documentation. If\nwe read it literally the second transaction should delete 0 rows, as\nit does for non-partitioned and non-inherited tables. From what I can\ntell the observed behavior doesn't contradict the general guarantees\npromised by READ COMMITTED.\n\nPerhaps we should update the documentation for this case, or maybe\nremove the quoted part of it.\n\nThoughts?\n\n[1]: https://www.postgresql.org/docs/current/transaction-iso.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 18 May 2023 17:23:08 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> A colleague of mine, Ante Krešić, got puzzled by the following behavior:\n\nThat's not a documentation problem. That's a bug, and an extremely\nnasty one. A quick check shows that it works as expected up through\nv13, but fails as described in v14 and later. Needs bisecting ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 May 2023 10:53:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "On Thu, May 18, 2023 at 10:53:35AM -0400, Tom Lane wrote:\n> That's not a documentation problem. That's a bug, and an extremely\n> nasty one. A quick check shows that it works as expected up through\n> v13, but fails as described in v14 and later. 
Needs bisecting ...\n\ngit-bisect points me to 86dc900.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 18 May 2023 08:20:18 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "I wrote:\n> Aleksander Alekseev <aleksander@timescale.com> writes:\n>> A colleague of mine, Ante Krešić, got puzzled by the following behavior:\n\n> That's not a documentation problem. That's a bug, and an extremely\n> nasty one. A quick check shows that it works as expected up through\n> v13, but fails as described in v14 and later. Needs bisecting ...\n\nUgh. Bisecting says it broke at\n\n86dc90056dfdbd9d1b891718d2e5614e3e432f35 is the first bad commit\ncommit 86dc90056dfdbd9d1b891718d2e5614e3e432f35\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Wed Mar 31 11:52:34 2021 -0400\n\n Rework planning and execution of UPDATE and DELETE.\n\nwhich was absolutely not supposed to be breaking any concurrent-execution\nguarantees. 
I wonder what we got wrong.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 May 2023 11:22:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "Hi,\n\n> I wonder what we got wrong.\n\nOne thing we noticed is that the description for EvalPlanQual may be wrong [1]:\n\n\"\"\"\nIn UPDATE/DELETE, only the target relation needs to be handled this way.\nIn SELECT FOR UPDATE, there may be multiple relations flagged FOR UPDATE,\nso we obtain lock on the current tuple version in each such relation before\nexecuting the recheck.\n\"\"\"\n\n[1]: https://github.com/postgres/postgres/blob/master/src/backend/executor/README#L381\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 18 May 2023 18:27:22 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "On Thu, May 18, 2023 at 11:22:54AM -0400, Tom Lane wrote:\n> Ugh. Bisecting says it broke at\n> \n> 86dc90056dfdbd9d1b891718d2e5614e3e432f35 is the first bad commit\n> commit 86dc90056dfdbd9d1b891718d2e5614e3e432f35\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Wed Mar 31 11:52:34 2021 -0400\n> \n> Rework planning and execution of UPDATE and DELETE.\n> \n> which was absolutely not supposed to be breaking any concurrent-execution\n> guarantees. I wonder what we got wrong.\n\nWith the reproduction steps listed upthread, I see that XMAX for both\ntuples is set to the deleting transaction, but the one in inh_child_2 has\ntwo additional infomask flags: HEAP_XMAX_EXCL_LOCK and HEAP_XMAX_LOCK_ONLY.\nIf I add a third table (i.e., inh_child_3), XMAX for all three tuples is\nset to the deleting transaction, and only the one in inh_child_3 has the\nlock bits set. 
Also, in the three-table case, the DELETE statement reports\n\"DELETE 2\".\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 18 May 2023 12:51:23 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, May 18, 2023 at 11:22:54AM -0400, Tom Lane wrote:\n>> Ugh. Bisecting says it broke at\n>> commit 86dc90056dfdbd9d1b891718d2e5614e3e432f35\n>> which was absolutely not supposed to be breaking any concurrent-execution\n>> guarantees. I wonder what we got wrong.\n\n> With the reproduction steps listed upthread, I see that XMAX for both\n> tuples is set to the deleting transaction, but the one in inh_child_2 has\n> two additional infomask flags: HEAP_XMAX_EXCL_LOCK and HEAP_XMAX_LOCK_ONLY.\n> If I add a third table (i.e., inh_child_3), XMAX for all three tuples is\n> set to the deleting transaction, and only the one in inh_child_3 has the\n> lock bits set. Also, in the three-table case, the DELETE statement reports\n> \"DELETE 2\".\n\nYeah. I see the problem: when starting up an EPQ recheck, we stuff\nthe tuple-to-test into the epqstate->relsubs_slot[] entry for the\nrelation it came from, but we do nothing to the EPQ state for the\nother target relations, which allows the EPQ plan to fetch rows\nfrom those relations as usual. If it finds a (non-updated) row\npassing the qual, kaboom! We decide the EPQ check passed.\n\nWhat we need to do, I think, is set epqstate->relsubs_done[] for\nall target relations except the one we are stuffing a tuple into.\n\nWhile nodeModifyTable can certainly be made to do that, things are\ncomplicated by the fact that currently ExecScanReScan thinks it ought\nto clear all the relsubs_done flags, which would break things again.\nI wonder if we can simply delete that code. 
Dropping the\nFDW/Custom-specific code there is a bit scary, but on the whole that\nlooks like code that got cargo-culted in rather than anything we\nactually need.\n\nThe reason this wasn't a bug before 86dc90056 is that any given\nplan tree could have only one target relation, so there was not\nanything else to suppress.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 May 2023 16:03:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "On Thu, May 18, 2023 at 04:03:36PM -0400, Tom Lane wrote:\n> Yeah. I see the problem: when starting up an EPQ recheck, we stuff\n> the tuple-to-test into the epqstate->relsubs_slot[] entry for the\n> relation it came from, but we do nothing to the EPQ state for the\n> other target relations, which allows the EPQ plan to fetch rows\n> from those relations as usual. If it finds a (non-updated) row\n> passing the qual, kaboom! We decide the EPQ check passed.\n\nAh, so the EPQ check only fails for the last tuple because we won't fetch\nrows from the other relations. I think that explains the behavior I'm\nseeing.\n\n> What we need to do, I think, is set epqstate->relsubs_done[] for\n> all target relations except the one we are stuffing a tuple into.\n\nThis seems generally reasonable to me.\n\n> While nodeModifyTable can certainly be made to do that, things are\n> complicated by the fact that currently ExecScanReScan thinks it ought\n> to clear all the relsubs_done flags, which would break things again.\n> I wonder if we can simply delete that code. Dropping the\n> FDW/Custom-specific code there is a bit scary, but on the whole that\n> looks like code that got cargo-culted in rather than anything we\n> actually need.\n\nI see that part was added in 385f337 [0]. 
I haven't had a chance to\nevaluate whether it seems necessary.\n\n[0] https://postgr.es/m/9A28C8860F777E439AA12E8AEA7694F80117370C%40BPXM15GP.gisp.nec.co.jp\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 18 May 2023 14:34:42 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, May 18, 2023 at 04:03:36PM -0400, Tom Lane wrote:\n>> What we need to do, I think, is set epqstate->relsubs_done[] for\n>> all target relations except the one we are stuffing a tuple into.\n\n> This seems generally reasonable to me.\n\nHere's a draft patch for this. I think it's OK for HEAD, but we are\ngoing to need some different details for the back branches, because\nadding a field to struct EPQState would create an ABI break. Our\nstandard trick of shoving the field to the end in the back branches\nwon't help, because struct EPQState is actually embedded in\nstruct ModifyTableState, meaning that all the subsequent fields\ntherein will move. Maybe we can get away with that, but I bet not.\n\nI think what the back branches will have to do is reinitialize the\nrelsubs_done array every time through EvalPlanQual(), which is a bit sad\nbut probably doesn't amount to anything compared to the startup overhead\nof the sub-executor.\n\nDebian Code Search doesn't know of any outside code touching\nrelsubs_done, so I think we are safe in dropping that code in\nExecScanReScan. 
It seems quite pointless anyway considering\nthat up to now, EvalPlanQualBegin has always zeroed the whole\narray.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 18 May 2023 18:26:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "I wrote:\n> Debian Code Search doesn't know of any outside code touching\n> relsubs_done, so I think we are safe in dropping that code in\n> ExecScanReScan. It seems quite pointless anyway considering\n> that up to now, EvalPlanQualBegin has always zeroed the whole\n> array.\n\nOh, belay that. What I'd forgotten is that it's possible that\nthe target relation is on the inside of a nestloop, meaning that\nwe might need to fetch the EPQ substitute tuple more than once.\nSo there are three possible states: blocked (never return a\ntuple), ready to return a tuple, and done returning a tuple\nfor this scan. ExecScanReScan needs to reset \"done\" to \"ready\",\nbut not touch the \"blocked\" state. The attached v2 mechanizes\nthat using two bool arrays.\n\nWhat I'm thinking about doing to back-patch this is to replace\none of the pointer fields in EPQState with a pointer to a\nsubsidiary palloc'd structure, where we can put the new fields\nalong with the cannibalized old one. We've done something\nsimilar before, and it seems a lot safer than having basically\ndifferent logic in v16 than earlier branches.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 18 May 2023 19:57:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "Hi,\n\n> The attached v2 mechanizes that using two bool arrays.\n\nI tested the patch on several combinations of operating systems\n(LInux, MacOS) and architectures (x64, RISC-V) available to me at the\nmoment, with both Meson and Autotools. 
Also I made sure\neval-plan-qual.spec fails when the C code is untouched.\n\nThe patch passed all the checks I could come up with.\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 19 May 2023 14:02:24 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "Thanks for the patch.\n\nOn Fri, May 19, 2023 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Debian Code Search doesn't know of any outside code touching\n> > relsubs_done, so I think we are safe in dropping that code in\n> > ExecScanReScan. It seems quite pointless anyway considering\n> > that up to now, EvalPlanQualBegin has always zeroed the whole\n> > array.\n>\n> Oh, belay that. What I'd forgotten is that it's possible that\n> the target relation is on the inside of a nestloop, meaning that\n> we might need to fetch the EPQ substitute tuple more than once.\n> So there are three possible states: blocked (never return a\n> tuple), ready to return a tuple, and done returning a tuple\n> for this scan. ExecScanReScan needs to reset \"done\" to \"ready\",\n> but not touch the \"blocked\" state. The attached v2 mechanizes\n> that using two bool arrays.\n\nAha, that's clever. So ExecScanReScan() would only reset the\nrelsubs_done[] entry for the currently active (\"unblocked\") target\nrelation, because that would be the only one \"unblocked\" during a\ngiven EvalPlanQual() invocation.\n\n+ * Initialize per-relation EPQ tuple states. 
Result relations, if any,\n+ * get marked as blocked; others as not-fetched.\n\nWould it be helpful to clarify that \"blocked\" means blocked for a\ngiven EvalPlanQual() cycle?\n\n+ /*\n+ * relsubs_blocked[scanrelid - 1] is true if there is no EPQ tuple for\n+ * this target relation.\n+ */\n+ bool *relsubs_blocked;\n\nSimilarly, maybe say \"no EPQ tuple for this target relation in a given\nEvalPlanQual() invocation\" here?\n\nBTW, I didn't quite understand why EPQ involving resultRelations must\nbehave in this new way but not the EPQ during LockRows?\n\n> What I'm thinking about doing to back-patch this is to replace\n> one of the pointer fields in EPQState with a pointer to a\n> subsidiary palloc'd structure, where we can put the new fields\n> along with the cannibalized old one. We've done something\n> similar before, and it seems a lot safer than having basically\n> different logic in v16 than earlier branches.\n\n+1.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 May 2023 21:53:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, May 19, 2023 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> + * Initialize per-relation EPQ tuple states. Result relations, if any,\n> + * get marked as blocked; others as not-fetched.\n\n> Would it be helpful to clarify that \"blocked\" means blocked for a\n> given EvalPlanQual() cycle?\n\nProbably best to put that with the data structure's comments. 
I changed\nthose to look like\n\n /*\n * relsubs_done[scanrelid - 1] is true if there is no EPQ tuple for this\n * target relation or it has already been fetched in the current scan of\n * this target relation within the current EvalPlanQual test.\n */\n bool *relsubs_done;\n\n /*\n * relsubs_blocked[scanrelid - 1] is true if there is no EPQ tuple for\n * this target relation during the current EvalPlanQual test. We keep\n * these flags set for all relids listed in resultRelations, but\n * transiently clear the one for the relation whose tuple is actually\n * passed to EvalPlanQual().\n */\n bool *relsubs_blocked;\n\n\n> BTW, I didn't quite understand why EPQ involving resultRelations must\n> behave in this new way but not the EPQ during LockRows?\n\nLockRows doesn't have a bug: it always fills all the EPQ tuple slots\nit's responsible for, and it doesn't use EvalPlanQual() anyway.\nIn the name of simplicity I kept the behavior exactly the same for\ncallers other than nodeModifyTable.\n\nPerhaps replication/logical/worker.c could use a closer look here.\nIt's not entirely clear to me that the EPQ state it sets up is ever\nused; but if it is I think it is okay as I have it here, because it\nlooks like those invocations always have just one result relation in\nthe plan, so there aren't any \"extra\" result rels that need to be\nblocked.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 May 2023 12:22:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, May 19, 2023 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I'm thinking about doing to back-patch this is to replace\n>> one of the pointer fields in EPQState with a pointer to a\n>> subsidiary palloc'd structure, where we can put the new fields\n>> along with the cannibalized old one. 
We've done something\n>> similar before, and it seems a lot safer than having basically\n>> different logic in v16 than earlier branches.\n\n> +1.\n\nDone that way. I chose to replace the tuple_table field, because\nit was in a convenient spot and it seemed like the field least\nlikely to have any outside code referencing it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 May 2023 14:33:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" }, { "msg_contents": "Hi Tom,\n\n> Done that way. I chose to replace the tuple_table field, because\n> it was in a convenient spot and it seemed like the field least\n> likely to have any outside code referencing it.\n\nMany thanks!\n\nIf it's not too much trouble could you please recommend good entry\npoints to learn more about the internals of this part of the system\nand accompanying edge cases? Perhaps there is an experiment or two an\nextension author can do in order to lower the entry threshold and/or\nknown bugs, limitations or wanted features one could start with?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 19 May 2023 21:42:50 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: The documentation for READ COMMITTED may be incomplete or wrong" } ]
[ { "msg_contents": "Hi all,\n\nAs was previously discussed at the thread surrounding [1]: Currently\nany block registration in WAL takes up at least 8 bytes in xlog\noverhead, regardless of how much data is included in that block: 1\nbyte for block ID, 1 byte for fork and flags, 2 bytes for the block\ndata length, and 4 bytes for the blockNo. (Usually, another 12 bytes\nare used for a RelFileLocator; but that may not be included for some\nblocks in the record when conditions apply)\n\nAttached is a patch that reduces this overhead by up to 2 bytes by\nencoding how large the block data length field is into the block ID,\nand thus optionally reducing the block data's length field to 0 bytes.\nExamples: cross-page update records will now be 2 bytes shorter,\nbecause the record never registers any data for the new block of the\nupdate; pgbench transactions are now either 6 or 8 bytes smaller\ndepending on whether the update crosses a page boundary (in xlog\nrecord size; after alignment it is 0 or 4/8 bytes, depending on\nMAXALIGN and whether the updates are cross-page updates).\n\nIt changes the block IDs used to fit in 6 bits, using the upper 2 bits\nof the block_id field to store how much data is contained in the\nrecord (0, <=UINT8_MAX, or <=UINT16_MAX bytes).\n\nThis is part 1 of a series of patches trying to decrease the size of\nWAL; see also [0], [1] and [2] for more info on what's still to go.\nI'm working on a separate, much more involved patch for the XLogRecord\nheader itself, similar to the patch in [1], which I expect to send\nsometime soon as well.\nUnless someone thinks the patches should be discussed as one series,\nI'm planning on posting that in another thread, as I don't see any\nmeaningful dependencies between the patches, and the XLR header patch\nwill be quite a bit larger than this one.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n[0] https://wiki.postgresql.org/wiki/Updating_the_WAL_infrastructure\n[1] 
https://www.postgresql.org/message-id/flat/CAEze2Wjd3jY_UhhOGdGGnC6NO%3D%2BNmtNOmd%3DJaYv-v-nwBAiXXA%40mail.gmail.com#17a51d83923f4390d8f407d0d6c5da07\n[2] https://www.postgresql.org/message-id/flat/CAEze2Whf%3DfwAj7rosf6aDM9t%2B7MU1w-bJn28HFWYGkz%2Bics-hg%40mail.gmail.com\n\nPS. Benchmark results on my system (5950x with other light tasks\nrunning) don't show an obviously negative effect in a 10-minute run\nwith these arbitrary pgbench settings on a fresh cluster with default\nconfiguration:\n\n./pg_install/bin/pgbench postgres -j 2 -c 6 -T 600 -M prepared\n[...]\nmaster: tps = 375\npatched: tps = 381", "msg_date": "Thu, 18 May 2023 16:59:37 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "On 18/05/2023 17:59, Matthias van de Meent wrote:\n> Attached is a patch that reduces this overhead by up to 2 bytes by\n> encoding how large the block data length field is into the block ID,\n> and thus optionally reducing the block data's length field to 0 bytes.\n> Examples: cross-page update records will now be 2 bytes shorter,\n> because the record never registers any data for the new block of the\n> update; pgbench transactions are now either 6 or 8 bytes smaller\n> depending on whether the update crosses a page boundary (in xlog\n> record size; after alignment it is 0 or 4/8 bytes, depending on\n> MAXALIGN and whether the updates are cross-page updates).\n> \n> It changes the block IDs used to fit in 6 bits, using the upper 2 bits\n> of the block_id field to store how much data is contained in the\n> record (0, <=UINT8_MAX, or <=UINT16_MAX bytes).\n\nPerhaps we should introduce a few generic inline functions to do varint \nencoding. That could be useful in many places, while this scheme is very \ntailored for XLogRecordBlockHeader.\n\nWe could replace XLogRecordDataHeaderShort and XLogRecordDataHeaderLong \nwith this too. 
With just one XLogRecordDataHeader, with a \nvariable-length length field.\n\n> This is part 1 of a series of patches trying to decrease the size of\n> WAL; see also [0], [1] and [2] for more info on what's still to go.\n> I'm working on a separate, much more involved patch for the XLogRecord\n> header itself, similar to the patch in [1], which I expect to send\n> sometime soon as well.\n> Unless someone thinks the patches should be discussed as one series,\n> I'm planning on posting that in another thread, as I don't see any\n> meaningful dependencies between the patches, and the XLR header patch\n> will be quite a bit larger than this one.\n> \n> Kind regards,\n> \n> Matthias van de Meent\n> Neon, Inc.\n> \n> [0] https://wiki.postgresql.org/wiki/Updating_the_WAL_infrastructure\n\nGood ideas here. Eliminating the two padding bytes from XLogRecord in \nparticular seems like a pure win.\n\n> PS. Benchmark results on my system (5950x with other light tasks\n> running) don't show an obviously negative effect in a 10-minute run\n> with these arbitrary pgbench settings on a fresh cluster with default\n> configuration:\n> \n> ./pg_install/bin/pgbench postgres -j 2 -c 6 -T 600 -M prepared\n> [...]\n> master: tps = 375\n> patched: tps = 381\n\nThat was probably not CPU limited, so that any overhead in generating \nthe WAL would not show up. Try PGOPTIONS=\"-csynchronous_commit=off\" and \npgbench -N option. And make sure the scale is large enough that there is \nno lock contention. 
Also would be good to measure the overhead in \nreplaying the WAL.\n\nHow much space saving does this yield?\n\n- Heikki\n\n\n\n", "msg_date": "Thu, 18 May 2023 19:22:26 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "On Thu, 18 May 2023 at 18:22, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 18/05/2023 17:59, Matthias van de Meent wrote:\n> Perhaps we should introduce a few generic inline functions to do varint\n> encoding. That could be useful in many places, while this scheme is very\n> tailored for XLogRecordBlockHeader.\n\nI'm not sure about the reusability of such code, as not all varint\nencodings are made equal:\n\nHere, I chose to determine the size of the field with some bits stored\nin leftover bits of another field, so storing the field and size\nseparately.\nBut in other cases, such as UTF8's code point encoding, each byte has\na carry bit indicating whether the value has more bytes to go.\nIn even more other cases, such as sqlite's integer encoding, the value\nis stored in a single byte, unless that byte contains a sentinel value\nthat indicates the number of bytes that the value continues into.\n\nWhat I'm trying to say is that there is no perfect encoding that is\nbetter than all others, and I picked what I thought worked best in\nthis specific case. I think it is reasonable to expect that\nvarint-encoding of e.g. blockNo or RelFileNode into the WAL record\ncould want to choose a different method than the method I've chosen\nfor the block data length.\n\n> We could replace XLogRecordDataHeaderShort and XLogRecordDataHeaderLong\n> with this too. With just one XLogRecordDataHeader, with a\n> variable-length length field.\n\nYes, that could be used too. 
But that's not part of the patch right\nnow, and I've not yet planned on implementing that for this patch.\n\n> > [0] https://wiki.postgresql.org/wiki/Updating_the_WAL_infrastructure\n>\n> Good ideas here. Eliminating the two padding bytes from XLogRecord in\n> particular seems like a pure win.\n\nIt requires code churn and probably increases complexity, but apart\nfrom that I think 'pure win' is accurate, yes.\n\n> > PS. Benchmark results on my system (5950x with other light tasks\n> > running) don't show an obviously negative effect in a 10-minute run\n> > with these arbitrary pgbench settings on a fresh cluster with default\n> > configuration:\n> >\n> > ./pg_install/bin/pgbench postgres -j 2 -c 6 -T 600 -M prepared\n> > [...]\n> > master: tps = 375\n> > patched: tps = 381\n>\n> That was probably not CPU limited, so that any overhead in generating\n> the WAL would not show up. Try PGOPTIONS=\"-csynchronous_commit=off\" and\n> pgbench -N option. And make sure the scale is large enough that there is\n> no lock contention. Also would be good to measure the overhead in\n> replaying the WAL.\n\nwith assertions now disabled, and the following configuration:\n\nsynchronous_commit = off\nfsync = off\nfull_page_writes = off\ncheckpoint_timeout = 1d\nautovacuum = off\n\nand now without assertions, I get\nmaster: tps = 3500.815859\npatched: tps = 3535.188054\n\nWith autovacuum enabled it's worked similarly well, within 1% of these results.\n\n> How much space saving does this yield?\n\nNo meaningful savings in the pgbench workload, mostly due to xlog\nrecord length MAXALIGNs currently not being favorable in the pgbench\nworkload. 
But, record sizes have dropped by 1 or 2 bytes in several\ncases, as can be seen at the bottom of this mail.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\nThe data: Record type, then record length averages (average aligned\nlength between parens) for both master and patched, and the average\nper-record savings with this patch.\n\n| record type | master avg | patched avg | delta | delta |\n| | (aligned avg) | (aligned avg) | | aligned |\n|---------------|-----------------|-----------------|-------|---------|\n| BT/DEDUP | 64.00 (64.00) | 63.00 (64.00) | -1 | 0 |\n| BT/INS_LEAF | 81.41 (81.41) | 80.41 (81.41) | -1 | 0 |\n| CLOG/0PG | 30.00 (32.00) | 30.00 (32.00) | 0 | 0 |\n| HEAP/DEL | 54.00 (56.00) | 52.00 (56.00) | -2 | 0 |\n| HEAP/HOT_UPD | 72.02 (72.19) | 71.02 (72.19) | 0 | 0 |\n| HEAP/INS | 79.00 (80.00) | 78.00 (80.00) | -1 | 0 |\n| HEAP/INS+INIT | 79.00 (80.00) | 78.00 (80.00) | -1 | 0 |\n| HEAP/LOCK | 54.00 (56.00) | 52.00 (56.00) * | -2 | 0 |\n| HEAP2/MUL_INS | 85.00 (88.00) | 84.00 (88.00) * | -1 | 0 |\n| HEAP2/PRUNE | 65.17 (68.19) | 64.17 (68.19) | -1 | 0 |\n| STDBY/R_XACTS | 52.76 (56.00) | 52.21 (56.00) | -0.5 | 0 |\n| TX/COMMIT | 34.00 (40.00) | 34.00 (40.00) | 0 | 0 |\n| XLOG/CHCKPT_O | 114.00 (120.00) | 114.00 (120.00) | 0 | 0 |\n\n\n", "msg_date": "Fri, 19 May 2023 01:47:51 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "Hi,\n\nI noticed that the patch needs review and decided to take a look.\n\n> No meaningful savings in the pgbench workload, mostly due to xlog\n> record length MAXALIGNs currently not being favorable in the pgbench\n> workload. 
But, record sizes have dropped by 1 or 2 bytes in several\n> cases, as can be seen at the bottom of this mail.\n\nThis may not sound a lot but still is valuable IMO if we consider the\nreduction in terms of percentages of overall saved disk throughput,\nnetwork traffic, etc, not in absolute values per one record. Even if\n1-2 bytes are not a bottleneck that can be seen on benchmarks (or the\nperformance improvement is not that impressive), it's some amount of\nmoney paid on cloud. Considering the fact that the patch is not that\ncomplicated I see no reason not to apply the optimization as long as\nit doesn't cause degradations.\n\nI also agree with Matthias' arguments above regarding the lack of\none-size-fits-all variable encoding and the overall desire to keep the\nfocus. E.g. the code can be refactored if and when we discover that\ndifferent subsystems ended up using the same encoding.\n\nAll in all the patch looks good to me, but I have a couple of nitpicks:\n\n* The comment for XLogSizeClass seems to be somewhat truncated as if\nCtr+S was not pressed before creating the patch. I also suggest\ndouble-checking the grammar.\n* `Size written = -1;` in XLogWriteLength() can lead to compiler\nwarnings some day considering the fact that Size / size_t are\nunsigned. Also this assignment doesn't seem to serve any particular\npurpose. So I suggest removing it.\n* I don't see much value in using the WRITE_OP macro in\nXLogWriteLength(). 
The code is read more often than it's written and I\nwouldn't call this code particularly readable (although it's shorter).\n* XLogReadLength() - ditto\n* `if (read < 0)` in DecodeXLogRecord() is noop since `read` is unsigned\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 5 Sep 2023 16:04:46 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "On Tue, 5 Sept 2023 at 15:04, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi,\n>\n> I noticed that the patch needs review and decided to take a look.\n\nThanks for reviewing!\n\n> All in all the patch looks good to me, but I have a couple of nitpicks:\n>\n> * The comment for XLogSizeClass seems to be somewhat truncated as if\n> Ctr+S was not pressed before creating the patch. I also suggest\n> double-checking the grammar.\n\nI've updated the various comments with improved wording.\n\n> * `Size written = -1;` in XLogWriteLength() can lead to compiler\n> warnings some day considering the fact that Size / size_t are\n> unsigned. Also this assignment doesn't seem to serve any particular\n> purpose. So I suggest removing it.\n\nFixed, it now uses `int` instead, as does XLogReadLength().\n\n> * I don't see much value in using the WRITE_OP macro in\n> XLogWriteLength(). The code is read more often than it's written and I\n> wouldn't call this code particularly readable (although it's shorter).\n> * XLogReadLength() - ditto\n\nI use READ_OP and WRITE_OP mostly to make sure that each operation's\ncode is clear. Manually expanding the macro would allow the handling\nof each variant to have different structure code, and that would allow\nfor more coding errors. 
I think it's extra important to make sure the\ncode isn't wrong because this concerns WAL (de)serialization, and one\ncopy is (in my opinion) easier to check for errors than 3 copies.\n\nI've had my share of issues in copy-edited code, so I rather like keep\nthe template around as long as I don't need to modify the underlying\ncode.\n\n> * `if (read < 0)` in DecodeXLogRecord() is noop since `read` is unsigned\n\nYes, thanks for noticing. I've been working with Rust recently, where\nunsigned size is `usize` and `size` is signed. The issue has been\nfixed in the attached patch with 'int' types instead.\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Mon, 18 Sep 2023 20:50:33 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "Hi,\n\nOn 2023-05-18 19:22:26 +0300, Heikki Linnakangas wrote:\n> On 18/05/2023 17:59, Matthias van de Meent wrote:\n> > Attached is a patch that reduces this overhead by up to 2 bytes by\n> > encoding how large the block data length field is into the block ID,\n> > and thus optionally reducing the block data's length field to 0 bytes.\n> > Examples: cross-page update records will now be 2 bytes shorter,\n> > because the record never registers any data for the new block of the\n> > update; pgbench transactions are now either 6 or 8 bytes smaller\n> > depending on whether the update crosses a page boundary (in xlog\n> > record size; after alignment it is 0 or 4/8 bytes, depending on\n> > MAXALIGN and whether the updates are cross-page updates).\n> > \n> > It changes the block IDs used to fit in 6 bits, using the upper 2 bits\n> > of the block_id field to store how much data is contained in the\n> > record (0, <=UINT8_MAX, or <=UINT16_MAX bytes).\n> \n> Perhaps we should introduce a few generic inline functions to do varint\n> encoding. 
That could be useful in many places, while this scheme is very\n> tailored for XLogRecordBlockHeader.\n\nYes - I proposed that and wrote an implementation of reasonably efficient\nvarint encoding. Here's my prototype:\nhttps://postgr.es/m/20221004234952.anrguppx5owewb6n%40awork3.anarazel.de\n\nI think it's a bad tradeoff to write lots of custom varint encodings, just to\neek out a bit more space savings. The increase in code complexity IMO makes it\na bad tradeoff.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 18 Sep 2023 16:03:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "On Mon, Sep 18, 2023 at 04:03:38PM -0700, Andres Freund wrote:\n> I think it's a bad tradeoff to write lots of custom varint encodings, just to\n> eek out a bit more space savings. The increase in code complexity IMO makes it\n> a bad tradeoff.\n\n+1.\n--\nMichael", "msg_date": "Tue, 19 Sep 2023 09:57:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "On Tue, 19 Sept 2023 at 01:03, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-05-18 19:22:26 +0300, Heikki Linnakangas wrote:\n> > On 18/05/2023 17:59, Matthias van de Meent wrote:\n> > > It changes the block IDs used to fit in 6 bits, using the upper 2 bits\n> > > of the block_id field to store how much data is contained in the\n> > > record (0, <=UINT8_MAX, or <=UINT16_MAX bytes).\n> >\n> > Perhaps we should introduce a few generic inline functions to do varint\n> > encoding. That could be useful in many places, while this scheme is very\n> > tailored for XLogRecordBlockHeader.\n\nThis scheme is reused later for the XLogRecord xl_tot_len field over\nat [0], and FWIW is thus being reused. 
Sure, it's tailored to this WAL\nuse case, but IMO we're getting good value from it. We don't use\nprotobuf or JSON for WAL, we use our own serialization format. Having\nsome specialized encoding/decoding in that format for certain fields\nis IMO quite acceptable.\n\n> Yes - I proposed that and wrote an implementation of reasonably efficient\n> varint encoding. Here's my prototype:\n> https://postgr.es/m/20221004234952.anrguppx5owewb6n%40awork3.anarazel.de\n\nAs I mentioned on that thread, that prototype has a significant\nprobability of doing nothing to improve WAL size, or even increasing\nthe WAL size for installations which consume a lot of OIDs.\n\n> I think it's a bad tradeoff to write lots of custom varint encodings, just to\n> eek out a bit more space savings.\n\nThis is only a single \"custom\" varint encoding though, if you can even\ncall it that. It makes a field's size depend on flags set in another\nbyte, which is not that much different from the existing use of\nXLR_BLOCK_ID_DATA_[LONG, SHORT].\n\n> The increase in code complexity IMO makes it a bad tradeoff.\n\nPardon me for asking, but what would you consider to be a good\ntradeoff then? I think the code relating to the WAL storage format is\nabout as simple as you can get it within the feature set it provides\nand the size of the resulting records. While I think there is still\nmuch to gain w.r.t. 
WAL record size, I don't think we can get much of\nthose improvements without adding at least some amount of complexity,\nsomething I think to be true for most components in PostgreSQL.\n\nSo, except for redesigning significant parts of the public WAL APIs,\nare we just going to ignore any potential improvements because they\n\"increase code complexity\"?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://commitfest.postgresql.org/43/4386/\n\n\n", "msg_date": "Mon, 25 Sep 2023 19:16:50 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "Hi,\n\n> This scheme is reused later for the XLogRecord xl_tot_len field over\n> at [0], and FWIW is thus being reused. Sure, it's tailored to this WAL\n> use case, but IMO we're getting good value from it. We don't use\n> protobuf or JSON for WAL, we use our own serialization format. Having\n> some specialized encoding/decoding in that format for certain fields\n> is IMO quite acceptable.\n>\n> > Yes - I proposed that and wrote an implementation of reasonably efficient\n> > varint encoding. Here's my prototype:\n> > https://postgr.es/m/20221004234952.anrguppx5owewb6n%40awork3.anarazel.de\n>\n> As I mentioned on that thread, that prototype has a significant\n> probability of doing nothing to improve WAL size, or even increasing\n> the WAL size for installations which consume a lot of OIDs.\n>\n> > I think it's a bad tradeoff to write lots of custom varint encodings, just to\n> > eek out a bit more space savings.\n>\n> This is only a single \"custom\" varint encoding though, if you can even\n> call it that. 
It makes a field's size depend on flags set in another\n> byte, which is not that much different from the existing use of\n> XLR_BLOCK_ID_DATA_[LONG, SHORT].\n>\n> > The increase in code complexity IMO makes it a bad tradeoff.\n>\n> Pardon me for asking, but what would you consider to be a good\n> tradeoff then? I think the code relating to the WAL storage format is\n> about as simple as you can get it within the feature set it provides\n> and the size of the resulting records. While I think there is still\n> much to gain w.r.t. WAL record size, I don't think we can get much of\n> those improvements without adding at least some amount of complexity,\n> something I think to be true for most components in PostgreSQL.\n>\n> So, except for redesigning significant parts of the public WAL APIs,\n> are we just going to ignore any potential improvements because they\n> \"increase code complexity\"?\n\nHere are my two cents.\n\nI definitely see the value in having reusable varint encoding. This\nwould probably be a good option for extendable TOAST pointers,\ncompression dictionaries, and perhaps other patches.\n\nIn this particular case however Matthias has good arguments that this\nis not the right tool for this particular task, IMO. We don't use\nrbtrees for everything that needs a map from x to y. Hash tables have\nother compromises. Sometimes one container is a better fit, sometimes\nthe other, sometimes none and we implement a fine tuned container for\nthe given case. 
Same here.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 26 Sep 2023 18:33:59 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "On Tue, 26 Sept 2023 at 02:09, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Tue, 19 Sept 2023 at 01:03, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-05-18 19:22:26 +0300, Heikki Linnakangas wrote:\n> > > On 18/05/2023 17:59, Matthias van de Meent wrote:\n> > > > It changes the block IDs used to fit in 6 bits, using the upper 2 bits\n> > > > of the block_id field to store how much data is contained in the\n> > > > record (0, <=UINT8_MAX, or <=UINT16_MAX bytes).\n> > >\n> > > Perhaps we should introduce a few generic inline functions to do varint\n> > > encoding. That could be useful in many places, while this scheme is very\n> > > tailored for XLogRecordBlockHeader.\n>\n> This scheme is reused later for the XLogRecord xl_tot_len field over\n> at [0], and FWIW is thus being reused. Sure, it's tailored to this WAL\n> use case, but IMO we're getting good value from it. We don't use\n> protobuf or JSON for WAL, we use our own serialization format. Having\n> some specialized encoding/decoding in that format for certain fields\n> is IMO quite acceptable.\n>\n> > Yes - I proposed that and wrote an implementation of reasonably efficient\n> > varint encoding. 
Here's my prototype:\n> > https://postgr.es/m/20221004234952.anrguppx5owewb6n%40awork3.anarazel.de\n>\n> As I mentioned on that thread, that prototype has a significant\n> probability of doing nothing to improve WAL size, or even increasing\n> the WAL size for installations which consume a lot of OIDs.\n>\n> > I think it's a bad tradeoff to write lots of custom varint encodings, just to\n> > eek out a bit more space savings.\n>\n> This is only a single \"custom\" varint encoding though, if you can even\n> call it that. It makes a field's size depend on flags set in another\n> byte, which is not that much different from the existing use of\n> XLR_BLOCK_ID_DATA_[LONG, SHORT].\n>\n> > The increase in code complexity IMO makes it a bad tradeoff.\n>\n> Pardon me for asking, but what would you consider to be a good\n> tradeoff then? I think the code relating to the WAL storage format is\n> about as simple as you can get it within the feature set it provides\n> and the size of the resulting records. While I think there is still\n> much to gain w.r.t. 
WAL record size, I don't think we can get much of\n> those improvements without adding at least some amount of complexity,\n> something I think to be true for most components in PostgreSQL.\n>\n> So, except for redesigning significant parts of the public WAL APIs,\n> are we just going to ignore any potential improvements because they\n> \"increase code complexity\"?\n\nI'm seeing that there has been no activity in this thread for nearly 4\nmonths, I'm planning to close this in the current commitfest unless\nsomeone is planning to take it forward.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 21 Jan 2024 07:44:14 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "Hi,\n\n> I'm seeing that there has been no activity in this thread for nearly 4\n> months, I'm planning to close this in the current commitfest unless\n> someone is planning to take it forward.\n\nI don't think that closing CF entries purely due to inactivity is a\ngood practice (neither something we did before) as long as there is\ncode, it applies, etc. There are a lot of patches and few people\nworking on them. 
Inactivity in a given thread doesn't necessarily\nindicate lack of interest, more likely lack of resources.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 22 Jan 2024 13:38:23 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "On Mon, 22 Jan 2024 at 16:08, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi,\n>\n> > I'm seeing that there has been no activity in this thread for nearly 4\n> > months, I'm planning to close this in the current commitfest unless\n> > someone is planning to take it forward.\n>\n> I don't think that closing CF entries purely due to inactivity is a\n> good practice (neither something we did before) as long as there is\n> code, it applies, etc. There are a lot of patches and few people\n> working on them. Inactivity in a given thread doesn't necessarily\n> indicate lack of interest, more likely lack of resources.\n\nThere are a lot of patches like this and there is no clear way to find\nout if someone wants to work on it or if they have lost interest in\nit. That is the reason, I thought to send out a mail so that the\nauthor/reviewer can reply and take it to the next state like ready for\ncommitter state. If the author/reviewer is not planning to work in\nthis commitfest, but has plans to work in the next commitfest we can\nmove this to the next commitfest. 
I don't see a better way to identify\nif the patch has interest or not.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 22 Jan 2024 22:29:34 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "On Mon, Jan 22, 2024 at 5:38 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> I don't think that closing CF entries purely due to inactivity is a\n> good practice (neither something we did before) as long as there is\n> code, it applies, etc. There are a lot of patches and few people\n> working on them. Inactivity in a given thread doesn't necessarily\n> indicate lack of interest, more likely lack of resources.\n\nIf the CF entry doesn't serve the purpose of allowing someone to find\npatches in need of review, it's worse than useless. Patches that\naren't getting reviewed should stay in the CF until they do, or until\nwe make a collective decision that we don't care about or want that\nparticular patch enough to bother. But patches that are not ready for\nreview need to get out until such time as they are. Otherwise they're\njust non-actionable clutter. And unfortunately we have so much of that\nin the CF app now that finding things that really need review is time\nconsuming and difficult. That REALLY needs to be cleaned up.\n\nIn the case of this particular patch, I think the problem is that\nthere's no consensus on the design. There's not a ton of debate on\nthis thread, but thread [1] linked in the original post contains a lot\nof vigorous debate about what the right thing to do is here and I\ndon't believe we reached any meeting of the minds. In light of that\nlack of agreement, I'm honestly a bit confused why Matthias even found\nit worthwhile to submit this on a new thread. 
I think we all agree\nwith him that there's room for improvement here, but we don't agree on\nwhat particular form that improvement should take, and as long as that\nagreement is lacking, I find it hard to imagine anything getting\ncommitted. The task right now is to get agreement on something, and\nleaving the CF entry open longer isn't going make the people who have\nalready expressed opinions start agreeing with each other more than\nthey do already.\n\nIt looks like I never replied to\nhttps://www.postgresql.org/message-id/20221019192130.ebjbycpw6bzjry4v%40awork3.anarazel.de\nbut, FWIW, I agree with Andres that applying the same technique to\nmultiple fields that are stored together (DB OID, TS OID, rel #, block\n#) is unlikely in practice to produce many cases that regress. But the\nquestion for this thread is really more about whether we're OK with\nusing ad-hoc bit swizzling to reduce the size of xlog records or\nwhether we want to insist on the use of a uniform varint encoding.\nHeikki and Andres both seem to favor the latter. IIRC, I was initially\nmore optimistic about ad-hoc bit swizzling being a potentially\nacceptable technique, but I'm not convinced enough about it to argue\nagainst two very smart committers both of whom know more about\nmicro-optimizing performance than I do, and nobody else seems to\nmaking this argument on this thread either, so I just don't really see\nhow this patch is ever going to go anywhere in its current form.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jan 2024 12:23:42 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "On 22/01/2024 19:23, Robert Haas wrote:\n> In the case of this particular patch, I think the problem is that\n> there's no consensus on the design. 
There's not a ton of debate on\n> this thread, but thread [1] linked in the original post contains a lot\n> of vigorous debate about what the right thing to do is here and I\n> don't believe we reached any meeting of the minds.\n\nYeah, so it seems.\n\n> It looks like I never replied to\n> https://www.postgresql.org/message-id/20221019192130.ebjbycpw6bzjry4v%40awork3.anarazel.de\n> but, FWIW, I agree with Andres that applying the same technique to\n> multiple fields that are stored together (DB OID, TS OID, rel #, block\n> #) is unlikely in practice to produce many cases that regress. But the\n> question for this thread is really more about whether we're OK with\n> using ad-hoc bit swizzling to reduce the size of xlog records or\n> whether we want to insist on the use of a uniform varint encoding.\n> Heikki and Andres both seem to favor the latter. IIRC, I was initially\n> more optimistic about ad-hoc bit swizzling being a potentially\n> acceptable technique, but I'm not convinced enough about it to argue\n> against two very smart committers both of whom know more about\n> micro-optimizing performance than I do, and nobody else seems to\n> making this argument on this thread either, so I just don't really see\n> how this patch is ever going to go anywhere in its current form.\n\nI don't have a clear idea of how to proceed with this either. Some \nthoughts I have:\n\nUsing varint encoding makes sense for length fields. The common values \nare small, and if a length of anything is large, then the size of the \nlength field itself is insignificant compared to the actual data.\n\nI don't like using varint encoding for OID. They might be small in \ncommon cases, but it feels wrong to rely on that. They're just arbitrary \nnumbers. We could pick them randomly, it's just an implementation detail \nthat we use a counter to choose the next one. 
I really dislike the idea \nthat someone would do a pg_dump + restore, just to get smaller OIDs and \nsmaller WAL as a result.\n\nIt does make sense to have a fast-path (small-path?) for 0 OIDs though.\n\nTo shrink OIDs fields, you could refer to earlier WAL records. A special \nvalue for \"same relation as in previous record\", or something like that. \nNow we're just re-inventing LZ-style compression though. Might as well \nuse LZ4 or Snappy or something to compress the whole WAL stream. It's a \nbit tricky to get the crash-safety right, but shouldn't be impossible.\n\nHas anyone seriously considered implementing wholesale compression of WAL?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 2 Feb 2024 15:52:50 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" }, { "msg_contents": "On Fri, Feb 2, 2024 at 8:52 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> To shrink OIDs fields, you could refer to earlier WAL records. A special\n> value for \"same relation as in previous record\", or something like that.\n> Now we're just re-inventing LZ-style compression though. Might as well\n> use LZ4 or Snappy or something to compress the whole WAL stream. It's a\n> bit tricky to get the crash-safety right, but shouldn't be impossible.\n>\n> Has anyone seriously considered implementing wholesale compression of WAL?\n\nI thought about the idea of referring to earlier WAL and/or undo\nrecords when working on zheap. It seems tricky, because what if replay\nstarts after those WAL records and you can't refer back to them? It's\nOK if you can make sure that you never depend on anything prior to the\nlatest checkpoint, but the natural way to make that work is to add\nmore looping like what we already do for FPIs, and then you have to\nworry about whether that extra logic is going to be more expensive\nthan what you save. 
FPIs are so expensive that we can afford to go to\na lot of trouble to avoid them and still come out ahead, but storing\nthe full OID instead of an OID reference isn't nearly in the same\ncategory.\n\nI also thought about trying to refer to earlier items on the same\npage, thinking that the locality would make things easier. But it\ndoesn't, because we don't know which page will ultimately contain the\nWAL record until quite late, so we can't reason about what's on the\nsame page when constructing it.\n\nWholesale compression of WAL might run into some of the same issues,\ne.g. if you don't want to compress each record individually, that\nmeans you can't compress until you know the insert position. And even\nthen, if you want the previous data to be present in the compressor as\ncontext, you almost need all the WAL compression to be done by a\nsingle process. But if the LSNs are relative to the compressed stream,\nyou have to wait for that compression to finish before you can\ndetermine the LSN of the next record, which seems super-painful, and\nif they're relative to the uncompressed stream, then mapping them onto\nfixed-size files gets tricky.\n\nMy hunch is that we can squeeze more out of the existing architecture\nwith a lot less work than it would take to do major rearchitecture\nlike compressing everything. I don't know how we can agree on a way of\ndoing that because everybody's got slightly different ideas about the\nright way to do this. But if agreeing on how to evolve the system\nwe've got seems harder then rewriting it, we need to stop worrying\nabout WAL overhead and learn how to work together better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Feb 2024 10:42:54 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XLog size reductions: smaller XLRec block header for PG17" } ]
[ { "msg_contents": "Hi,\n\nI took some time to look at the Meson build for Postgres. I contribute\nsome of the time to Meson, itself. Within this patchset you will find\npretty small changes. Most of the commits are attempting to create more\nconsistency with the surrounding code. I think there is more that can be\ndone to improve the build a bit like including subprojects for optional\ndependencies like lz4, zstd, etc.\n\nWhile I was reading through the code, I also noticed a few XXX/FIXMEs. I\ndon't mind taking a look at those in the future, but I think I need more\ncontext for almost all of them since I am brand new to Postgres\ndevelopment. Since I also have _some_ sway in the Meson community, I can\nhelp get more attention to Meson issues that benefit Postgres. I\ncurrently note 3 Meson issues that are commented in the code for\ninstance. If anyone ever has any problems or questions about Meson or\nspecifically the Meson build of Postgres, I am more than happy to\nreceive an email to see if I can help out.\n\nHighlighting the biggest changes in this patchset:\n\n- Add Meson overrides\n- Remove Meson program options for specifying paths\n\nEverything but the last patch could most likely be backported to\nprevious maintained releases including Meson, though the patchset is\ninitially targeting the master branch.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Thu, 18 May 2023 10:36:59 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Meson build updates" }, { "msg_contents": "Received a review from a Meson maintainer. Here is a v2.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Fri, 09 Jun 2023 09:28:51 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "Hi,\n\nOn 2023-05-18 10:36:59 -0500, Tristan Partin wrote:\n> I took some time to look at the Meson build for Postgres. 
I contribute\n> some of the time to Meson, itself. Within this patchset you will find\n> pretty small changes. Most of the commits are attempting to create more\n> consistency with the surrounding code. I think there is more that can be\n> done to improve the build a bit like including subprojects for optional\n> dependencies like lz4, zstd, etc.\n\nThanks for looking over these!\n\n\n\n> From b35ecb2c8dcd71608f98af1e0ec19d965099ceab Mon Sep 17 00:00:00 2001\n> From: Tristan Partin <tristan@neon.tech>\n> Date: Wed, 17 May 2023 09:40:02 -0500\n> Subject: [PATCH postgres v1 12/17] Make finding pkg-config(python3) more\n> robust\n> \n> It is a possibility that the installation can't be found. Checking for\n> Python.h is redundant with what Meson does internally.\n\nThat's not what I saw - we had cases where Python.h was missing, but the\npython dependency was found. It's possible that this is dependent on the\nmeson version.\n\n\n> From 47394ffd113d4170e955bc033841cb7e18fd3ac4 Mon Sep 17 00:00:00 2001\n> From: Tristan Partin <tristan@neon.tech>\n> Date: Wed, 17 May 2023 09:44:49 -0500\n> Subject: [PATCH postgres v1 14/17] Reduce branching on Meson version\n> \n> This code had a branch depending on Meson version. Instead, we can just\n> move the system checks to the if statement. I believe this also keeps\n> selinux and systemd from being looked for on non-Linux systems when\n> using Meson < 0.59. Before they would be checked, but obviously fail.\n\nI like the current version better - depending on the meson version makes it\neasy to find what needs to be removed when we upgrade the meson minimum\nversion.\n\n\n\n> From 189d3ac3d5593ce3e475813e4830a29bb4e96f70 Mon Sep 17 00:00:00 2001\n> From: Tristan Partin <tristan@neon.tech>\n> Date: Wed, 17 May 2023 10:36:52 -0500\n> Subject: [PATCH postgres v1 16/17] Add Meson overrides\n> \n> Meson has the ability to do transparent overrides when projects are used\n> as subprojects. 
For instance, say I am building a Postgres extension. I\n> can define Postgres to be a subproject of my extension given the\n> following wrap file:\n> \n> [wrap-git]\n> url = https://git.postgresql.org/git/postgresql.git\n> revision = master\n> depth = 1\n> \n> [provide]\n> dependency_names = libpq\n> \n> Then in my extension (root project), I can have the following line\n> snippet:\n> \n> libpq = dependency('libpq')\n> \n> This will tell Meson to transparently compile libpq prior to it\n> compiling my extension (because I depend on libpq) if libpq isn't found\n> on the host system.\n> \n> I have also added overrides for the various public-facing executables.\n> Though I don't expect them to get much usage, might as well go ahead and\n> override them. They can be used by adding the following line to the\n> aforementioned wrap file:\n\nI think adding more boilerplate to all these places does have some cost. So\nI'm not really convinced this is worth doing.\n\n\n\n\n> From 5ee13f09e4101904dbc9887bd4844eb5f1cb4fea Mon Sep 17 00:00:00 2001\n> From: Tristan Partin <tristan@neon.tech>\n> Date: Wed, 17 May 2023 10:54:53 -0500\n> Subject: [PATCH postgres v1 17/17] Remove Meson program options for specifying\n> paths\n> \n> Meson has a built-in way to override paths without polluting project\n> build options called machine files.\n\nI think meson machine files are just about unusable. For one, you can't\nactually change any of the options without creating a new build dir. 
For\nanother, it's not something that easily can be done on the commandline.\n\nSo I really don't want to remove these options.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Jun 2023 09:41:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On Fri Jun 9, 2023 at 11:41 AM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2023-05-18 10:36:59 -0500, Tristan Partin wrote:\n> > From b35ecb2c8dcd71608f98af1e0ec19d965099ceab Mon Sep 17 00:00:00 2001\n> > From: Tristan Partin <tristan@neon.tech>\n> > Date: Wed, 17 May 2023 09:40:02 -0500\n> > Subject: [PATCH postgres v1 12/17] Make finding pkg-config(python3) more\n> > robust\n> > \n> > It is a possibility that the installation can't be found. Checking for\n> > Python.h is redundant with what Meson does internally.\n>\n> That's not what I saw - we had cases where Python.h was missing, but the\n> python dependency was found. It's possible that this is dependent on the\n> meson version.\n\nEli corrected me on this. Please see version 2 of the patchset.\n>\n> > From 47394ffd113d4170e955bc033841cb7e18fd3ac4 Mon Sep 17 00:00:00 2001\n> > From: Tristan Partin <tristan@neon.tech>\n> > Date: Wed, 17 May 2023 09:44:49 -0500\n> > Subject: [PATCH postgres v1 14/17] Reduce branching on Meson version\n> > \n> > This code had a branch depending on Meson version. Instead, we can just\n> > move the system checks to the if statement. I believe this also keeps\n> > selinux and systemd from being looked for on non-Linux systems when\n> > using Meson < 0.59. Before they would be checked, but obviously fail.\n>\n> I like the current version better - depending on the meson version makes it\n> easy to find what needs to be removed when we upgrade the meson minimum\n> version.\n\nI think the savings of not looking up selinux/systemd on non-Linux\nsystems is pretty big. 
Would you accept a change of something like:\n\nif meson.version().version_compare('>=0.59')\n # do old stuff\nelif host_system == 'linux'\n # do new stuff\nendif\n\nOtherwise, I am happy to remove the patch.\n\n> > From 189d3ac3d5593ce3e475813e4830a29bb4e96f70 Mon Sep 17 00:00:00 2001\n> > From: Tristan Partin <tristan@neon.tech>\n> > Date: Wed, 17 May 2023 10:36:52 -0500\n> > Subject: [PATCH postgres v1 16/17] Add Meson overrides\n> > \n> > Meson has the ability to do transparent overrides when projects are used\n> > as subprojects. For instance, say I am building a Postgres extension. I\n> > can define Postgres to be a subproject of my extension given the\n> > following wrap file:\n> > \n> > [wrap-git]\n> > url = https://git.postgresql.org/git/postgresql.git\n> > revision = master\n> > depth = 1\n> > \n> > [provide]\n> > dependency_names = libpq\n> > \n> > Then in my extension (root project), I can have the following line\n> > snippet:\n> > \n> > libpq = dependency('libpq')\n> > \n> > This will tell Meson to transparently compile libpq prior to it\n> > compiling my extension (because I depend on libpq) if libpq isn't found\n> > on the host system.\n> > \n> > I have also added overrides for the various public-facing executables.\n> > Though I don't expect them to get much usage, might as well go ahead and\n> > override them. They can be used by adding the following line to the\n> > aforementioned wrap file:\n>\n> I think adding more boilerplate to all these places does have some cost. So\n> I'm not really convinced this is worth doing.\n\nCould you explain more about what costs you foresee? I thought this was\na pretty free change :). 
Most of the meson.build files seem to be\npretty much copy/pastes of each other, so if a new executable came\naround, then someone would just get the override line for essentially\nfree, minus changing the binary name/executable name.\n\n> > From 5ee13f09e4101904dbc9887bd4844eb5f1cb4fea Mon Sep 17 00:00:00 2001\n> > From: Tristan Partin <tristan@neon.tech>\n> > Date: Wed, 17 May 2023 10:54:53 -0500\n> > Subject: [PATCH postgres v1 17/17] Remove Meson program options for specifying\n> > paths\n> > \n> > Meson has a built-in way to override paths without polluting project\n> > build options called machine files.\n>\n> I think meson machine files are just about unusable. For one, you can't\n> actually change any of the options without creating a new build dir. For\n> another, it's not something that easily can be done on the commandline.\n>\n> So I really don't want to remove these options.\n\nI felt like this would be the most controversial change. What could be\ndone in upstream Meson to make this a more enjoyable experience? I do,\nhowever, disagree with the usability of machine files. Could you add a\nlittle context about what you find unusable about them?\n\nHappy to revert the change after continuing the discussion, of course.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 09 Jun 2023 13:15:27 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "Hi,\n\nOn 2023-06-09 13:15:27 -0500, Tristan Partin wrote:\n> On Fri Jun 9, 2023 at 11:41 AM CDT, Andres Freund wrote:\n> > I like the current version better - depending on the meson version makes it\n> > easy to find what needs to be removed when we upgrade the meson minimum\n> > version.\n> \n> I think the savings of not looking up selinux/systemd on non-Linux\n> systems is pretty big. 
Would you accept a change of something like:\n\nFor selinux it's default disabled, so it doesn't make a practical\ndifference. Outside of linux newer versions of meson are more common IME, so\nI'm not really worried about it for systemd.\n\n\n> if meson.version().version_compare('>=0.59')\n> # do old stuff\n> else if host_system == 'linux'\n> # do new stuff\n> endif\n> \n> Otherwise, I am happy to remove the patch.\n\nHm, I don't quite know how this would end up looking like.\n\n\n> > > From 189d3ac3d5593ce3e475813e4830a29bb4e96f70 Mon Sep 17 00:00:00 2001\n> > > From: Tristan Partin <tristan@neon.tech>\n> > > Date: Wed, 17 May 2023 10:36:52 -0500\n> > > Subject: [PATCH postgres v1 16/17] Add Meson overrides\n> > > \n> > > Meson has the ability to do transparent overrides when projects are used\n> > > as subprojects. For instance, say I am building a Postgres extension. I\n> > > can define Postgres to be a subproject of my extension given the\n> > > following wrap file:\n> > > \n> > > [wrap-git]\n> > > url = https://git.postgresql.org/git/postgresql.git\n> > > revision = master\n> > > depth = 1\n> > > \n> > > [provide]\n> > > dependency_names = libpq\n> > > \n> > > Then in my extension (root project), I can have the following line\n> > > snippet:\n> > > \n> > > libpq = dependency('libpq')\n> > > \n> > > This will tell Meson to transparently compile libpq prior to it\n> > > compiling my extension (because I depend on libpq) if libpq isn't found\n> > > on the host system.\n> > > \n> > > I have also added overrides for the various public-facing exectuables.\n> > > Though I don't expect them to get much usage, might as well go ahead and\n> > > override them. They can be used by adding the following line to the\n> > > aforementioned wrap file:\n> >\n> > I think adding more boilerplate to all these places does have some cost. 
So\n> > I'm not really convinced this is worth doign.\n> \n> Could you explain more about what costs you foresee?\n\nRepetitive code that needs to be added to each further binary we add. I don't\nmind doing that if it has a use case, but I'm not sure I see the use case for\nrandom binaries...\n\n\n> > > From 5ee13f09e4101904dbc9887bd4844eb5f1cb4fea Mon Sep 17 00:00:00 2001\n> > > From: Tristan Partin <tristan@neon.tech>\n> > > Date: Wed, 17 May 2023 10:54:53 -0500\n> > > Subject: [PATCH postgres v1 17/17] Remove Meson program options for specifying\n> > > paths\n> > > \n> > > Meson has a built-in way to override paths without polluting project\n> > > build options called machine files.\n> >\n> > I think meson machine files are just about unusable. For one, you can't\n> > actually change any of the options without creating a new build dir. For\n> > another, it's not something that easily can be done on the commandline.\n> >\n> > So I really don't want to remove these options.\n> \n> I felt like this would be the most controversial change. What could be\n> done in upstream Meson to make this a more enjoyable experience?\n\nI think *requiring* separate files is a really poor experience when you come\nfrom some other system, where those could trivially be overwritten on the\ncommandline.\n\nThe biggest change to make them more usable would be to properly reconfigure\nwhen the contents of machine file change. IIRC configure is rerun, but the\nchanges aren't taken into account.\n\n\n> I do however disagree with the usability of machine files. Could you add a\n> little context about what you find unusable about them?\n\nI can quickly change a meson option and run a build and tests. Trivially\nscriptable. Whereas with a machine file I need to write code to edit a machine\nfile, re-configure from scratch, and only then can build / run tests.\n\nIt's particularly bad for cross builds, where unfortunately cross files can't\nbe avoided. 
It's imo the one area where autoconf beats meson handily.\n--host=x86_64-w64-mingw32 works for autoconf. Whereas for meson I need to\nmanually write a cross file with a bunch of binaries set.\nhttps://github.com/anarazel/postgres/commit/ae53f21d06b4dadf8e6b90df84000fad9a769eaf#diff-3420ebab4f1dbe2ba7102565b0b84e4d6d01fb8b3c1e375bd439eed604e743f8R1\n\nThere's some helpers aiming to generate cross files, but I've not been able to\ngenerate something useful for cross compiling to windows, for example.\n\nI've been unable to generate something like referenced in the above commit in\na reasonably concise way with env2mfile.\n\n\nIn the end, I also just don't see a meaningful benefit in forcing the use of\nmachine files.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Jun 2023 11:36:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On Fri Jun 9, 2023 at 1:36 PM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2023-06-09 13:15:27 -0500, Tristan Partin wrote:\n> > On Fri Jun 9, 2023 at 11:41 AM CDT, Andres Freund wrote:\n> > > I like the current version better - depending on the meson version makes it\n> > > easy to find what needs to be removed when we upgrade the meson minimum\n> > > version.\n> > \n> > I think the savings of not looking up selinux/systemd on non-Linux\n> > systems is pretty big. Would you accept a change of something like:\n>\n> For selinux it's default disabled, so it doesn't make a practical\n> difference. Outside of linux newer versions of meson are more common IME, so\n> I'm not really worried about it for systemd.\n>\n>\n> > if meson.version().version_compare('>=0.59')\n> > # do old stuff\n> > else if host_system == 'linux'\n> > # do new stuff\n> > endif\n> > \n> > Otherwise, I am happy to remove the patch.\n>\n> Hm, I don't quite know how this would end up looking like.\n\nActually, nevermind. I don't know what I was talking about. 
I will just\ngo ahead and remove this patch from the set.\n\n> > > > From 189d3ac3d5593ce3e475813e4830a29bb4e96f70 Mon Sep 17 00:00:00 2001\n> > > > From: Tristan Partin <tristan@neon.tech>\n> > > > Date: Wed, 17 May 2023 10:36:52 -0500\n> > > > Subject: [PATCH postgres v1 16/17] Add Meson overrides\n> > > > \n> > > > Meson has the ability to do transparent overrides when projects are used\n> > > > as subprojects. For instance, say I am building a Postgres extension. I\n> > > > can define Postgres to be a subproject of my extension given the\n> > > > following wrap file:\n> > > > \n> > > > [wrap-git]\n> > > > url = https://git.postgresql.org/git/postgresql.git\n> > > > revision = master\n> > > > depth = 1\n> > > > \n> > > > [provide]\n> > > > dependency_names = libpq\n> > > > \n> > > > Then in my extension (root project), I can have the following line\n> > > > snippet:\n> > > > \n> > > > libpq = dependency('libpq')\n> > > > \n> > > > This will tell Meson to transparently compile libpq prior to it\n> > > > compiling my extension (because I depend on libpq) if libpq isn't found\n> > > > on the host system.\n> > > > \n> > > > I have also added overrides for the various public-facing exectuables.\n> > > > Though I don't expect them to get much usage, might as well go ahead and\n> > > > override them. They can be used by adding the following line to the\n> > > > aforementioned wrap file:\n> > >\n> > > I think adding more boilerplate to all these places does have some cost. So\n> > > I'm not really convinced this is worth doign.\n> > \n> > Could you explain more about what costs you foresee?\n>\n> Repetitive code that needs to be added to each further binary we add. I don't\n> mind doing that if it has a use case, but I'm not sure I see the use case for\n> random binaries...\n\nI want to make sure I am only adding it to things that are user-facing.\nSo if I have added the line to executables that are for internal use\nonly, please correct me. 
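For reference, the "override line" being discussed here is Meson's override API. A rough sketch of what such lines look like — `libpq_dep` and `psql_exe` are hypothetical variable names, not necessarily the ones used in the actual patch:

```meson
# After building the library/binary, register it so that a parent project
# consuming PostgreSQL as a subproject can resolve dependency('libpq') and
# find_program('psql') to the freshly built artifacts instead of system ones.
meson.override_dependency('libpq', libpq_dep)
meson.override_find_program('psql', psql_exe)
```

With these in place, a consumer's plain `dependency('libpq')` call transparently falls back to building the subproject when no system libpq is found.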
In general, I override for all user-facing\nprograms/dependencies because I never know how some end-user might use\nthe binaries.\n\nPerhaps what might sway you is that the old method (libpq = libpq_dep)\nis very error prone because variable names become part of the public API\nof your build description which no one likes. In the new way, which is\nonly possible when specifying overrides, as long as binary names or\npkg-config names don't change, you get stability no matter how you name\nyour variables.\n\n> > > > From 5ee13f09e4101904dbc9887bd4844eb5f1cb4fea Mon Sep 17 00:00:00 2001\n> > > > From: Tristan Partin <tristan@neon.tech>\n> > > > Date: Wed, 17 May 2023 10:54:53 -0500\n> > > > Subject: [PATCH postgres v1 17/17] Remove Meson program options for specifying\n> > > > paths\n> > > > \n> > > > Meson has a built-in way to override paths without polluting project\n> > > > build options called machine files.\n> > >\n> > > I think meson machine files are just about unusable. For one, you can't\n> > > actually change any of the options without creating a new build dir. For\n> > > another, it's not something that easily can be done on the commandline.\n> > >\n> > > So I really don't want to remove these options.\n> > \n> > I felt like this would be the most controversial change. What could be\n> > done in upstream Meson to make this a more enjoyable experience?\n>\n> I think *requiring* separate files is a really poor experience when you come\n> from some other system, where those could trivially be overwritten on the\n> commandline.\n\nHmm. I could maybe ask around for a --program command line option. Could\nyou provide the syntax for what autotools does? That way I can come to\nthe discussion in the Meson channel with prior art. I personally don't\nfind the files that annoying, but to each their own.\n\n> The biggest change to make them more usable would be to properly reconfigure\n> when the contents of machine file change. 
IIRC configure is rerun, but the\n> changes aren't taken into account.\n\nI could not reproduce this. Perhaps you were testing with an older Meson\nwhere that was the case.\n\n# meson.build\nproject('mytest')\n\nmyprog = find_program('myprog')\nmessage(myprog.full_path())\n\ntest('dummy', find_program('echo'), args: [myprog.full_path()])\n\n# file.ini\n[binaries]\nmyprog = '/usr/bin/python3'\n\n# CLI\nmeson setup build\nmeson test -C build\nsed -i 's/python3/python2/' file.ini\nmeson test -C build\n\n> > I do however disagree with the usability of machine files. Could you add a\n> > little context about what you find unusable about them?\n>\n> I can quickly change a meson option and run a build and tests. Trivially\n> scriptable. Whereas with a machine file I need to write code to edit a machine\n> file, re-configure from scratch, and only then can build / run tests.\n>\n> It's particularly bad for cross builds, where unfortunately cross files can't\n> be avoided. It's imo the one area where autoconf beats meson handily.\n> --host=x86_64-w64-mingw32 works for autoconf. 
Whereas for meson I need to\n> manually write a cross file with a bunch of binaries set.\n> https://github.com/anarazel/postgres/commit/ae53f21d06b4dadf8e6b90df84000fad9a769eaf#diff-3420ebab4f1dbe2ba7102565b0b84e4d6d01fb8b3c1e375bd439eed604e743f8R1\n\nOne thing that would help you with cross files is to use constants[0].\n\n> There's some helpers aiming to generate cross files, but I've not been able to\n> generate something useful for cross compiling to windows, for example.\n>\n> I've been unable to generate something like referenced in the above commit in\n> a reasonably concise way with env2mfile.\n\nI, too, could not generate anything meaningful with the following\ncommand line.\n\nCC=testcc CXX=testcxx AR=testar STRIP=teststrip \\\n PKGCONFIG=testpkgconfig WINDRES=testwindres meson env2mfile \\\n --cross --system windows --cpu x86_64 --cpu-family x86_64 \\\n --endian little -o windows.ini\n\n[binaries]\n# Compilers\nc = ['testcc']\ncpp = ['testcxx']\n\n# Other binaries\n\n[properties]\n\n[host_machine]\ncpu = 'x86_64'\ncpu_family = 'x86_64'\nendian = 'little'\nsystem = 'windows'\n\n> In the end, I also just don't see a meaningful benefit in forcing the use of\n> machine files.\n\nI think it is best to use patterns tools want you to use. If aren't moved at\nall by the reconfigure behavior actually working, I will drop the patch.\n\n[0]: https://mesonbuild.com/Machine-files.html#constants\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 12 Jun 2023 11:54:42 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "Hi,\n\nOn 2023-06-12 11:54:42 -0500, Tristan Partin wrote:\n> On Fri Jun 9, 2023 at 1:36 PM CDT, Andres Freund wrote:\n> > The biggest change to make them more usable would be to properly reconfigure\n> > when the contents of machine file change. 
IIRC configure is rerun, but the\n> > changes aren't taken into account.\n> \n> I could not reproduce this. Perhaps you were testing with an older Meson\n> where that was the case\n> \n> # meson.build\n> project('mytest')\n> \n> myprog = find_program('myprog')\n> message(myprog.full_path())\n> \n> test('dummy', find_program('echo'), args: [myprog.full_path()])\n> \n> # file.ini\n> [binaries]\n> myprog = '/usr/bin/python3'\n> \n> # CLI\n> meson setup build\n> meson test -C build\n> sed -i 's/python3/python2/' file.ini\n> meson test -C build\n\nIt's possible that it doesn't happen in all contexts. I just reproduced the\nproblem I had, changing\n\n[binaries]\nllvm-config = '/usr/bin/llvm-config-13'\n\nto\n\n[binaries]\nllvm-config = '/usr/bin/llvm-config-14'\n\ndoes not change which version is used in an existing build tree, but does\nchange what's used in a new build tree.\n\nSame with e.g. changing the C compiler version in a machine file. That also\nonly takes effect in a new tree.\n\n\nThis is with meson HEAD, updated earlier today.\n\n\n> > In the end, I also just don't see a meaningful benefit in forcing the use of\n> > machine files.\n> \n> I think it is best to use patterns tools want you to use.\n\nSometimes. I'd perhaps have a different view if we weren't migrating from\nautoconf, where overwriting binaries was trivially possible...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 12 Jun 2023 10:20:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On Mon Jun 12, 2023 at 12:20 PM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2023-06-12 11:54:42 -0500, Tristan Partin wrote:\n> > On Fri Jun 9, 2023 at 1:36 PM CDT, Andres Freund wrote:\n> > > The biggest change to make them more usable would be to properly reconfigure\n> > > when the contents of machine file change. 
IIRC configure is rerun, but the\n> > > changes aren't taken into account.\n> > \n> > I could not reproduce this. Perhaps you were testing with an older Meson\n> > where that was the case\n> > \n> > # meson.build\n> > project('mytest')\n> > \n> > myprog = find_program('myprog')\n> > message(myprog.full_path())\n> > \n> > test('dummy', find_program('echo'), args: [myprog.full_path()])\n> > \n> > # file.ini\n> > [binaries]\n> > myprog = '/usr/bin/python3'\n> > \n> > # CLI\n> > meson setup build\n> > meson test -C build\n> > sed -i 's/python3/python2/' file.ini\n> > meson test -C build\n>\n> It's possible that it doesn't happen in all contexts. I just reproduced the\n> problem I had, changing\n>\n> [binaries]\n> llvm-config = '/usr/bin/llvm-config-13'\n>\n> to\n>\n> [binaries]\n> llvm-config = '/usr/bin/llvm-config-14'\n>\n> does not change which version is used in an existing build tree, but does\n> change what's used in a new build tree.\n>\n> Same with e.g. changing the C compiler version in a machine file. That also\n> only takes effect in a new tree.\n>\n>\n> This is with meson HEAD, updated earlier today.\n\nMind opening a Meson issue if one doesn't exist already?\n\n> > > In the end, I also just don't see a meaningful benefit in forcing the use of\n> > > machine files.\n> > \n> > I think it is best to use patterns tools want you to use.\n>\n> Sometimes. I'd perhaps have a different view if we weren't migrating from\n> autoconf, where overwriting binaries was trivially possible...\n\nI'll see what I can advocate for, regardless. The following things seem\nrelevant. Might be useful to track in your meta issue on your fork.\n\nhttps://github.com/mesonbuild/meson/issues/7755\nhttps://github.com/mesonbuild/meson/pull/11561\nhttps://github.com/mesonbuild/meson/issues/6180\nhttps://github.com/mesonbuild/meson/issues/11294\n\nAttached you will find a v3 with the offending commits removed. 
I did\nleave the overrides in since you didn't mention it in your last email.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 12 Jun 2023 13:48:57 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On 12.06.23 20:48, Tristan Partin wrote:\n> Attached you will find a v3 with the offending commits removed. I did\n> leave the overrides in since you didn't mention it in your last email.\n\nPatches 1-14 look ok to me. (But I didn't test anything, so some \ncaveats about whether the non-cosmetic patches actually work apply.) If \nwe're fine-tuning the capitalization of the options descriptions, let's \nalso turn \"tap tests\" into \"TAP tests\" and \"TCL\" into \"Tcl\".\n\nPatch 15 about the wrap integration, I'm not sure. I share the concerns \nabout whether this is worth maintaining. 
Maybe put this patch into the \n> commitfest separately, to allow for further study and testing. (The \n> other patches, if they are acceptable, ought to go into PG16, I think.)\n\nOk. I will split it off for further discussion. Expect a v4 tomorrow\nwith a few extra changes with regard to another review from a Meson\nmaintainer.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 12 Jun 2023 16:43:29 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On Fri Jun 9, 2023 at 1:36 PM CDT, Andres Freund wrote:\n> On 2023-06-09 13:15:27 -0500, Tristan Partin wrote:\n> > On Fri Jun 9, 2023 at 11:41 AM CDT, Andres Freund wrote:\n> > > > From 189d3ac3d5593ce3e475813e4830a29bb4e96f70 Mon Sep 17 00:00:00 2001\n> > > > From: Tristan Partin <tristan@neon.tech>\n> > > > Date: Wed, 17 May 2023 10:36:52 -0500\n> > > > Subject: [PATCH postgres v1 16/17] Add Meson overrides\n> > > > \n> > > > Meson has the ability to do transparent overrides when projects are used\n> > > > as subprojects. For instance, say I am building a Postgres extension. 
I\n> > > > can define Postgres to be a subproject of my extension given the\n> > > > following wrap file:\n> > > > \n> > > > [wrap-git]\n> > > > url = https://git.postgresql.org/git/postgresql.git\n> > > > revision = master\n> > > > depth = 1\n> > > > \n> > > > [provide]\n> > > > dependency_names = libpq\n> > > > \n> > > > Then in my extension (root project), I can have the following line\n> > > > snippet:\n> > > > \n> > > > libpq = dependency('libpq')\n> > > > \n> > > > This will tell Meson to transparently compile libpq prior to it\n> > > > compiling my extension (because I depend on libpq) if libpq isn't found\n> > > > on the host system.\n> > > > \n> > > > I have also added overrides for the various public-facing executables.\n> > > > Though I don't expect them to get much usage, might as well go ahead and\n> > > > override them. They can be used by adding the following line to the\n> > > > aforementioned wrap file:\n> > >\n> > > I think adding more boilerplate to all these places does have some cost. So\n> > > I'm not really convinced this is worth doing.\n> > \n> > Could you explain more about what costs you foresee?\n>\n> Repetitive code that needs to be added to each further binary we add. I don't\n> mind doing that if it has a use case, but I'm not sure I see the use case for\n> random binaries...\n\nI was thinking today. When you initially wrote the build, did you try\nusing the src/bin/meson.build file as the place where all the binaries\nwere built? As you say, most of the src/bin/xxx/meson.build files are\nextremely repetitive.\n\nWe had a similar-ish issue in my last project which I solved like:\n\nhttps://github.com/hse-project/hse/blob/master/tools/meson.build#L20-L405\n\nThis is a pattern I used quite frequently in that project. One benefit\nof this approach is that the binaries all end up next to each other in\nthe build tree which is eventually how they'll be laid out in the\ninstall destination. 
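For readers following along, the consolidated pattern proposed here might look roughly like the sketch below. This is an illustrative Meson fragment, not the actual PostgreSQL tree: the table of binaries, the source lists, and the `frontend_code`/`libpq` dependency variables are hypothetical stand-ins.

```meson
# Hypothetical consolidated src/bin/meson.build. Each entry maps a binary
# name to its sources; the dependency variables (frontend_code, libpq)
# are assumed to come from the rest of the build.
bin_targets = {
  'psql': files('psql/startup.c', 'psql/mainloop.c'),
  'pg_dump': files('pg_dump/pg_dump.c', 'pg_dump/common.c'),
}

foreach name, sources : bin_targets
  bin = executable(name, sources,
    dependencies: [frontend_code, libpq],
    install: true,
  )
  # Lets consumers using this project as a subproject resolve
  # find_program(name) to the freshly built binary.
  meson.override_find_program(name, bin)
endforeach
```

With every `executable()` declared in one file, the binaries land side by side in the build tree (`./build/src/bin/psql` rather than `./build/src/bin/psql/psql`), matching the layout described in the message above.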
The other benefit is of course reducing repetitive\ncode.\n\n- ./build/src/bin/psql/psql\n+ ./build/src/bin/psql\n\nLet me know what you think.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 13 Jun 2023 14:56:36 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On Mon Jun 12, 2023 at 4:43 PM CDT, Tristan Partin wrote:\n> On Mon Jun 12, 2023 at 4:24 PM CDT, Peter Eisentraut wrote:\n> > On 12.06.23 20:48, Tristan Partin wrote:\n> > > Attached you will find a v3 with the offending commits removed. I did\n> > > leave the overrides in since you didn't mention it in your last email.\n> >\n> > Patches 1-14 look ok to me. (But I didn't test anything, so some \n> > caveats about whether the non-cosmetic patches actually work apply.) If \n> > we're fine-tuning the capitalization of the options descriptions, let's \n> > also turn \"tap tests\" into \"TAP tests\" and \"TCL\" into \"Tcl\".\n>\n> I'll get to that.\n\nDone.\n\n> > Patch 15 about the wrap integration, I'm not sure. I share the concerns \n> > about whether this is worth maintaining. Maybe put this patch into the \n> > commitfest separately, to allow for further study and testing. (The \n> > other patches, if they are acceptable, ought to go into PG16, I think.)\n>\n> Ok. I will split it off for further discussion. Expect a v4 tomorrow\n> with a few extra changes with regard to another review from a Meson\n> maintainer.\n\nAttached. I did have an idea to help with repetitive build descriptions\nin:\nhttps://www.postgresql.org/message-id/CTBSCT2V1TVP.2AUJVJLNWQVG3%40gonk.\nI kind of want to see Andres' response before this gets merged. But\nthat could also be a follow-on commit if he likes the idea. 
I'll leave\nit up to you to decide.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 13 Jun 2023 15:12:35 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "Wow. I didn't attach them...\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Tue, 13 Jun 2023 15:13:17 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "Forgot that I had gotten a review from a Meson maintainer. The last two\npatches in this set are new. One is just a simple spelling correction.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Tue, 13 Jun 2023 15:47:08 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On 13.06.23 22:47, Tristan Partin wrote:\n> Forgot that I had gotten a review from a Meson maintainer. The last two\n> patches in this set are new. One is just a simple spelling correction.\n\nI have committed patches 0001-0006, 0008, 0010, 0013, 0014, 0016, which \nare all pretty much cosmetic.\n\nThe following patches are now still pending further review:\n\nv5-0007-Tie-adding-C-support-to-the-llvm-Meson-option.patch\nv5-0009-Remove-return-code-check.patch\nv5-0011-Pass-feature-option-through-to-required-kwarg.patch\nv5-0012-Make-finding-pkg-config-python3-more-robust.patch\nv5-0015-Clean-up-some-usage-of-Meson-features.patch\n\n\n\n", "msg_date": "Thu, 29 Jun 2023 13:18:21 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On Thu Jun 29, 2023 at 6:18 AM CDT, Peter Eisentraut wrote:\n> On 13.06.23 22:47, Tristan Partin wrote:\n> > Forgot that I had gotten a review from a Meson maintainer. The last two\n> > patches in this set are new. 
One is just a simple spelling correction.\n>\n> I have committed patches 0001-0006, 0008, 0010, 0013, 0014, 0016, which \n> are all pretty much cosmetic.\n\nThanks Peter.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 29 Jun 2023 08:15:35 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "Hi,\n\nOn 2023-06-29 13:18:21 +0200, Peter Eisentraut wrote:\n> On 13.06.23 22:47, Tristan Partin wrote:\n> > Forgot that I had gotten a review from a Meson maintainer. The last two\n> > patches in this set are new. One is just a simple spelling correction.\n>\n> I have committed patches 0001-0006, 0008, 0010, 0013, 0014, 0016, which are\n> all pretty much cosmetic.\n\nThanks Peter, Tristan! I largely couldn't muster an opinion on most of\nthese...\n\n\n> The following patches are now still pending further review:\n>\n> v5-0007-Tie-adding-C-support-to-the-llvm-Meson-option.patch\n\nHm. One minor disadvantage of this is that if no c++ compiler was found, you\ncan't really see anything about llvm in the the output, nor in meson-log.txt,\nmaking it somewhat hard to figure out why llvm was disabled.\n\nI think something like\n\nelif llvmopt.auto()\n message('llvm requires a C++ compiler')\nendif\n\nShould do the trick?\n\n\n> v5-0009-Remove-return-code-check.patch\n\nPushed.\n\n\n> v5-0011-Pass-feature-option-through-to-required-kwarg.patch\n\nI'm a bit confused how it ended how it's looking like it is right now, but\n... :)\n\nI'm thinking of merging 0011 and relevant parts of 0012 and 0015 into one.\n\n\n> v5-0012-Make-finding-pkg-config-python3-more-robust.patch\n\nThe commit message here is clearly outdated (still talking about Python.h\ncheck not being required). 
Does the remainder actually add any robustness?\n\nI'm on board with removing unnecessary .enabled(), but from what I understand\nwe don't gain anything from adding the if python3_inst.found() branch?\n\n\n> v5-0015-Clean-up-some-usage-of-Meson-features.patch\n\nFor me that's not really an improvement in legibility, the indentation for the\nbulk of each test helps parse things visually. In some cases the removed \"if\nfeature.disabled()\" actually leads to tests being executed when a feature is\ndisabled, e.g. perl -MConfig ... would now be run even perl is disabled.\n\n\nAttached my version of 0007 and 0011 (with some changes from 0012 and\n0015). I'm running a test of those with the extended CI I have in a branch...\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 29 Jun 2023 10:35:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On Thu Jun 29, 2023 at 12:35 PM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2023-06-29 13:18:21 +0200, Peter Eisentraut wrote:\n> > On 13.06.23 22:47, Tristan Partin wrote:\n> > > Forgot that I had gotten a review from a Meson maintainer. The last two\n> > > patches in this set are new. One is just a simple spelling correction.\n> >\n> > I have committed patches 0001-0006, 0008, 0010, 0013, 0014, 0016, which are\n> > all pretty much cosmetic.\n>\n> Thanks Peter, Tristan! I largely couldn't muster an opinion on most of\n> these...\n>\n>\n> > The following patches are now still pending further review:\n> >\n> > v5-0007-Tie-adding-C-support-to-the-llvm-Meson-option.patch\n>\n> Hm. 
One minor disadvantage of this is that if no c++ compiler was found, you\n> can't really see anything about llvm in the the output, nor in meson-log.txt,\n> making it somewhat hard to figure out why llvm was disabled.\n>\n> I think something like\n>\n> elif llvmopt.auto()\n> message('llvm requires a C++ compiler')\n> endif\n>\n> Should do the trick?\n\nYour patch looks great to me.\n\n> > v5-0011-Pass-feature-option-through-to-required-kwarg.patch\n>\n> I'm a bit confused how it ended how it's looking like it is right now, but\n> ... :)\n>\n> I'm thinking of merging 0011 and relevant parts of 0012 and 0015 into one.\n\nYour patch looks great to me.\n\n> > v5-0012-Make-finding-pkg-config-python3-more-robust.patch\n>\n> The commit message here is clearly outdated (still talking about Python.h\n> check not being required). Does the remainder actually add any robustness?\n>\n> I'm on board with removing unnecessary .enabled(), but from what I understand\n> we don't gain anything from adding the if python3_inst.found() branch?\n\nAttached is a more up to date patch, which removes the old part of the\ncommit message. I guess robust is in the eye of the beholder. It is\ndefinitely possible for the installation to not be found (Python.h not\nexisting in newer versions of Meson as an example). All my patch would\ndo is keep the build from crashing if and only if the installation\nwasn't found.\n\nThe unnecessary .enabled() could be folded into the other patch if you\nso chose.\n\n> > v5-0015-Clean-up-some-usage-of-Meson-features.patch\n>\n> For me that's not really an improvement in legibility, the indentation for the\n> bulk of each test helps parse things visually. In some cases the removed \"if\n> feature.disabled()\" actually leads to tests being executed when a feature is\n> disabled, e.g. perl -MConfig ... 
would now be run even perl is disabled.\n\nMakes sense to not take the patch then.\n\n> Attached my version of 0007 and 0011 (with some changes from 0012 and\n> 0015). I'm running a test of those with the extended CI I have in a branch...\n\nThanks for the further review. Did you by chance see my other email in\nanother branch of this thread[0]?\n\n[0]: https://www.postgresql.org/message-id/CTBSCT2V1TVP.2AUJVJLNWQVG3@gonk\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Thu, 29 Jun 2023 13:34:42 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "Hi,\n\nOn 2023-06-29 13:34:42 -0500, Tristan Partin wrote:\n> On Thu Jun 29, 2023 at 12:35 PM CDT, Andres Freund wrote:\n> > > v5-0012-Make-finding-pkg-config-python3-more-robust.patch\n> >\n> > The commit message here is clearly outdated (still talking about Python.h\n> > check not being required). Does the remainder actually add any robustness?\n> >\n> > I'm on board with removing unnecessary .enabled(), but from what I understand\n> > we don't gain anything from adding the if python3_inst.found() branch?\n> \n> Attached is a more up to date patch, which removes the old part of the\n> commit message. I guess robust is in the eye of the beholder. It is\n> definitely possible for the installation to not be found (Python.h not\n> existing in newer versions of Meson as an example). All my patch would\n> do is keep the build from crashing if and only if the installation\n> wasn't found.\n\nAh - I somehow thought .find_installation().dependency() would return a\nnot-found dependency when the install wasn't located.\n\n\n> > Attached my version of 0007 and 0011 (with some changes from 0012 and\n> > 0015). I'm running a test of those with the extended CI I have in a branch...\n> \n> Thanks for the further review. 
Did you by chance see my other email in\n> another branch of this thread[0]?\n> \n> [0]: https://www.postgresql.org/message-id/CTBSCT2V1TVP.2AUJVJLNWQVG3@gonk\n\nI had planned to, but somehow forgot. Will reply.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 29 Jun 2023 11:58:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "Hi,\n\nOn 2023-06-13 14:56:36 -0500, Tristan Partin wrote:\n> I was thinking today. When you initially wrote the build, did you try\n> using the src/bin/meson.build file as the place where all the binaries\n> were built? As you say, most of the src/bin/xxx/meson.build files are\n> extremely repetitive.\n\n> We had a similar-ish issue in my last project which I solved like:\n> \n> https://github.com/hse-project/hse/blob/master/tools/meson.build#L20-L405\n> \n> This is a pattern I used quite frequently in that project. One benefit\n> of this approach is that the binaries all end up next to each other in\n> the build tree which is eventually how they'll be laid out in the\n> install destination. The other benefit is of course reducing repetitive\n> code.\n\nI think the build directory and the source code directory not matching in\nstructure would have made it a considerably harder sell for people to migrate.\n\nI.e.
I considered it, but due to meson's \"no outputs outside of the current\ndirectory\" rule, it didn't (and sadly still doesn't) really seem viable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 29 Jun 2023 12:02:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On Thu Jun 29, 2023 at 1:58 PM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2023-06-29 13:34:42 -0500, Tristan Partin wrote:\n> > On Thu Jun 29, 2023 at 12:35 PM CDT, Andres Freund wrote:\n> > > > v5-0012-Make-finding-pkg-config-python3-more-robust.patch\n> > >\n> > > The commit message here is clearly outdated (still talking about Python.h\n> > > check not being required). Does the remainder actually add any robustness?\n> > >\n> > > I'm on board with removing unnecessary .enabled(), but from what I understand\n> > > we don't gain anything from adding the if python3_inst.found() branch?\n> > \n> > Attached is a more up to date patch, which removes the old part of the\n> > commit message. I guess robust is in the eye of the beholder. It is\n> > definitely possible for the installation to not be found (Python.h not\n> > existing in newer versions of Meson as an example). All my patch would\n> > do is keep the build from crashing if and only if the installation\n> > wasn't found.\n>\n> Ah - I somehow thought .find_installation().dependency() would return a\n> not-found dependency when the install wasn't located.\n\nInspecting the Meson source code... If find_installation() fails, you\nget returned a NonExistingExternalProgram, which doesn't implement\ndependency()[0].\n\n> > > Attached my version of 0007 and 0011 (with some changes from 0012 and\n> > > 0015). I'm running a test of those with the extended CI I have in a branch...\n> > \n> > Thanks for the further review. 
Did you by chance see my other email in\n> > another branch of this thread[0]?\n> > \n> > [0]: https://www.postgresql.org/message-id/CTBSCT2V1TVP.2AUJVJLNWQVG3@gonk\n>\n> I had planned to, but somehow forgot. Will reply.\n\nThanks!\n\n[0]: https://github.com/mesonbuild/meson/blob/master/mesonbuild/programs.py#L333\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 29 Jun 2023 14:03:19 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On Thu Jun 29, 2023 at 2:02 PM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2023-06-13 14:56:36 -0500, Tristan Partin wrote:\n> > I was thinking today. When you initially wrote the build, did you try\n> > using the src/bin/meson.build file as the place where all the binaries\n> > were built? As you say, most of the src/bin/xxx/meson.build files are\n> > extrememly reptitive.\n>\n> > We had a similar-ish issue in my last project which I solved like:\n> > \n> > https://github.com/hse-project/hse/blob/master/tools/meson.build#L20-L405\n> > \n> > This is a pattern I used quite frequently in that project. One benefit\n> > of this approach is that the binaries all end up next to each other in\n> > the build tree which is eventually how they'll be laid out in the\n> > install destination. The other benefit is of course reducing reptitive\n> > code.\n>\n> I think the build directory and the source code directory not matching in\n> structure would have made it considerably harder sell for people to migrate.\n>\n> I.e. I considered it, but due to meson's \"no outputs outside of the current\n> directory\" rule, it didn't (and sadly still doesn't) really seem viable.\n\nYeah, I guess it is a matter if you like the layout being closer to the\ninstallation or the source tree at the expense of repetition. 
I am\npartial to the installation since it is less to type if you run a binary\nfrom the build directory and less repetition, but all good. Maybe\nsomething that could be reconsidered when autotools is dropped.\n\nI still think the overrides are important, at the very least for libpq,\nbut I will defer to your aforementioned decision for now.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 29 Jun 2023 14:07:19 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "Hi,\n\nOn 2023-06-29 14:07:19 -0500, Tristan Partin wrote:\n> I still think the overrides are important, at the very least for libpq,\n> but I will defer to your aforementioned decision for now.\n\nlibpq makes sense to me, fwiw. Just doing it for all binaries individually\ndidn't seem as obviously beneficial.\n\nFWIW, it seems it could be handled somewhat centrally for binaries, the\nbin_targets array should have all that's needed?\n\nSome things won't work from the build directory, btw. E.g. initdb or postgres\nitself.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 29 Jun 2023 12:13:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On Thu Jun 29, 2023 at 2:13 PM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2023-06-29 14:07:19 -0500, Tristan Partin wrote:\n> > I still think the overrides are important, at the very least for libpq,\n> > but I will defer to your aforementioned decision for now.\n>\n> libpq makes sense to me, fwiw. Just doing it for all binaries individually\n> didn't seem as obviously beneficial.\n>\n> FWIW, it seems it could be handled somewhat centrally for binaries, the\n> bin_targets array should have all that's needed?\n\nI will send a follow-up patch with at least libpq overridden. 
I will\ninvestigate the bin_targets.\n\n> Some things won't work from the build directory, btw. E.g. initdb or postgres\n> itself.\n\nI have already fallen victim to this! Lesson learned :).\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 29 Jun 2023 14:17:30 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On Thu Jun 29, 2023 at 2:17 PM CDT, Tristan Partin wrote:\n> On Thu Jun 29, 2023 at 2:13 PM CDT, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2023-06-29 14:07:19 -0500, Tristan Partin wrote:\n> > > I still think the overrides are important, at the very least for libpq,\n> > > but I will defer to your aforementioned decision for now.\n> >\n> > libpq makes sense to me, fwiw. Just doing it for all binaries individually\n> > didn't seem as obviously beneficial.\n> >\n> > FWIW, it seems it could be handled somewhat centrally for binaries, the\n> > bin_targets array should have all that's needed?\n>\n> I will send a follow-up patch with at least libpq overridden. I will\n> investigate the bin_targets.\n\nAttached is a patch which does just that without overriding the\nbinaries. I investigated the bin_targets stuff, but some test\nexecutables get added there, so it wouldn't work out that well.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Wed, 12 Jul 2023 11:10:21 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On 2023-Jul-12, Tristan Partin wrote:\n\n> Attached is a patch which does just that without overriding the\n> binaries. I investigated the bin_targets stuff, but some test\n> executables get added there, so it wouldn't work out that well.\n\nThis seems useful. Maybe we should have some documentation changes to\ngo with it, because otherwise it seems a bit too obscure.\n\nMaybe there are other subdirs where this would be useful. 
ecpg maybe?\n(Much less widely used, but if it's this simple, it shouldn't be much of\na burden)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 12 Jul 2023 18:39:31 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "On Wed Jul 12, 2023 at 11:39 AM CDT, Alvaro Herrera wrote:\n> On 2023-Jul-12, Tristan Partin wrote:\n>\n> > Attached is a patch which does just that without overriding the\n> > binaries. I investigated the bin_targets stuff, but some test\n> > executables get added there, so it wouldn't work out that well.\n>\n> This seems useful. Maybe we should have some documentation changes to\n> go with it, because otherwise it seems a bit too obscure.\n\nDo you have a place in mind on where to document it?\n\n> Maybe there are other subdirs where this would be useful. ecpg maybe?\n> (Much less widely used, but if it's this simple, it shouldn't be much of\n> a burden)\n\nA previous version of this patch[0] did it to all public facing\nbinaries. Andres wasn't super interested in that.\n\n[0]: https://www.postgresql.org/message-id/CSPIJVUDZFKX.3KHMOAVGF94RV%40c3po\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 12 Jul 2023 11:55:50 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "Hi,\n\nOn 2023-06-29 13:34:42 -0500, Tristan Partin wrote:\n> On Thu Jun 29, 2023 at 12:35 PM CDT, Andres Freund wrote:\n> > Hm. 
One minor disadvantage of this is that if no c++ compiler was found, you\n> > can't really see anything about llvm in the the output, nor in meson-log.txt,\n> > making it somewhat hard to figure out why llvm was disabled.\n> >\n> > I think something like\n> >\n> > elif llvmopt.auto()\n> > message('llvm requires a C++ compiler')\n> > endif\n> >\n> > Should do the trick?\n> \n> Your patch looks great to me.\n> \n> > > v5-0011-Pass-feature-option-through-to-required-kwarg.patch\n> >\n> > I'm a bit confused how it ended how it's looking like it is right now, but\n> > ... :)\n> >\n> > I'm thinking of merging 0011 and relevant parts of 0012 and 0015 into one.\n> \n> Your patch looks great to me.\n\nPushed these. Thanks!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Jul 2023 16:30:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "Did you need anything more from the \"Make finding pkg-config(python3)\nmore robust\" patch? That one doesn't seem to have been applied yet.\n\nThanks for your reviews thus far.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 12 Jul 2023 18:53:03 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Meson build updates" }, { "msg_contents": "Hi,\n\nOn 2023-07-12 18:53:03 -0500, Tristan Partin wrote:\n> Did you need anything more from the \"Make finding pkg-config(python3)\n> more robust\" patch? That one doesn't seem to have been applied yet.\n\nSorry, was overloaded at the time and then lost track of it. Pushed now!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 20 Oct 2023 12:35:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson build updates" } ]
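Put together, the wrap-file pattern quoted at the start of this thread and the find_installation() guard discussed near its end look roughly like the following minimal sketch. The project name my_extension is invented for illustration; the wrap contents and the dependency('libpq') / find_installation().dependency() calls restate what is quoted in the messages above, not PostgreSQL's actual build files.

```meson
# subprojects/postgresql.wrap, as quoted in the thread:
#
#   [wrap-git]
#   url = https://git.postgresql.org/git/postgresql.git
#   revision = master
#   depth = 1
#
#   [provide]
#   dependency_names = libpq

# meson.build of a hypothetical out-of-tree extension (root project):
project('my_extension', 'c')

# Because the wrap file provides 'libpq', Meson transparently builds the
# subproject whenever libpq is not found on the host system.
libpq = dependency('libpq')

# The python3 robustness fix: a failed find_installation() returns a
# NonExistingExternalProgram, which does not implement dependency(), so
# guard the dependency() call on found().
python3 = import('python').find_installation(required: false)
if python3.found()
  python3_dep = python3.dependency()
endif
```

Whether overriding the other public-facing executables is worth the per-binary boilerplate remains the open question in the thread; only the libpq override was pursued.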
[ { "msg_contents": "Hi,\n\nI've open-coded $subject many times. I wonder if we should add at least a\nrestricted version of it.\n\nI did find one past discussion of it on the list:\nhttps://www.postgresql.org/message-id/24948.1259797531%40sss.pgh.pa.us\n\nWe have workarounds for it on the wiki:\nhttps://wiki.postgresql.org/wiki/Working_with_Dates_and_Times_in_PostgreSQL#4._Multiplication_and_division_of_INTERVALS_is_under_development_and_discussion_at_this_time\n\nThere are plenty of search results with various, often quite wrong,\nworkarounds.\n\n\nOf course, it's true that there are plenty of intervals where division would not\nresult in a clearly determinable result. E.g. '1 month'::interval / '1 day'::interval.\n\nI think there's no clear result whenever the month component is non-zero,\nalthough possibly there are some cases of using months that could be made to work\n(e.g. '12 months' / '1 month').\n\n\nIn the cases I have wanted interval division, I typically dealt with intervals\nwithout the month component - typically the intervals are the result of\nsubtracting timestamps or such.\n\nOne typical use case for me is to divide the total runtime of a benchmark by\nthe time taken for some portion of that (e.g. time spent waiting for IO).\n\n\nWhat about an interval / interval -> double operator that errors out whenever\nmonth is non-zero? As far as I can tell that would always be deterministic.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 18 May 2023 13:49:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Add operator for dividing interval by an interval" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> What about an interval / interval -> double operator that errors out whenever\n> month is non-zero? As far as I can tell that would always be deterministic.\n\nWe have months, days, and microseconds, and microseconds-per-day isn't\nmuch more stable than days-per-month (because DST).
By the time you\nrestrict this to give deterministic results, I think it won't be\nparticularly useful.\n\nYou could arbitrarily define months as 30 days and days as 24 hours,\nwhich is what some other interval functions do, and then the double\nresult would be well-defined; but how useful is it really?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 May 2023 17:03:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add operator for dividing interval by an interval" }, { "msg_contents": "Hi,\n\nOn 2023-05-18 17:03:24 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > What about an interval / interval -> double operator that errors out whenever\n> > month is non-zero? As far as I can tell that would always be deterministic.\n>\n> We have months, days, and microseconds, and microseconds-per-day isn't\n> much more stable than days-per-month (because DST).\n\nI was about to counter that, if you subtract a timestamp before/after DST\nchanges, you currently don't get a full day for the \"shorter day\":\n\nSET timezone = 'America/Los_Angeles';\nSELECT '2023-03-13 23:00:00-07'::timestamptz - '2023-03-11 23:00:00-08'::timestamptz;\n┌────────────────┐\n│ ?column? │\n├────────────────┤\n│ 1 day 23:00:00 │\n└────────────────┘\n\nwhich afaics would make it fine to just use 24h days when dividing intervals.\n\n\nHowever, that seems to lead to quite broken results:\n\nSET timezone = 'America/Los_Angeles';\nWITH s AS (SELECT '2023-03-11 23:00-08'::timestamptz a, '2023-03-13 23:00-07'::timestamptz b) SELECT a, b, b - a AS b_min_a, a + (b - a) FROM s;\n┌────────────────────────┬────────────────────────┬────────────────┬────────────────────────┐\n│ a │ b │ b_min_a │ ?column? 
│\n├────────────────────────┼────────────────────────┼────────────────┼────────────────────────┤\n│ 2023-03-11 23:00:00-08 │ 2023-03-13 23:00:00-07 │ 1 day 23:00:00 │ 2023-03-13 22:00:00-07 │\n└────────────────────────┴────────────────────────┴────────────────┴────────────────────────┘\n\n\nI subsequently found a comment that seems to reference this in timestamp_mi().\n\t/*----------\n\t *\tThis is wrong, but removing it breaks a lot of regression tests.\n\t *\tFor example:\n\t *\n\n\nHow's this not a significant bug that we need to fix?\n\n\nI'm not sure this ought to be fixed in timestamp_mi() - perhaps the order of\noperations in timestamp_pl_interval() would be a better place?\n\n\nWe probably should document that interval math isn't associative:\n\npostgres[2807421][1]=# SELECT ('2023-03-11 23:00:00-08'::timestamptz + '1 day'::interval) + '23h'::interval;\n┌────────────────────────┐\n│ ?column? │\n├────────────────────────┤\n│ 2023-03-13 22:00:00-07 │\n└────────────────────────┘\n\npostgres[2807421][1]=# SELECT ('2023-03-11 23:00:00-08'::timestamptz + '23h'::interval) + '1day'::interval;\n┌────────────────────────┐\n│ ?column? │\n├────────────────────────┤\n│ 2023-03-13 23:00:00-07 │\n└────────────────────────┘\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 19 May 2023 13:01:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Add operator for dividing interval by an interval" } ]
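The operator Andres proposes at the top of this thread can be modeled outside the database. Below is a rough Python sketch of those semantics, using the internal (months, days, microseconds) interval representation Tom mentions; it is illustrative only, not the server's C implementation, and the 24-hour-day constant bakes in the arbitrary convention Tom's reply describes (defining days as 24 hours, as some existing interval functions do).

```python
USECS_PER_DAY = 24 * 60 * 60 * 1_000_000  # assumes 24-hour days; see Tom's DST caveat

def interval_div(a, b):
    """Sketch of the proposed interval / interval -> double operator.

    a and b are (months, days, microseconds) tuples, mirroring how
    PostgreSQL stores intervals internally.  Per the proposal, error
    out whenever either month component is non-zero, since months have
    no deterministic length in days.
    """
    if a[0] != 0 or b[0] != 0:
        raise ValueError("cannot divide intervals with a non-zero month component")
    b_usecs = b[1] * USECS_PER_DAY + b[2]
    if b_usecs == 0:
        raise ZeroDivisionError("division by a zero-length interval")
    return (a[1] * USECS_PER_DAY + a[2]) / b_usecs

# e.g. total benchmark runtime divided by time spent waiting for IO:
# interval_div((0, 0, 90 * 10**6), (0, 0, 30 * 10**6)) -> 3.0
```

Erroring out on a non-zero month component follows Andres's proposal; whether the remaining day/microsecond division is stable enough to be useful is exactly what Tom's reply questions.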
[ { "msg_contents": "I have completed the first draft of the PG 16 release notes. You can\nsee the output here:\n\n\thttps://momjian.us/pgsql_docs/release-16.html\n\nI will adjust it to the feedback I receive; that URL will quickly show\nall updates.\n\nI learned a few things creating it this time:\n\n* I can get confused over C function names and SQL function names in\n commit messages.\n\n* The sections and ordering of the entries can greatly clarify the\n items.\n\n* The feature count is slightly higher than recent releases:\n\n\trelease-10: 189\n\trelease-11: 170\n\trelease-12: 180\n\trelease-13: 178\n\trelease-14: 220\n\trelease-15: 184\n-->\trelease-16: 200\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 18 May 2023 16:49:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "PG 16 draft release notes ready" }, { "msg_contents": "On 5/18/23 4:49 PM, Bruce Momjian wrote:\r\n> I have completed the first draft of the PG 16 release notes. You can\r\n> see the output here:\r\n> \r\n> \thttps://momjian.us/pgsql_docs/release-16.html\r\n> \r\n> I will adjust it to the feedback I receive; that URL will quickly show\r\n> all updates.\r\n\r\nThanks for going through this. The release announcement draft will \r\nfollow shortly after in a different thread.\r\n\r\n> I learned a few things creating it this time:\r\n> \r\n> * I can get confused over C function names and SQL function names in\r\n> commit messages.\r\n> \r\n> * The sections and ordering of the entries can greatly clarify the\r\n> items.\r\n> \r\n> * The feature count is slightly higher than recent releases:\r\n> \r\n> \trelease-10: 189\r\n> \trelease-11: 170\r\n> \trelease-12: 180\r\n> \trelease-13: 178\r\n> \trelease-14: 220\r\n> \trelease-15: 184\r\n> -->\trelease-16: 200\r\n\r\nThis definitely feels like a very full release. 
Personally I'm very excited.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Thu, 18 May 2023 17:26:17 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On 5/18/23 4:49 PM, Bruce Momjian wrote:\r\n> I have completed the first draft of the PG 16 release notes. You can\r\n> see the output here:\r\n> \r\n> \thttps://momjian.us/pgsql_docs/release-16.html\r\n> \r\n> I will adjust it to the feedback I receive; that URL will quickly show\r\n> all updates.\r\n\r\nStill reading, but saw this:\r\n\r\n Allow incremental sorts in more cases, including DISTINCT (David \r\nRowley)window\r\n\r\nI didn't realize we had a DISTINCT (David Rowley) window, but it sounds \r\nlike an awesome feature ;)\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Thu, 18 May 2023 17:39:08 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 18, 2023 at 05:39:08PM -0400, Jonathan Katz wrote:\n> On 5/18/23 4:49 PM, Bruce Momjian wrote:\n> > I have completed the first draft of the PG 16 release notes. You can\n> > see the output here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-16.html\n> > \n> > I will adjust it to the feedback I receive; that URL will quickly show\n> > all updates.\n> \n> Still reading, but saw this:\n> \n> Allow incremental sorts in more cases, including DISTINCT (David\n> Rowley)window\n> \n> I didn't realize we had a DISTINCT (David Rowley) window, but it sounds like\n> an awesome feature ;)\n\nI have fixed this and several others. 
The URL will show the new text.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 18 May 2023 17:46:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 18, 2023 at 1:49 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n>\n> I learned a few things creating it this time:\n>\n> * I can get confused over C function names and SQL function names in\n> commit messages.\n\nThe commit history covering pg_walinspect was complicated. Some of the\nnewly added stuff was revised multiple times, by multiple authors due\nto changing ideas about the best UI. Here is some concrete feedback\nabout that:\n\n* Two functions that were in 15 that each end in *_till_end_of_wal()\nwere removed for 16, since the same functionality is now provided\nthrough a more intuitive UI: we now tolerate invalid end_lsn values\n\"from the future\", per a new \"Tip\" in the pg_walinspect documentation\nfor 16.\n\nIn my opinion this should (at most) be covered as a compatibility\nitem. It's not really new functionality.\n\n* There is one truly new pg_walinspect function added to 16:\npg_get_wal_block_info(). Its main purpose is to see how each\nindividual block changed -- it's far easier to track how blocks\nchanged over time using the new function. The original \"record\norientated\" functions made that very difficult (regex hacks were\nrequired).\n\npg_get_wal_block_info first appeared under another name, and had\nsomewhat narrower functionality than the final version, all of which\nshouldn't matter to users -- since they never saw a stable release\nwith any of that. 
There is no point in telling users about the commits\nthat changed the name/functionality of pg_get_wal_block_info that came\nonly a couple of months after the earliest version was committed -- to\nusers, it is simply a new function with new functionality.\n\nI also suggest merging the pg_waldump items with the section you've\nadded covering pg_walinspect. A \"pg_walinspect and pg_waldump\" section\nseems natural to me. Some of the enhancements in this area benefit\npg_walinspect and pg_waldump in exactly the same way, since\npg_walinspect is essentially a backend/SQL interface equivalent of\npg_waldump's frontend/CLI interface. Melanie's work on improving the\ndescriptions output for WAL records like Heap's PRUNE and VACUUM\nrecords is a great example of this -- it does exactly the same thing\nfor pg_walinspect and pg_waldump, without directly targeting either\n(it also affects the wal_debug developer option).\n\nIt might also make sense to say that the enhanced WAL record\ndescriptions from Melanie generally apply to records used by VACUUM\nonly.\n\nNote also that the item \"Add pg_waldump option --save-fullpage to dump\nfull page images (David Christensen)\" is tangentially related to\npg_get_wal_block_info(), since you can also get FPIs using\npg_get_wal_block_info() (in fact, that was originally its main\npurpose). 
I'm not saying that you necessarily need to connect them\ntogether in any way, but you might consider it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 18 May 2023 14:53:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 18, 2023 at 02:53:25PM -0700, Peter Geoghegan wrote:\n> On Thu, May 18, 2023 at 1:49 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I will adjust it to the feedback I receive; that URL will quickly show\n> > all updates.\n> >\n> > I learned a few things creating it this time:\n> >\n> > * I can get confused over C function names and SQL function names in\n> > commit messages.\n> \n> The commit history covering pg_walinspect was complicated. Some of the\n> newly added stuff was revised multiple times, by multiple authors due\n> to changing ideas about the best UI. Here is some concrete feedback\n> about that:\n> \n> * Two functions that were in 15 that each end in *_till_end_of_wal()\n> were removed for 16, since the same functionality is now provided\n> through a more intuitive UI: we now tolerate invalid end_lsn values\n> \"from the future\", per a new \"Tip\" in the pg_walinspect documentation\n> for 16.\n> \n> In my opinion this should (at most) be covered as a compatibility\n> item. It's not really new functionality.\n\nSo, I looked at this and the problem is that this is best as a single\nrelease note entry because we are removing and adding, and if I moved it\nto compatibility, I am concerned the new feature will be missed. Since\nWAL inspection is a utility operation, in general, I think having it in\nthe pg_walinspect section makes the most sense.\n \n> * There is one truly new pg_walinspect function added to 16:\n> pg_get_wal_block_info(). Its main purpose is to see how each\n> individual block changed -- it's far easier to track how blocks\n> changed over time using the new function. 
The original \"record\n> orientated\" functions made that very difficult (regex hacks were\n> required).\n> \n> pg_get_wal_block_info first appeared under another name, and had\n> somewhat narrower functionality to the final version, all of which\n> shouldn't matter to users -- since they never saw a stable release\n> with any of that. There is no point in telling users about the commits\n> that changed the name/functionality of pg_get_wal_block_info that came\n> only a couple of months after the earliest version was commited -- to\n> users, it is simply a new function with new functionality.\n\nRight.\n\n> I also suggest merging the pg_waldump items with the section you've\n> added covering pg_walinspect. A \"pg_walinspect and pg_waldump\" section\n> seems natural to me. Some of the enhancements in this area benefit\n> pg_walinspect and pg_waldump in exactly the same way, since\n> pg_walinspect is essentially a backend/SQL interface equivalent of\n> pg_waldump's frontend/CLI interface. Melanie's work on improving the\n> descriptions output for WAL records like Heap's PRUNE and VACUUM\n> records is a great example of this -- it does exactly the same thing\n> for pg_walinspect and pg_waldump, without directly targeting either\n> (it also affects the wal_debug developer option).\n\nWell, pg_waldump is an installed binary while pg_walinspect is an\nextension, so I am not sure where I would put a merged section.\n\n> It might also make sense to say that the enhanced WAL record\n> descriptions from Melanie generally apply to records used by VACUUM\n> only.\n\nOkay, I went with:\n\n\tImprove descriptions of pg_walinspect WAL record descriptions\n\t(Melanie Plageman, Peter Geoghegan)\n\n> Note also that the item \"Add pg_waldump option --save-fullpage to dump\n> full page images (David Christensen)\" is tangentially related to\n> pg_get_wal_block_info(), since you can also get FPIs using\n> pg_get_wal_block_info() (in fact, that was originally its main\n> purpose). 
I'm not saying that you necessarily need to connect them\n> together in any way, but you might consider it.\n\nWell, there is so much _new_ in that tool that listing everything new\nseems confusing.\n\nFYI, I have just added an item about more aggressive freezing:\n\n\t<!--\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\t2022-09-08 [d977ffd92] Instrument freezing in autovacuum log reports.\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\t2022-11-15 [9e5405993] Deduplicate freeze plans in freeze WAL records.\n\tAuthor: Peter Geoghegan <pg@bowt.ie>\n\t2022-12-28 [1de58df4f] Add page-level freezing to VACUUM.\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tDuring non-freeze operations, perform page freezing where appropriate\n\t(Peter Geoghegan)\n\t</para>\n\t\n\t<para>\n\tThis makes full-table freeze vacuums less necessary.\n\t</para>\n\t</listitem>\n\nAll changes committed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 18 May 2023 18:51:59 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 18, 2023 at 3:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> So, I looked at this and the problem is that this is best as a single\n> release note entry because we are removing and adding, and if I moved it\n> to compatibility, I am concerned the new feature will be missed. Since\n> WAL inspection is a utility operation, in general, I think having it in\n> the pg_walinspect section makes the most sense.\n\nI don't understand what you mean by that. The changes to\n*_till_end_of_wal() (the way that those duplicative functions were\nremoved, and more permissive end_lsn behavior was added) is unrelated\nto all of the other changes. 
Plus it's just not very important.\n\n> Okay, I went with:\n>\n> Improve descriptions of pg_walinspect WAL record descriptions\n> (Melanie Plageman, Peter Geoghegan)\n>\n> > Note also that the item \"Add pg_waldump option --save-fullpage to dump\n> > full page images (David Christensen)\" is tangentially related to\n> > pg_get_wal_block_info(), since you can also get FPIs using\n> > pg_get_wal_block_info() (in fact, that was originally its main\n> > purpose). I'm not saying that you necessarily need to connect them\n> > together in any way, but you might consider it.\n>\n> Well, there is so much _new_ in that tool that listing everything new\n> seems confusing.\n\nThere is pretty much one truly new piece of functionality added to\npg_walinspect (the function called pg_get_wal_block_info was added) --\nsince the enhancement to rmgr description output applies equally to\npg_waldump, no matter where you place it in the release notes. So not\nsure what you mean.\n\n> All changes committed.\n\nEven after these changes, the release notes still refer to a function\ncalled \"pg_get_wal_block\". There is no such function, though -- not in\nPostgres 16, and not in any other major version.\n\nAs I said, there is a new function called \"pg_get_wal_block_info\". It\nshould simply be presented as a whole new function that offers novel\nnew functionality compared to what was available in Postgres 15 --\nwithout any further elaboration. 
(It happens to be true that\npg_get_wal_block_info only reached its final form following multiple\nrounds of work in multiple commits, but that is of no consequence to\nusers -- even the earliest form of the function appeared in a commit\nin the Postgres 16 cycle.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 18 May 2023 16:12:26 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, 18 May 2023 at 22:49, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have completed the first draft of the PG 16 release notes. You can\n> see the output here:\n>\n> https://momjian.us/pgsql_docs/release-16.html\n>\n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n\nI'm not sure if bugfixes like these are considered for release notes,\nbut I'm putting it up here just in case:\n\nAs of 8fcb32db (new, only in PG16) we now enforce limits on the size\nof WAL records during construction, where previously we hoped that the\nWAL records didn't exceed those limits.\nThis change is immediately user-visible through a change in behaviour\nof `pg_logical_emit_message(true, repeat('_', 2^30 - 10), repeat('_',\n2^30 - 10))`, and extensions that implement their own rmgrs could also\nsee a change in behavior from this change.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n\n", "msg_date": "Fri, 19 May 2023 01:33:17 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 18, 2023 at 04:12:26PM -0700, Peter Geoghegan wrote:\n> On Thu, May 18, 2023 at 3:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > So, I looked at this and the problem is that this is best as a single\n> > release note entry because we are removing and adding, and if I moved it\n> > to compatibility, I am concerned the new feature will be missed. 
Since\n> > WAL inspection is a utility operation, in general, I think having it in\n> > the pg_walinspect section makes the most sense.\n> \n> I don't understand what you mean by that. The changes to\n> *_till_end_of_wal() (the way that those duplicative functions were\n> removed, and more permissive end_lsn behavior was added) is unrelated\n> to all of the other changes. Plus it's just not very important.\n\nI see what you mean now. I have moved the function removal to the\nincompatibilities section and kept the existing entry but remove the\ntext about the removed functions.\n\n> > Okay, I went with:\n> >\n> > Improve descriptions of pg_walinspect WAL record descriptions\n> > (Melanie Plageman, Peter Geoghegan)\n> >\n> > > Note also that the item \"Add pg_waldump option --save-fullpage to dump\n> > > full page images (David Christensen)\" is tangentially related to\n> > > pg_get_wal_block_info(), since you can also get FPIs using\n> > > pg_get_wal_block_info() (in fact, that was originally its main\n> > > purpose). I'm not saying that you necessarily need to connect them\n> > > together in any way, but you might consider it.\n> >\n> > Well, there is so much _new_ in that tool that listing everything new\n> > seems confusing.\n> \n> There is pretty much one truly new piece of functionality added to\n> pg_walinspect (the function called pg_get_wal_block_info was added) --\n> since the enhancement to rmgr description output applies equally to\n> pg_waldump, no matter where you place it in the release notes. So not\n> sure what you mean.\n\nI see what you mean now. I have removed the mention of\npg_get_wal_block_info() and moved the three items back into the\nextension section since there are only three pg_walinspect items now.\n\n> > All changes committed.\n> \n> Even after these changes, the release notes still refer to a function\n> called \"pg_get_wal_block\". 
There is no such function, though -- not in\n> Postgres 16, and not in any other major version.\n\nOh\n\n> \n> As I said, there is a new function called \"pg_get_wal_block_info\". It\n> should simply be presented as a whole new function that offers novel\n> new functionality compared to what was available in Postgres 15 --\n> without any further elaboration. (It happens to be true that\n> pg_get_wal_block_info only reached its final form following multiple\n> rounds of work in multiple commits, but that is of no consequence to\n> users -- even the earliest form of the function appeared in a commit\n> in the Postgres 16 cycle.)\n\nDone. Please see the URL for the updated text, diff attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Thu, 18 May 2023 21:44:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, May 19, 2023 at 01:33:17AM +0200, Matthias van de Meent wrote:\n> On Thu, 18 May 2023 at 22:49, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have completed the first draft of the PG 16 release notes. 
You can\n> > see the output here:\n> >\n> > https://momjian.us/pgsql_docs/release-16.html\n> >\n> > I will adjust it to the feedback I receive; that URL will quickly show\n> > all updates.\n> \n> I'm not sure if bugfixes like these are considered for release notes,\n> but I'm putting it up here just in case:\n> \n> As of 8fcb32db (new, only in PG16) we now enforce limits on the size\n> of WAL records during construction, where previously we hoped that the\n> WAL records didn't exceed those limits.\n> This change is immediately user-visible through a change in behaviour\n> of `pg_logical_emit_message(true, repeat('_', 2^30 - 10), repeat('_',\n> 2^30 - 10))`, and extensions that implement their own rmgrs could also\n> see a change in behavior from this change.\n\nI saw that commit but I considered it sufficiently rare and sufficiently\ninternal that I did not include it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 18 May 2023 21:45:33 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 18, 2023 at 6:44 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I don't understand what you mean by that. The changes to\n> > *_till_end_of_wal() (the way that those duplicative functions were\n> > removed, and more permissive end_lsn behavior was added) is unrelated\n> > to all of the other changes. Plus it's just not very important.\n>\n> I see what you mean now. I have moved the function removal to the\n> incompatibilities section and kept the existing entry but remove the\n> text about the removed functions.\n\nYour patch now has two separate items for \"[5c1b66280] Rework design\nof functions in pg_walinspect\", but even one item is arguably one too\nmany. 
The \"ending LSN\" item (the second item for this same commit)\nshould probably be removed altogether. If you're going to keep the\nsentences that appear under that second item, then it should at least\nbe consolidated with the first item, in order that commit 5c1b66280\nisn't listed twice.\n\nNote also that the patch doesn't remove a remaining reference to an\nupdate in how pg_get_wal_block_info() works, which (as I've said) is a\nbrand new function as far as users are concerned. Users don't need to\nhear that it has been updated, since these release notes will also be\nthe first time they've been presented with any information about\npg_get_wal_block_info(). (This isn't very important; again, I suggest\nthat you avoid saying anything about any specific function, even if\nyou feel strongly that the \"ending LSN\" issue must be spelled out like\nthis.)\n\n> > There is pretty much one truly new piece of functionality added to\n> > pg_walinspect (the function called pg_get_wal_block_info was added) --\n> > since the enhancement to rmgr description output applies equally to\n> > pg_waldump, no matter where you place it in the release notes. So not\n> > sure what you mean.\n>\n> I see what you mean now. I have removed the mention of\n> pg_get_wal_block_info() and moved the three items back into the\n> extension section since there are only three pg_walinspect items now.\n\nThe wording for this item as it appears in the patch is: \"Improve\ndescriptions of pg_walinspect WAL record descriptions\". 
I suggest the\nfollowing wording be used instead: \"Provide more detailed descriptions\nof certain WAL records in the output of pg_walinspect and pg_waldump\".\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 18 May 2023 19:15:26 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, May 19, 2023 at 4:49 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> I have completed the first draft of the PG 16 release notes. You can\n> see the output here:\n>\n> https://momjian.us/pgsql_docs/release-16.html\n>\n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n>\n> I learned a few things creating it this time:\n>\n> * I can get confused over C function names and SQL function names in\n> commit messages.\n>\n> * The sections and ordering of the entries can greatly clarify the\n> items.\n>\n> * The feature count is slightly higher than recent releases:\n>\n> release-10: 189\n> release-11: 170\n> release-12: 180\n> release-13: 178\n> release-14: 220\n> release-15: 184\n> --> release-16: 200\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Only you can decide what is important to you.\n\n\n\n* When granting role membership, require the granted-by role to be a role\n> that has appropriate permissions (Robert Haas)\n> This is a requirement even when the superuser is granting role membership.\n\n\nan exception would be the granted-by is the bootstrap superuser.\n", "msg_date": "Fri, 19 May 2023 10:41:59 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 18, 2023 at 07:15:26PM -0700, Peter Geoghegan wrote:\n> On Thu, May 18, 2023 at 6:44 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > I don't understand what you mean by that. The changes to\n> > > *_till_end_of_wal() (the way that those duplicative functions were\n> > > removed, and more permissive end_lsn behavior was added) is unrelated\n> > > to all of the other changes. Plus it's just not very important.\n> >\n> > I see what you mean now. 
I have moved the function removal to the\n> > incompatibilities section and kept the existing entry but remove the\n> > text about the removed functions.\n> \n> Your patch now has two separate items for \"[5c1b66280] Rework design\n> of functions in pg_walinspect\", but even one item is arguably one too\n> many. The \"ending LSN\" item (the second item for this same commit)\n\nI see your point. pg_get_wal_block_info() doesn't exist in pre-PG 16,\nso I have removed that text from the release note item, but kept the\nother two functions.\n\n> should probably be removed altogether. If you're going to keep the\n> sentences that appear under that second item, then it should at least\n> be consolidated with the first item, in order that commit 5c1b66280\n> isn't listed twice.\n\nWe can list items twice if they have different focuses.\n\n> Note also that the patch doesn't remove a remaining reference to an\n> update in how pg_get_wal_block_info() works, which (as I've said) is a\n> brand new function as far as users are concerned. Users don't need to\n> hear that it has been updated, since these release notes will also be\n> the first time they've been presented with any information about\n> pg_get_wal_block_info(). (This isn't very important; again, I suggest\n> that you avoid saying anything about any specific function, even if\n> you feel strongly that the \"ending LSN\" issue must be spelled out like\n> this.)\n\nAgreed.\n\n> > > There is pretty much one truly new piece of functionality added to\n> > > pg_walinspect (the function called pg_get_wal_block_info was added) --\n> > > since the enhancement to rmgr description output applies equally to\n> > > pg_waldump, no matter where you place it in the release notes. So not\n> > > sure what you mean.\n> >\n> > I see what you mean now. 
I have removed the mention of\n> > pg_get_wal_block_info() and moved the three items back into the\n> > extension section since there are only three pg_walinspect items now.\n> \n> The wording for this item as it appears in the patch is: \"Improve\n> descriptions of pg_walinspect WAL record descriptions\". I suggest the\n> following wording be used instead: \"Provide more detailed descriptions\n> of certain WAL records in the output of pg_walinspect and pg_waldump\".\n\nI went with, \"Add detailed descriptions of WAL records in pg_walinspect\nand pg_waldump (Melanie Plageman, Peter Geoghegan)\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 18 May 2023 23:12:39 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, May 19, 2023 at 10:41:59AM +0800, jian he wrote:\n> On Fri, May 19, 2023 at 4:49 AM Bruce Momjian <bruce@momjian.us> wrote:\n> * When granting role membership, require the granted-by role to be a role\n> that has appropriate permissions (Robert Haas)\n> This is a requirement even when the superuser is granting role membership.\n> \n> \n> an exception would be the granted-by is the bootstrap superuser.\n\nOkay, updated text is:\n\n\t<listitem>\n\t<para>\n\tWhen granting role membership, require the granted-by role to be a role that has appropriate permissions (Robert Haas)\n\t</para>\n\t\n\t<para>\n\tThis is a requirement even when a non-bootstrap superuser is granting role membership.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 18 May 2023 23:15:09 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { 
"msg_contents": "Hi,\n\nOn 5/18/23 10:49 PM, Bruce Momjian wrote:\n> I have completed the first draft of the PG 16 release notes. You can\n> see the output here:\n> \n> \thttps://momjian.us/pgsql_docs/release-16.html\n> \n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n> \n\nThanks!\n\n\"\nThis adds the function pg_log_standby_snapshot(). TEXT?:\n\"\n\nMy proposal:\n\nThis adds the function pg_log_standby_snapshot() to log details of the current snapshot\nto WAL. If the primary is idle, the slot creation on a standby can take a while.\nThis function can be used on the primary to speed up the logical slot creation on\nthe standby.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 May 2023 09:49:18 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, May 19, 2023 at 09:49:18AM +0200, Drouvot, Bertrand wrote:\n> Thanks!\n> \n> \"\n> This adds the function pg_log_standby_snapshot(). TEXT?:\n> \"\n> \n> My proposal:\n> \n> This adds the function pg_log_standby_snapshot() to log details of the current snapshot\n> to WAL. If the primary is idle, the slot creation on a standby can take a while.\n> This function can be used on the primary to speed up the logical slot creation on\n> the standby.\n\nYes, I got this concept from the commit message, but I am unclear on\nwhat is actually happening so I can clearly explain it. Slot creation\non the standby needs a snapshot, and that is only created when there is\nactivity, or happens periodically, and this forces it to happen, or\nsomething? And what snapshot is this? 
The current session's?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 19 May 2023 08:29:06 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 18, 2023 at 04:49:47PM -0400, Bruce Momjian wrote:\n> I have completed the first draft of the PG 16 release notes. You can\n> see the output here:\n> \n> \thttps://momjian.us/pgsql_docs/release-16.html\n> \n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n\nThanks!\n\n> Allow GRANT to give vacuum and analyze permission to users beyond the\n> table owner or superusers (Nathan Bossart)\n\nThis one was effectively reverted in favor of the MAINTAIN privilege.\n\n> Create a predefined role with permission to perform maintenance\n> operations (Nathan Bossart)\n\nIMO this should also mention the grantable MAINTAIN privilege.\nAlternatively, the item above about granting vacuum/analyze privileges\ncould be adjusted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 May 2023 07:31:40 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "The intro in \"E.1.3. Changes\" says \"... between PostgreSQL 15 and the\nprevious major release\".\n\nThat should be \"... between PostgreSQL __16__ ...\" right?\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/\n", "msg_date": "Fri, 19 May 2023 11:07:29 -0400", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, May 19, 2023 at 07:31:40AM -0700, Nathan Bossart wrote:\n> On Thu, May 18, 2023 at 04:49:47PM -0400, Bruce Momjian wrote:\n> > I have completed the first draft of the PG 16 release notes. You can\n> > see the output here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-16.html\n> > \n> > I will adjust it to the feedback I receive; that URL will quickly show\n> > all updates.\n> \n> Thanks!\n> \n> > Allow GRANT to give vacuum and analyze permission to users beyond the\n> > table owner or superusers (Nathan Bossart)\n> \n> This one was effectively reverted in favor of the MAINTAIN privilege.\n\nOkay, removed.\n\n> > Create a predefined role with permission to perform maintenance\n> > operations (Nathan Bossart)\n> \n> IMO this should also mention the grantable MAINTAIN privilege.\n> Alternatively, the item above about granting vacuum/analyze privileges\n> could be adjusted.\n\nVery good point --- here is the new text:\n\n\t<!--\n\tAuthor: Jeff Davis <jdavis@postgresql.org>\n\t2022-12-13 [60684dd83] Add grantable MAINTAIN privilege and pg_maintain role.\n\tAuthor: Andrew Dunstan <andrew@dunslane.net>\n\t2022-11-28 [4441fc704] Provide non-superuser predefined roles for vacuum and an\n\tAuthor: Jeff Davis <jdavis@postgresql.org>\n\t2023-01-14 [ff9618e82] Fix MAINTAIN privileges for toast tables and partitions.\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tCreate a predefined role and grantable privilege with permission to perform maintenance operations (Nathan Bossart)\n\t</para>\n\t\n\t<para>\n\tThe predefined role is called pg_maintain.\n\t</para>\n\t</listitem>\n\nI will commit this change now.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", 
"msg_date": "Fri, 19 May 2023 12:27:35 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, May 19, 2023 at 11:07:29AM -0400, Sehrope Sarkuni wrote:\n> The intro in \"E.1.3. Changes\" says \"... between PostgreSQL 15 and the previous\n> major release\".\n> \n> That should be \"... between PostgreSQL __16__ ...\" right?\n\nOh, I thought I had changed all those --- fixed now, thanks!\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 19 May 2023 12:30:13 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, May 19, 2023 at 4:49 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> I have completed the first draft of the PG 16 release notes. You can\n> see the output here:\n>\n> https://momjian.us/pgsql_docs/release-16.html\n>\n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n>\n> I learned a few things creating it this time:\n>\n> * I can get confused over C function names and SQL function names in\n> commit messages.\n>\n> * The sections and ordering of the entries can greatly clarify the\n> items.\n>\n> * The feature count is slightly higher than recent releases:\n>\n> release-10: 189\n> release-11: 170\n> release-12: 180\n> release-13: 178\n> release-14: 220\n> release-15: 184\n> --> release-16: 200\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Only you can decide what is important to you.\n>\n>\n>\n\nAdd function pg_dissect_walfile_name() to report the segment and timeline\n> values of WAL file names (Bharath Rupireddy)\n\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=13e0d7a603852b8b05c03b45228daabffa0cced2\nthe function rename to 
pg_split_walfile_name.\n\nseems didn't mention pg_input_is_valid,pg_input_error_info?\nhttps://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-VALIDITY-TABLE", "msg_date": "Sat, 20 May 2023 00:59:35 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "The v16 release notes are problematic in a PDF docs build:\n\n[WARN] FOUserAgent - Glyph \"?\" (0x142, lslash) not available in font \"Times-Roman\".\n\nThis is evidently from\n\nAdd functions to add, subtract, and generate timestamptz values in a specified time zone (Przemysław Sztoch, Gurjeet Singh)\n\nChange date_trunc(unit, timestamptz, time_zone) to be an immutable function (Przemysław Sztoch)\n\nsince \"ł\" doesn't exist in ISO8859-1. I'd suggest dumbing these\ndown to plain \"l\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 May 2023 19:05:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" },
{ "msg_contents": "Sorry for changing the subject line.....\n\nthese two commits seems not mentioned.\nFix ts_headline() edge cases for empty query and empty search text.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=029dea882a7aa34f46732473eed7c917505e6481\n\nSimplify the implementations of the to_reg* functions.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3ea7329c9a79ade27b5d3742d1a41ce6d0d9aca8", "msg_date": "Sat, 20 May 2023 10:59:20 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" },
{ "msg_contents": "On Fri, 19 May 2023 at 22:59, jian he <jian.universality@gmail.com> wrote:\n\n>\n> Sorry for changing the subject line.....\n>\n> these two commits seems not mentioned.\n>\n\nOn a similar topic, should every committed item from the commitfest be\nmentioned, or only ones that are significant enough?\n\nI’m wondering because I had a role in this very small item, yet I don’t see\nit listed in the psql section:\n\nhttps://commitfest.postgresql.org/42/4133/\n\nIt’s OK if we don’t mention every single change, I just want to make sure I\nunderstand.", "msg_date": "Fri, 19 May 2023 23:08:18 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" },
{ "msg_contents": "Hi,\n\nOn 5/19/23 2:29 PM, Bruce Momjian wrote:\n> On Fri, May 19, 2023 at 09:49:18AM +0200, Drouvot, Bertrand wrote:\n>> Thanks!\n>>\n>> \"\n>> This adds the function pg_log_standby_snapshot(). TEXT?:\n>> \"\n>>\n>> My proposal:\n>>\n>> This adds the function pg_log_standby_snapshot() to log details of the current snapshot\n>> to WAL. If the primary is idle, the slot creation on a standby can take a while.\n>> This function can be used on the primary to speed up the logical slot creation on\n>> the standby.\n> \n> Yes, I got this concept from the commit message, but I am unclear on\n> what is actually happening so I can clearly explain it. Slot creation\n> on the standby needs a snapshot, and that is only created when there is\n> activity, or happens periodically, and this forces it to happen, or\n> something? And what snapshot is this? 
The current session's?\n> \n\nIt's the snapshot of running transactions (aka the xl_running_xacts WAL record) that is used during the\nlogical slot creation to determine if the logical decoding find a consistent state to start with.\n\nOn a primary this WAL record is being emitted during the logical slot creation, but on a standby\nwe can't write WAL records (so we are waiting for the primary to emit it).\n\nOutside of logical slot creation, this WAL record is also emitted during checkpoint or periodically\nby the bgwriter.\n\nWhat about?\n\nThis adds the function pg_log_standby_snapshot() to emit the WAL record that contains the list\nof running transactions.\n\nIf the primary is idle, the logical slot creation on a standby can take a while (waiting for this WAL record\nto be replayed to determine if the logical decoding find a consistent state to start with).\n\nIn that case, this new function can be used on the primary to speed up the logical slot\ncreation on the standby.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 20 May 2023 10:37:58 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, May 19, 2023 at 07:05:12PM -0400, Tom Lane wrote:\n> The v16 release notes are problematic in a PDF docs build:\n> \n> [WARN] FOUserAgent - Glyph \"?\" (0x142, lslash) not available in font \"Times-Roman\".\n> \n> This is evidently from\n> \n> Add functions to add, subtract, and generate timestamptz values in a specified time zone (Przemysław Sztoch, Gurjeet Singh)\n> \n> Change date_trunc(unit, timestamptz, time_zone) to be an immutable function (Przemysław Sztoch)\n> \n> since \"ł\" doesn't exist in ISO8859-1. I'd suggest dumbing these\n> down to plain \"l\".\n\nDone. 
I know we used to be limited to Latin-1 but when my build of HTML\nworked, I thought that had changed. :-(\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 20 May 2023 18:01:33 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Fri, May 19, 2023 at 07:05:12PM -0400, Tom Lane wrote:\n>> ... \"ł\" doesn't exist in ISO8859-1. I'd suggest dumbing these\n>> down to plain \"l\".\n\n> Done. I know we used to be limited to Latin-1 but when my build of HTML\n> worked, I thought that had changed. :-(\n\nYeah, I think the HTML toolchain is better than it used to be on\nthis score. But PDF is still limited.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 May 2023 18:07:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sat, May 20, 2023 at 06:07:08PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Fri, May 19, 2023 at 07:05:12PM -0400, Tom Lane wrote:\n> >> ... \"ł\" doesn't exist in ISO8859-1. I'd suggest dumbing these\n> >> down to plain \"l\".\n> \n> > Done. I know we used to be limited to Latin-1 but when my build of HTML\n> > worked, I thought that had changed. :-(\n> \n> Yeah, I think the HTML toolchain is better than it used to be on\n> this score. But PDF is still limited.\n\nAh, makes sense. 
I will need to test the PDF build next time.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 20 May 2023 18:48:57 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sat, May 20, 2023 at 12:59:35AM +0800, jian he wrote:\n> Add function pg_dissect_walfile_name() to report the segment and timeline\n> values of WAL file names (Bharath Rupireddy)\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=\n> 13e0d7a603852b8b05c03b45228daabffa0cced2\n> the function rename to pg_split_walfile_name.\n\nFixed. I copied the commit that did the rename, but forgot to actually\nupdate the release note text to match.\n\n> seems didn't mention pg_input_is_valid,pg_input_error_info? \n> https://www.postgresql.org/docs/devel/functions-info.html#\n> FUNCTIONS-INFO-VALIDITY-TABLE\n\nGood point. I incorrectly interpreted the commit text as part of our\ntest infrastructure and not the addition of two SQL functions:\n\n Add test scaffolding for soft error reporting from input functions.\n\n pg_input_is_valid() returns boolean, while pg_input_error_message()\n returns the primary error message if the input is bad, or NULL\n if the input is OK. 
The main reason for having two functions is\n so that we can test both the details-wanted and the no-details-wanted\n code paths.\n\nI have added this release note item:\n\n\t<!--\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2022-12-09 [1939d2628] Add test scaffolding for soft error reporting from input\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tAdd functions pg_input_is_valid() and pg_input_error_message() to check for type conversion errors (Tom Lane)\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 20 May 2023 19:22:24 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sat, May 20, 2023 at 10:59:20AM +0800, jian he wrote:\n> \n> Sorry for changing the subject line..... \n> \n> these two commits seems not mentioned.\n> Fix ts_headline() edge cases for empty query and empty search text.\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=\n> 029dea882a7aa34f46732473eed7c917505e6481\n\nI usually don't cover bug fixes for rare cases that used to generate\nerrors. However, the bigger issue is that this commit did not appear in\nmy output of git_changelog because it was backpatched, as indicated in\nthe commit text.\n\n> Simplify the implementations of the to_reg* functions.\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=\n\nThe commit for this is:\n\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2022-12-27 [3ea7329c9] Simplify the implementations of the to_reg*\n\tfunctions.\n\t\n\t Simplify the implementations of the to_reg* functions.\n\t\n\t Given the soft-input-error feature, we can reduce these functions\n\t to be just thin wrappers around a soft-error call of the\n\t corresponding datatype input function. 
This means less code and\n\t more certainty that the to_reg* functions match the normal input\n\t behavior.\n\n-->\t Notably, it also means that they will accept numeric OID input,\n-->\t which they didn't before. It's not clear to me if that omission\n\t had more than laziness behind it, but it doesn't seem like\n\t something we need to work hard to preserve.\n\t\n\t Discussion: https://postgr.es/m/3910031.1672095600@sss.pgh.pa.us\n\nThe change is that to_reg* functions can now accept OIDs, which I didn't\nnotice when I read the commit message. I have added this release note\nitem:\n\n\t<!--\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2022-12-27 [3ea7329c9] Simplify the implementations of the to_reg* functions.\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tAllow to_reg* functions to accept OIDs parameters (Tom Lane)\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 20 May 2023 20:32:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, May 19, 2023 at 11:08:18PM -0400, Isaac Morland wrote:\n> On Fri, 19 May 2023 at 22:59, jian he <jian.universality@gmail.com> wrote:\n> \n> \n> Sorry for changing the subject line..... \n> \n> these two commits seems not mentioned.\n> \n> \n> On a similar topic, should every committed item from the commitfest be\n> mentioned, or only ones that are significant enough?\n> \n> I’m wondering because I had a role in this very small item, yet I don’t see it\n> listed in the psql section:\n> \n> https://commitfest.postgresql.org/42/4133/\n> \n> It’s OK if we don’t mention every single change, I just want to make sure I\n> understand.\n\nI have never considered the presence of an item in the commitfest as an\nindicator of importance to be in the release notes. 
The major release\nnotes, for me, is a balance of listing the most visible changes without\ngoing into unmanageable detail.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 20 May 2023 20:35:39 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I have added this release note item:\n\n> \tAdd functions pg_input_is_valid() and pg_input_error_message() to check for type conversion errors (Tom Lane)\n\npg_input_error_message got renamed to pg_input_error_info later,\ncf b8da37b3a (which maybe should be included in the comment\nfor this entry).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 May 2023 20:51:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sat, May 20, 2023 at 10:37:58AM +0200, Drouvot, Bertrand wrote:\n> It's the snapshot of running transactions (aka the xl_running_xacts WAL record) that is used during the\n> logical slot creation to determine if the logical decoding find a consistent state to start with.\n> \n> On a primary this WAL record is being emitted during the logical slot creation, but on a standby\n> we can't write WAL records (so we are waiting for the primary to emit it).\n> \n> Outside of logical slot creation, this WAL record is also emitted during checkpoint or periodically\n> by the bgwriter.\n> \n> What about?\n> \n> This adds the function pg_log_standby_snapshot() to emit the WAL record that contains the list\n> of running transactions.\n> \n> If the primary is idle, the logical slot creation on a standby can take a while (waiting for this WAL record\n> to be replayed to determine if the logical decoding find a consistent state to start with).\n> \n> In that case, 
this new function can be used on the primary to speed up the logical slot\n> creation on the standby.\n\nOkay, this helps. I split the entry into two with this text:\n\n\t<!--\n\tAuthor: Andres Freund <andres@anarazel.de>\n\t2023-04-08 [0fdab27ad] Allow logical decoding on standbys\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tAllow logical decoding on standbys (Bertrand Drouvot, Andres Freund, Amit Khandekar)\n\t</para>\n\t</listitem>\n\t\n\t<!--\n\tAuthor: Andres Freund <andres@anarazel.de>\n\t2023-04-08 [0fdab27ad] Allow logical decoding on standbys\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tAdd function pg_log_standby_snapshot() to force creation of a WAL snapshot (Bertrand Drouvot)\n\t</para>\n\t\n\t<para>\n\tWAL snapshots are required for logical slot creation so this function speeds their creation on standbys.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 20 May 2023 20:53:52 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sat, May 20, 2023 at 08:51:21PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I have added this release note item:\n> \n> > \tAdd functions pg_input_is_valid() and pg_input_error_message() to check for type conversion errors (Tom Lane)\n> \n> pg_input_error_message got renamed to pg_input_error_info later,\n> cf b8da37b3a (which maybe should be included in the comment\n> for this entry).\n\nOh, I skipped the original entry so I skipped that one too. 
I have\nadjusted the release note item and added the commit to the comment:\n\n\t<!--\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\t2022-12-09 [1939d2628] Add test scaffolding for soft error reporting from input\n\tAuthor: Michael Paquier <michael@paquier.xyz>\n\t2023-02-28 [b8da37b3a] Rework pg_input_error_message(), now renamed pg_input_er\n\t-->\n\t\n\t<listitem>\n\t<para>\n-->\tAdd functions pg_input_is_valid() and pg_input_error_info() to check for type conversion errors (Tom Lane)\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 20 May 2023 20:57:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, 19 May 2023 at 5:49, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have completed the first draft of the PG 16 release notes. You can\n> see the output here:\n>\n> https://momjian.us/pgsql_docs/release-16.html\n>\n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n\nHi\n\nBelow are a few commits which are not referenced in the current iteration\nof the release notes, but which seem worthy of inclusion.\nApologies if they have been previously discussed, or I'm overlooking something\nobvious.\n\nd09dbeb9b Speedup hash index builds by skipping needless binary searches\n \"Testing has shown that this can improve hash index build speeds by 5-15%\n with a unique set of integer values.\"\n\ne09d7a126 Improve speed of hash index build.\n \"This seems to be good for overall\n speedup of 5%-9%, depending on the incoming data.\"\n\n594f8d377 Allow batching of inserts during cross-partition updates.\n seems reasonable to mention this as it's related to 97da48246, which\n is mentioned in the notes\n\n1349d2790 Improve performance of ORDER BY / DISTINCT aggregates\n This is the basis for da5800d5, which is 
mentioned in the notes, but AFAICS\n the latter is an implementation fix for the former (haven't looked\ninto either\n in detail though).\n\nThe following are probably not headline features, but are the kind of\nbehavioural\nchanges I'd expect to find in the release notes (when, at some point\nin the far and\ndistant future, trying to work out when they were introduced when considering\napplication compatibility etc.):\n\n13a185f54 Allow publications with schema and table of the same schema.\n2ceea5adb Accept \"+infinity\" in date and timestamp[tz] input.\nd540a02a7 Display the leader apply worker's PID for parallel apply workers.\n\n\nRegards\n\nIan Barwick\n\n\n", "msg_date": "Sun, 21 May 2023 21:30:01 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sun, May 21, 2023 at 09:30:01PM +0900, Ian Lawrence Barwick wrote:\n> On Fri, 19 May 2023 at 5:49, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have completed the first draft of the PG 16 release notes. 
You can\n> > see the output here:\n> >\n> > https://momjian.us/pgsql_docs/release-16.html\n> >\n> > I will adjust it to the feedback I receive; that URL will quickly show\n> > all updates.\n> \n> Hi\n> \n> Below are a few commits which are not referenced in the current iteration\n> of the release notes, but which seem worthy of inclusion.\n> Apologies if they have been previously discussed, or I'm overlooking something\n> obvious.\n> \n> d09dbeb9b Speedup hash index builds by skipping needless binary searches\n> \"Testing has shown that this can improve hash index build speeds by 5-15%\n> with a unique set of integer values.\"\n> \n> e09d7a126 Improve speed of hash index build.\n> \"This seems to be good for overall\n> speedup of 5%-9%, depending on the incoming data.\"\n\nFor the above two items, I mention items that would change user behavior\nlike new features or changes that are significant enough that they would\nchange user behavior. For example, if a new join method increases\nperformance by 5x, that could change user behavior. Based on the quoted\nnumbers above, I didn't think \"hash now faster\" would be appropriate to\nmention. 
Right?\n\n> 594f8d377 Allow batching of inserts during cross-partition updates.\n> seems reasonable to mention this as it's related to 97da48246, which\n> is mentioned in the notes\n\nI wasn't sure if that was significant, based on the above logic, but\n97da48246 has a user API to control it so I mentioned that one.\n\n> 1349d2790 Improve performance of ORDER BY / DISTINCT aggregates\n> This is the basis for da5800d5, which is mentioned in the notes, but AFAICS\n> the latter is an implementation fix for the former (haven't looked\n> into either\n> in detail though).\n\nI have added this commit to the existing entry, thanks.\n\n> The following are probably not headline features, but are the kind of\n> behavioural\n> changes I'd expect to find in the release notes (when, at some point\n> in the far and\n> distant future, trying to work out when they were introduced when considering\n> application compatibility etc.):\n> \n> 13a185f54 Allow publications with schema and table of the same schema.\n\nThis seemed like a rare enough case that I did not add it.\n\n> 2ceea5adb Accept \"+infinity\" in date and timestamp[tz] input.\n\nI have this but didn't add that commit, added.\n\n> d540a02a7 Display the leader apply worker's PID for parallel apply workers.\n\nParallelism of apply is a new feature and I don't normally mention\noutput _additions_ that are related to new features.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sun, 21 May 2023 11:52:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Sun, May 21, 2023 at 09:30:01PM +0900, Ian Lawrence Barwick wrote:\n>> 2ceea5adb Accept \"+infinity\" in date and timestamp[tz] input.\n\n> I have this but didn't add that commit, added.\n\nThat's really not related to 
the commit you added it to...\n\nI don't have time today to read through all the relnotes, but I went\nthrough those that have my name on them. Suggested wording modifications\nattached.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 21 May 2023 13:11:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Hi,\n\nThanks for the release notes!\n\n> <!--\n> Author: Andres Freund <andres@anarazel.de>\n> 2023-04-06 [00d1e02be] hio: Use ExtendBufferedRelBy() to extend tables more eff\n> Author: Andres Freund <andres@anarazel.de>\n> 2023-04-06 [26158b852] Use ExtendBufferedRelTo() in XLogReadBufferExtended()\n> -->\n> \n> <listitem>\n> <para>\n> Allow more efficient addition of multiple heap and index pages (Andres Freund)\n> </para>\n> </listitem>\n\nWhile the case of extending by multiple pages improved the most, even\nextending by a single page at a time got a good bit more scalable. Maybe just\n\"Improve efficiency of extending relations\"?\n\n\nI think:\n\n> <!--\n> Author: Andres Freund <andres@anarazel.de>\n> 2023-04-08 [0fdab27ad] Allow logical decoding on standbys\n> -->\n> \n> <listitem>\n> <para>\n> Allow logical decoding on standbys (Bertrand Drouvot, Andres Freund, Amit Khandekar)\n> </para>\n> </listitem>\n\npretty much includes:\n\n> <!--\n> Author: Andres Freund <andres@anarazel.de>\n> 2023-04-07 [be87200ef] Support invalidating replication slots due to horizon an\n> Author: Andres Freund <andres@anarazel.de>\n> 2023-04-08 [26669757b] Handle logical slot conflicts on standby\n> -->\n> \n> <listitem>\n> <para>\n> Allow invalidation of replication slots due to row removal, wal_level, and conflicts (Bertrand Drouvot, Andres Freund, Amit Khandekar)\n> </para>\n\nas it is a prerequisite.\n\nI'd probably also merge\n\n> <!--\n> Author: Andres Freund <andres@anarazel.de>\n> 2023-04-08 [0fdab27ad] Allow logical decoding on standbys\n> -->\n> \n> <listitem>\n> <para>\n> 
Add function pg_log_standby_snapshot() to force creation of a WAL snapshot (Bertrand Drouvot)\n> </para>\n> \n> <para>\n> WAL snapshots are required for logical slot creation so this function speeds their creation on standbys.\n> </para>\n> </listitem>\n\nAs there really isn't a use case outside of logical decoding on a standby.\n\n\n> <!--\n> Author: Andres Freund <andres@anarazel.de>\n> 2022-07-17 [089480c07] Default to hidden visibility for extension libraries whe\n> Author: Andres Freund <andres@anarazel.de>\n> 2022-07-17 [8cf64d35e] Mark all symbols exported from extension libraries PGDLL\n> -->\n> \n> <listitem>\n> <para>\n> Prevent extension libraries from export their symbols by default (Andres Freund, Tom Lane)\n> </para>\n> </listitem>\n\ns/export/exporting/?\n\n\nLooking through the release notes, I didn't see an entry for\n\ncommit c6e0fe1f2a08505544c410f613839664eea9eb21\nAuthor: David Rowley <drowley@postgresql.org>\nDate: 2022-08-29 17:15:00 +1200\n \n Improve performance of and reduce overheads of memory management\n\neven though I think that's one of the more impactful improvements. What was\nthe reason for leaving that out?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 May 2023 10:13:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On 5/21/23 1:13 PM, Andres Freund wrote:\r\n\r\n> \r\n> Looking through the release notes, I didn't see an entry for\r\n> \r\n> commit c6e0fe1f2a08505544c410f613839664eea9eb21\r\n> Author: David Rowley <drowley@postgresql.org>\r\n> Date: 2022-08-29 17:15:00 +1200\r\n> \r\n> Improve performance of and reduce overheads of memory management\r\n> \r\n> even though I think that's one of the more impactful improvements. 
What was\r\n> the reason for leaving that out?\r\n\r\nIIUC in[1], would this \"just speed up\" read-heavy workloads?\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/CAApHDvpjauCRXcgcaL6%2Be3eqecEHoeRm9D-kcbuvBitgPnW%3Dvw%40mail.gmail.com", "msg_date": "Sun, 21 May 2023 14:46:56 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On 5/18/23 4:49 PM, Bruce Momjian wrote:\r\n> I have completed the first draft of the PG 16 release notes.\r\n\r\nOne thing that we could attempt for this beta is to include a \r\nprospective list of \"major features + enhancements.\" Of course it can \r\nchange before the GA, but it'll give readers some idea of things to test.\r\n\r\nI'd propose the following (in no particular order):\r\n\r\n* General performance improvements for read-heavy workloads (looking for \r\nclarification for that in[1])\r\n\r\n* Parallel execution of queries that use `FULL` and `OUTER` joins\r\n\r\n* Logical replication allowed from read-only standbys\r\n\r\n* Logical replication subscribers can apply large transactions in parallel\r\n\r\n* Monitoring of I/O statistics through the `pg_stat_io` view\r\n\r\n* Addition of SQL/JSON constructors and identity functions\r\n\r\n* Optimizations to the vacuum freezing strategy\r\n\r\n* Support for regular expressions for matching usernames and databases \r\nnames in `pg_hba.conf`, and user names in `pg_ident.conf`\r\n\r\nThe above is tossing items at the wall, and I'm OK with any or all being \r\nmodified or replaced.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://postgr.es/m/CAApHDvpjauCRXcgcaL6+e3eqecEHoeRm9D-kcbuvBitgPnW=vw@mail.gmail.com", "msg_date": "Sun, 21 May 2023 15:04:58 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Hi,\n\nOn May 21, 2023 11:46:56 AM PDT, \"Jonathan S. Katz\" <jkatz@postgresql.org> wrote:\n>On 5/21/23 1:13 PM, Andres Freund wrote:\n>\n>> \n>> Looking through the release notes, I didn't see an entry for\n>> \n>> commit c6e0fe1f2a08505544c410f613839664eea9eb21\n>> Author: David Rowley <drowley@postgresql.org>\n>> Date: 2022-08-29 17:15:00 +1200\n>> Improve performance of and reduce overheads of memory management\n>> \n>> even though I think that's one of the more impactful improvements. What was\n>> the reason for leaving that out?\n>\n>IIUC in[1], would this \"just speed up\" read-heavy workloads?\n\nI don't think so. It can speed up write workloads as well. But more importantly it can noticeably reduce memory usage, including for things like the relcache.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Sun, 21 May 2023 12:24:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sun, May 21, 2023 at 01:11:05PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sun, May 21, 2023 at 09:30:01PM +0900, Ian Lawrence Barwick wrote:\n> >> 2ceea5adb Accept \"+infinity\" in date and timestamp[tz] input.\n> \n> > I have this but didn't add that commit, added.\n> \n> That's really not related to the commit you added it to...\n> \n> I don't have time today to read through all the relnotes, but I went\n> through those that have my name on them. 
Suggested wording modifications\n> attached.\n\nThese were all good, patch applied, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sun, 21 May 2023 15:57:57 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On 5/21/23 3:24 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On May 21, 2023 11:46:56 AM PDT, \"Jonathan S. Katz\" <jkatz@postgresql.org> wrote:\r\n>> On 5/21/23 1:13 PM, Andres Freund wrote:\r\n>>\r\n>>>\r\n>>> Looking through the release notes, I didn't see an entry for\r\n>>>\r\n>>> commit c6e0fe1f2a08505544c410f613839664eea9eb21\r\n>>> Author: David Rowley <drowley@postgresql.org>\r\n>>> Date: 2022-08-29 17:15:00 +1200\r\n>>> Improve performance of and reduce overheads of memory management\r\n>>>\r\n>>> even though I think that's one of the more impactful improvements. What was\r\n>>> the reason for leaving that out?\r\n>>\r\n>> IIUC in[1], would this \"just speed up\" read-heavy workloads?\r\n> \r\n> I don't think so. It can speed up write workloads as well. But more importantly it can noticeably reduce memory usage, including for things like the relcache.\r\n\r\nCool! I'll dive more into the thread later to learn more.\r\n\r\nJonathan", "msg_date": "Sun, 21 May 2023 19:51:50 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On 5/21/23 3:04 PM, Jonathan S. 
Katz wrote:\r\n> On 5/18/23 4:49 PM, Bruce Momjian wrote:\r\n>> I have completed the first draft of the PG 16 release notes.\r\n> \r\n> One thing that we could attempt for this beta is to include a \r\n> prospective list of \"major features + enhancements.\" Of course it can \r\n> change before the GA, but it'll give readers some idea of things to test.\r\n> \r\n> I'd propose the following (in no particular order):\r\n> \r\n> * General performance improvements for read-heavy workloads (looking for \r\n> clarification for that in[1])\r\n\r\nPer [1] this sounds like it should be:\r\n\r\n* Optimization to reduce overall memory usage, including general \r\nperformance improvements.\r\n\r\nWe can get more specific for the GA.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/5749E807-A5B7-4CC7-8282-84F6F0D4D1D0%40anarazel.de", "msg_date": "Sun, 21 May 2023 19:53:38 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "In E.1.2. Migration to Version 16, probably need mention, some\nprivilege command cannot restore.\nif new cluster bootstrap superuser name is not the same as old one. 
\"GRANT\nx TO y GRANTED BY no_bootstrap_superuser; \" will have error.\n\n---pg15 dump content.\nCREATE ROLE jian;\nALTER ROLE jian WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN\nREPLICATION BYPASSRLS;\nCREATE ROLE regress_priv_user1;\nALTER ROLE regress_priv_user1 WITH NOSUPERUSER INHERIT NOCREATEROLE\nNOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS;\nCREATE ROLE regress_priv_user2;\nALTER ROLE regress_priv_user2 WITH NOSUPERUSER INHERIT NOCREATEROLE\nNOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS;\nCREATE ROLE su1;\nALTER ROLE su1 WITH SUPERUSER INHERIT CREATEROLE NOCREATEDB LOGIN\nNOREPLICATION NOBYPASSRLS;\nGRANT regress_priv_user1 TO regress_priv_user2 GRANTED BY su1;\n\n-----------restore in pg16\n\\i /home/jian/Desktop/dumpall_schema.sql\n2023-05-22 08:46:00.170 CST [456584] ERROR: permission denied to grant\nprivileges as role \"su1\"\n2023-05-22 08:46:00.170 CST [456584] DETAIL: The grantor must have the\nADMIN option on role \"regress_priv_user1\".\n2023-05-22 08:46:00.170 CST [456584] STATEMENT: GRANT regress_priv_user1\nTO regress_priv_user2 GRANTED BY su1;\npsql:/home/jian/Desktop/dumpall_schema.sql:32: ERROR: permission denied to\ngrant privileges as role \"su1\"\nDETAIL: The grantor must have the ADMIN option on role\n\"regress_priv_user1\".\n\nIn E.1.2. Migration to Version 16, probably need mention, some privilege command cannot restore.if new cluster bootstrap superuser name is not the same as old one. 
\"GRANT x TO y GRANTED BY no_bootstrap_superuser; \" will have error.---pg15 dump content.CREATE ROLE jian;ALTER ROLE jian WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS;CREATE ROLE regress_priv_user1;ALTER ROLE regress_priv_user1 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS;CREATE ROLE regress_priv_user2;ALTER ROLE regress_priv_user2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS;CREATE ROLE su1;ALTER ROLE su1 WITH SUPERUSER INHERIT CREATEROLE NOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS;GRANT regress_priv_user1 TO regress_priv_user2 GRANTED BY su1;-----------restore in pg16\\i /home/jian/Desktop/dumpall_schema.sql2023-05-22 08:46:00.170 CST [456584] ERROR:  permission denied to grant privileges as role \"su1\"2023-05-22 08:46:00.170 CST [456584] DETAIL:  The grantor must have the ADMIN option on role \"regress_priv_user1\".2023-05-22 08:46:00.170 CST [456584] STATEMENT:  GRANT regress_priv_user1 TO regress_priv_user2 GRANTED BY su1;psql:/home/jian/Desktop/dumpall_schema.sql:32: ERROR:  permission denied to grant privileges as role \"su1\"DETAIL:  The grantor must have the ADMIN option on role \"regress_priv_user1\".", "msg_date": "Mon, 22 May 2023 09:03:11 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sun, May 21, 2023 at 10:13:41AM -0700, Andres Freund wrote:\n> Hi,\n> \n> Thanks for the release notes!\n> \n> > <!--\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2023-04-06 [00d1e02be] hio: Use ExtendBufferedRelBy() to extend tables more eff\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2023-04-06 [26158b852] Use ExtendBufferedRelTo() in XLogReadBufferExtended()\n> > -->\n> > \n> > <listitem>\n> > <para>\n> > Allow more efficient addition of multiple heap and index pages (Andres Freund)\n> > </para>\n> > </listitem>\n> \n> While the case of 
extending by multiple pages improved the most, even\n> extending by a single page at a time got a good bit more scalable. Maybe just\n> \"Improve efficiency of extending relations\"?\n\nOkay, I made this change:\n\n\t-Allow more efficient addition of multiple heap and index pages (Andres Freund)\n\t+Allow more efficient addition of heap and index pages (Andres Freund)\n\n> I think:\n> \n> > <!--\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2023-04-08 [0fdab27ad] Allow logical decoding on standbys\n> > -->\n> > \n> > <listitem>\n> > <para>\n> > Allow logical decoding on standbys (Bertrand Drouvot, Andres Freund, Amit Khandekar)\n> > </para>\n> > </listitem>\n> \n> pretty much includes:\n> \n> > <!--\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2023-04-07 [be87200ef] Support invalidating replication slots due to horizon an\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2023-04-08 [26669757b] Handle logical slot conflicts on standby\n> > -->\n> > \n> > <listitem>\n> > <para>\n> > Allow invalidation of replication slots due to row removal, wal_level, and conflicts (Bertrand Drouvot, Andres Freund, Amit Khandekar)\n> > </para>\n> \n> as it is a prerequisite.\n\nOkay, I merged the commit entries and the authors, and removed the item.\n\n> I'd probably also merge\n> \n> > <!--\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2023-04-08 [0fdab27ad] Allow logical decoding on standbys\n> > -->\n> > \n> > <listitem>\n> > <para>\n> > Add function pg_log_standby_snapshot() to force creation of a WAL snapshot (Bertrand Drouvot)\n> > </para>\n> > \n> > <para>\n> > WAL snapshots are required for logical slot creation so this function speeds their creation on standbys.\n> > </para>\n> > </listitem>\n> \n> As there really isn't a use case outside of logical decoding on a standby.\n\nOkay new merged item is:\n\n\t<!--\n\tAuthor: Andres Freund <andres@anarazel.de>\n\t2023-04-08 [0fdab27ad] Allow logical decoding on standbys\n\tAuthor: Andres Freund 
<andres@anarazel.de>\n\t2023-04-07 [be87200ef] Support invalidating replication slots due to horizon an\n\tAuthor: Andres Freund <andres@anarazel.de>\n\t2023-04-08 [26669757b] Handle logical slot conflicts on standby\n\tAuthor: Andres Freund <andres@anarazel.de>\n\t2023-04-08 [0fdab27ad] Allow logical decoding on standbys\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tAllow logical decoding on standbys (Bertrand Drouvot, Andres Freund, Amit Khandekar, Bertrand Drouvot)\n\t</para>\n\t</listitem>\n\t\n\t<listitem>\n\t<para>\n\tNew function pg_log_standby_snapshot() forces creation of WAL snapshots.\n\tSnapshots are required for logical slot creation so this function speeds their creation on standbys.\n\t</para>\n\t</listitem>\n\n> > <!--\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2022-07-17 [089480c07] Default to hidden visibility for extension libraries whe\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2022-07-17 [8cf64d35e] Mark all symbols exported from extension libraries PGDLL\n> > -->\n> > \n> > <listitem>\n> > <para>\n> > Prevent extension libraries from export their symbols by default (Andres Freund, Tom Lane)\n> > </para>\n> > </listitem>\n> \n> s/export/exporting/?\n\nSeems Tom's patch already fixed that.\n\n> Looking through the release notes, I didn't see an entry for\n> \n> commit c6e0fe1f2a08505544c410f613839664eea9eb21\n> Author: David Rowley <drowley@postgresql.org>\n> Date: 2022-08-29 17:15:00 +1200\n> \n> Improve performance of and reduce overheads of memory management\n> \n> even though I think that's one of the more impactful improvements. What was\n> the reason for leaving that out?\n\nIf you read my previous email:\n\n> For the above two items, I mention items that would change user \n> like new features or changes that are significant enough that they would\n> change user behavior. For example, if a new join method increases\n> performance by 5x, that could change user behavior. 
Based on the quoted\n> numbers above, I didn't think \"hash now faster\" would be appropriate to\n> mention. Right?\n\nI can see this item as a big win, but I don't know how to describe it in a way\nthat is helpful for the user to know.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sun, 21 May 2023 22:46:58 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Mon, May 22, 2023 at 09:03:11AM +0800, jian he wrote:\n> In E.1.2. Migration to Version 16, probably need mention, some\n> privilege command cannot restore.\n> if new cluster bootstrap superuser name is not the same as old one. \"GRANT x TO\n> y GRANTED BY no_bootstrap_superuser; \" will have error.\n> \n> ---pg15 dump content.\n> CREATE ROLE jian;\n> ALTER ROLE jian WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION\n> BYPASSRLS;\n> CREATE ROLE regress_priv_user1;\n> ALTER ROLE regress_priv_user1 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB\n> LOGIN NOREPLICATION NOBYPASSRLS;\n> CREATE ROLE regress_priv_user2;\n> ALTER ROLE regress_priv_user2 WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB\n> LOGIN NOREPLICATION NOBYPASSRLS;\n> CREATE ROLE su1;\n> ALTER ROLE su1 WITH SUPERUSER INHERIT CREATEROLE NOCREATEDB LOGIN NOREPLICATION\n> NOBYPASSRLS;\n> GRANT regress_priv_user1 TO regress_priv_user2 GRANTED BY su1;\n> \n> -----------restore in pg16\n> \\i /home/jian/Desktop/dumpall_schema.sql\n> 2023-05-22 08:46:00.170 CST [456584] ERROR:  permission denied to grant\n> privileges as role \"su1\"\n> 2023-05-22 08:46:00.170 CST [456584] DETAIL:  The grantor must have the ADMIN\n> option on role \"regress_priv_user1\".\n> 2023-05-22 08:46:00.170 CST [456584] STATEMENT:  GRANT regress_priv_user1 TO\n> regress_priv_user2 GRANTED BY su1;\n> psql:/home/jian/Desktop/dumpall_schema.sql:32: ERROR:  
permission denied to\n> grant privileges as role \"su1\"\n> DETAIL:  The grantor must have the ADMIN option on role \"regress_priv_user1\".\n\nAgreed, new text:\n\n\t<!--\n\tAuthor: Robert Haas <rhaas@postgresql.org>\n\t2022-07-26 [e530be2c5] Do not allow removal of superuser privileges from bootst\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tPrevent removal of superuser privileges for the bootstrap user (Robert Haas)\n\t</para>\n\t\n\t<para>\n-->\tRestoring such users could lead to errors.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sun, 21 May 2023 22:50:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sun, May 21, 2023 at 10:13:41AM -0700, Andres Freund wrote:\n> Hi,\n> \n> Thanks for the release notes!\n> \n> > <!--\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2023-04-06 [00d1e02be] hio: Use ExtendBufferedRelBy() to extend tables more eff\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2023-04-06 [26158b852] Use ExtendBufferedRelTo() in XLogReadBufferExtended()\n> > -->\n> > \n> > <listitem>\n> > <para>\n> > Allow more efficient addition of multiple heap and index pages (Andres Freund)\n> > </para>\n> > </listitem>\n> \n> While the case of extending by multiple pages improved the most, even\n> extending by a single page at a time got a good bit more scalable. Maybe just\n> \"Improve efficiency of extending relations\"?\n\nDo average users know heap and index files are both relations? 
That\nseems too abstract so I spelled out heap and index pages.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sun, 21 May 2023 22:52:09 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Hi,\n\nOn 2023-05-21 22:46:58 -0400, Bruce Momjian wrote:\n> > Looking through the release notes, I didn't see an entry for\n> >\n> > commit c6e0fe1f2a08505544c410f613839664eea9eb21\n> > Author: David Rowley <drowley@postgresql.org>\n> > Date: 2022-08-29 17:15:00 +1200\n> >\n> > Improve performance of and reduce overheads of memory management\n> >\n> > even though I think that's one of the more impactful improvements. What was\n> > the reason for leaving that out?\n>\n> If you read my previous email:\n>\n> > For the above two items, I mention items that would change user\n> > like new features or changes that are significant enough that they would\n> > change user behavior. For example, if a new join method increases\n> > performance by 5x, that could change user behavior. Based on the quoted\n> > numbers above, I didn't think \"hash now faster\" would be appropriate to\n> > mention. Right?\n\nI continue, as in past releases, to think that this is a bad policy. 
For\nexisting workloads performance improvements are commonly a more convincing\nreason to upgrade than new features - they allow users to scale the workload\nfurther, without needing application changes.\n\nOf course there are performance improvement that are too miniscule to be worth\nmentioning, but it's not a common case.\n\nAnd here it's not just performance, but also memory usage, including steady\nstate memory usage.\n\n\n> I can see this item as a big win, but I don't know how to describe it in a way\n> that is helpful for the user to know.\n\nIn doubt the subject of the commit would just work IMO.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 May 2023 10:59:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "I have added the major features to the release notes with the attached\npatch.\n\n---------------------------------------------------------------------------\n\nOn Sun, May 21, 2023 at 07:53:38PM -0400, Jonathan Katz wrote:\n> On 5/21/23 3:04 PM, Jonathan S. 
Katz wrote:\n> > On 5/18/23 4:49 PM, Bruce Momjian wrote:\n> > > I have completed the first draft of the PG 16 release notes.\n> > \n> > One thing that we could attempt for this beta is to include a\n> > prospective list of \"major features + enhancements.\" Of course it can\n> > change before the GA, but it'll give readers some idea of things to\n> > test.\n> > \n> > I'd propose the following (in no particular order):\n> > \n> > * General performance improvements for read-heavy workloads (looking for\n> > clarification for that in[1])\n> \n> Per [1] this sounds like it should be:\n> \n> * Optimization to reduce overall memory usage, including general performance\n> improvements.\n> \n> We can get more specific for the GA.\n> \n> Thanks,\n> \n> Jonathan\n> \n> [1] https://www.postgresql.org/message-id/5749E807-A5B7-4CC7-8282-84F6F0D4D1D0%40anarazel.de\n> \n\n\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Mon, 22 May 2023 14:01:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Hi,\n\nOn 2023-05-21 22:52:09 -0400, Bruce Momjian wrote:\n> On Sun, May 21, 2023 at 10:13:41AM -0700, Andres Freund wrote:\n> > Hi,\n> > \n> > Thanks for the release notes!\n> > \n> > > <!--\n> > > Author: Andres Freund <andres@anarazel.de>\n> > > 2023-04-06 [00d1e02be] hio: Use ExtendBufferedRelBy() to extend tables more eff\n> > > Author: Andres Freund <andres@anarazel.de>\n> > > 2023-04-06 [26158b852] Use ExtendBufferedRelTo() in XLogReadBufferExtended()\n> > > -->\n> > > \n> > > <listitem>\n> > > <para>\n> > > Allow more efficient addition of multiple heap and index pages (Andres Freund)\n> > > </para>\n> > > </listitem>\n> > \n> > While the case of extending by multiple pages improved the most, even\n> > extending by a single page at a time got a good bit more scalable. 
Maybe just\n> > \"Improve efficiency of extending relations\"?\n> \n> Do average users know heap and index files are both relations? That\n> seems too abstract so I spelled out heap and index pages.\n\nI don't know about average users - but I think users that read the release\nnotes do know.\n\nI am a bit on the fence about \"addition\" vs \"extending\" - for me it's not\nclear what \"adding pages\" really means, but I might be too deep into this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 May 2023 11:03:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Mon, May 22, 2023 at 10:59:36AM -0700, Andres Freund wrote:\n> On 2023-05-21 22:46:58 -0400, Bruce Momjian wrote:\n> > > For the above two items, I mention items that would change user\n> > > like new features or changes that are significant enough that they would\n> > > change user behavior. For example, if a new join method increases\n> > > performance by 5x, that could change user behavior. Based on the quoted\n> > > numbers above, I didn't think \"hash now faster\" would be appropriate to\n> > > mention. Right?\n> \n> I continue, as in past releases, to think that this is a bad policy. For\n> existing workloads performance improvements are commonly a more convincing\n> reason to upgrade than new features - they allow users to scale the workload\n> further, without needing application changes.\n> \n> Of course there are performance improvement that are too miniscule to be worth\n> mentioning, but it's not a common case.\n> \n> And here it's not just performance, but also memory usage, including steady\n> state memory usage.\n\nUnderstood. I continue to need help determining which items to include.\nCan you suggest some text? 
This?\n\n\tImprove efficiency of memory usage to allow for better scaling\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 22 May 2023 14:04:27 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Mon, May 22, 2023 at 11:03:31AM -0700, Andres Freund wrote:\n> On 2023-05-21 22:52:09 -0400, Bruce Momjian wrote:\n> > Do average users know heap and index files are both relations? That\n> > seems too abstract so I spelled out heap and index pages.\n> \n> I don't know about average users - but I think users that read the release\n> notes do know.\n> \n> I am a bit on the fence about \"addition\" vs \"extending\" - for me it's not\n> clear what \"adding pages\" really means, but I might be too deep into this.\n\nI am worried \"extending\" and \"extensions\" might be too close a wording\nsince we often mention extensions. I tried \"increase the file size\" but\nthat seemed wordy. Ideas?\n\nPersonally, while I consider heap and indexes to be both relations at\nthe SQL level, at the file system level I tend to think of them as\ndifferent, but perhaps that is just me.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 22 May 2023 14:07:01 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sun, May 21, 2023 at 3:05 PM Jonathan S. 
Katz <jkatz@postgresql.org> wrote:\n> * Support for regular expressions for matching usernames and databases\n> names in `pg_hba.conf`, and user names in `pg_ident.conf`\n\nI suggest that this is not a major feature.\n\nPerhaps the work that I did to improve CREATEROLE could be considered\nfor inclusion in the major features list. In previous releases,\nsomeone with CREATEROLE can hack the PG OS account. Now they can't. In\nprevious releases, someone with CREATEROLE can manage all\nnon-superuser roles, but now they can manage the roles they create (or\nones they are given explicit authority to manage). You can even\ncontrol whether or not such users automatically inherit the privileges\nof roles they create, as superusers inherit all privileges. There is\ncertainly some argument that this is not a sufficiently significant\nset of changes to justify a major feature mention, and even if it is,\nit's not clear to me exactly how it would be best worded. And yet I\nfeel like it's very likely that if we look back on this release in 3\nyears, those changes will have had a significant impact on many\nPostgreSQL deployments, above all in the cloud, whereas I think it\nlikely that the ability to have regular expressions in pg_hba.conf and\npg_ident.conf will have had very little effect by comparison.\n\nOf course, there is always a possibility that I'm over-estimating the\nimpact of my own work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 May 2023 16:18:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Hi Bruce,\n\n> Add support for SSE2 (Streaming SIMD Extensions 2) vector operations on\nx86-64 architectures (John Naylor)\n\n> Add support for Advanced SIMD (Single Instruction Multiple Data) (NEON)\ninstructions on ARM architectures (Nathan Bossart)\n\nNit: It's a bit odd that SIMD is spelled out in only the Arm entry, 
and\nperhaps expanding the abbreviations can be left out.\n\n> Allow arrays searches to use vector operations on x86-64 architectures\n(John Naylor)\n\nWe can leave out the architecture here (see below). Typo: \"array searches\"\n\nAll the above seem appropriate for the \"source code\" section, but the\nfollowing entries might be better in the \"performance\" section:\n\n> Allow ASCII string detection to use vector operations on x86-64\narchitectures (John Naylor)\n> Allow JSON string processing to use vector operations on x86-64\narchitectures (John Naylor)\n>\n> ARM?\n\nArm as well. For anything using 16-byte vectors the two architectures are\nequivalently supported. For all the applications, I would just say \"vector\"\nor \"SIMD\".\n\nAnd here maybe /processing/parsing/.\n\n> Allow xid/subxid searches to use vector operations on x86-64\narchitectures (Nathan Bossart)\n\nWhen moved to the performance section, it would be something like \"improve\nscalability when a large number of write transactions are in progress\".\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 23 May 2023 09:58:30 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, May 23, 2023 at 09:58:30AM +0700, John Naylor wrote:\n> Hi Bruce,\n> \n> > Add support for SSE2 (Streaming SIMD Extensions 2) vector operations on\nx86-64 architectures (John Naylor)\n> \n> > Add support for Advanced SIMD (Single Instruction Multiple Data) (NEON)\ninstructions on ARM architectures (Nathan Bossart)\n> \n> Nit: It's a bit odd that SIMD is spelled out in only the Arm entry, and perhaps\n> expanding the abbreviations can be left out.\n\nThe issue is that x86-64's SSE2 uses an embedded acronym:\n\n\tSSE2 (Streaming SIMD Extensions 2)\n\nso technically it is:\n\n\tSSE2 (Streaming (Single Instruction Multiple Data) Extensions 2\n\nbut embedded acronyms is something I wanted to avoid. ;-)\n\n> > Allow arrays searches to use vector operations on x86-64 architectures (John\nNaylor)\n> \n> We can leave out the architecture here (see below). Typo: \"array searches\"\n\nBoth fixed.\n\n> All the above seem appropriate for the \"source code\" section, but the following\nentries might be better in the \"performance\" section:\n> \n> > Allow ASCII string detection to use vector operations on x86-64 architectures\n> (John Naylor)\n> > Allow JSON string processing to use vector operations on x86-64 architectures\n> (John Naylor)\n> >\n> > ARM?\n> \n> Arm as well. 
For anything using 16-byte vectors the two architectures are\n> equivalently supported. For all the applications, I would just say \"vector\" or\n> \"SIMD\".\n\nOkay, I kept \"vector\". I don't think moving them into performance makes\nsense because there I don't think this would impact user behavior or\nchoice, and it can't be controlled.\n\n> And here maybe /processing/parsing/.\n\nDone.\n\n> > Allow xid/subxid searches to use vector operations on x86-64 architectures\n> (Nathan Bossart)\n> \n> When moved to the performance section, it would be something like \"improve\n> scalability when a large number of write transactions are in progress\".\n\nUh, again, see above, this does not impact user behavior or choices. I\nassume this is x86-64-only.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 23 May 2023 00:26:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, May 23, 2023 at 11:26 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, May 23, 2023 at 09:58:30AM +0700, John Naylor wrote:\n> > > Allow ASCII string detection to use vector operations on x86-64\narchitectures\n> > (John Naylor)\n> > > Allow JSON string processing to use vector operations on x86-64\narchitectures\n> > (John Naylor)\n> > >\n> > > ARM?\n> >\n> > Arm as well. For anything using 16-byte vectors the two architectures\nare\n> > equivalently supported. For all the applications, I would just say\n\"vector\" or\n> > \"SIMD\".\n>\n> Okay, I kept \"vector\". 
I don't think moving them into performance makes\n> sense because there I don't think this would impact user behavior or\n> choice, and it can't be controlled.\n\nWell, these two items were only committed because of measurable speed\nincreases, and have zero effect on how developers work with \"source code\",\nso that's a category error.\n\nWhether they rise to the significance of warranting inclusion in release\nnotes is debatable.\n\n> > > Allow xid/subxid searches to use vector operations on x86-64\narchitectures\n> > (Nathan Bossart)\n> >\n> > When moved to the performance section, it would be something like\n\"improve\n> > scalability when a large number of write transactions are in progress\".\n>\n> Uh, again, see above, this does not impact user behavior or choices.\n\nSo that turns a scalability improvement into \"source code\"?\n\n> I assume this is x86-64-only.\n\nAu contraire, I said \"For anything using 16-byte vectors the two\narchitectures are equivalently supported\". It's not clear from looking at\nindividual commit messages, that's why I piped in to help.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 23 May 2023 12:14:04 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Mon, 22 May 2023 at 07:05, Jonathan S. 
Katz <jkatz@postgresql.org> wrote:\n> * Parallel execution of queries that use `FULL` and `OUTER` joins\n\nI think this should be `RIGHT` joins rather than `OUTER` joins.\n\nLEFT joins have been parallelizable I think for a long time now.\n\nDavid\n\n\n", "msg_date": "Wed, 24 May 2023 08:37:45 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, 23 May 2023 at 06:04, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, May 22, 2023 at 10:59:36AM -0700, Andres Freund wrote:\n> > And here it's not just performance, but also memory usage, including steady\n> > state memory usage.\n>\n> Understood. I continue to need help determining which items to include.\n> Can you suggest some text? This?\n>\n> Improve efficiency of memory usage to allow for better scaling\n\nMaybe something like:\n\n* Reduce palloc() memory overhead for all memory allocations down to 8\nbytes on all platforms. (Andres Freund, David Rowley)\n\nThis allows more efficient use of memory and is especially useful in\nqueries which perform operations (such as sorting or hashing) that\nrequire more than work_mem.\n\nDavid\n\n\n", "msg_date": "Wed, 24 May 2023 08:48:56 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On 5/23/23 4:37 PM, David Rowley wrote:\r\n> On Mon, 22 May 2023 at 07:05, Jonathan S. 
Katz <jkatz@postgresql.org> wrote:\r\n>> * Parallel execution of queries that use `FULL` and `OUTER` joins\r\n> \r\n> I think this should be `RIGHT` joins rather than `OUTER` joins.\r\n> \r\n> LEFT joins have been parallelizable I think for a long time now.\r\n\r\nI had grabbed it from this line:\r\n\r\n Allow outer and full joins to be performed in parallel (Melanie \r\nPlageman, Thomas Munro)\r\n\r\nIf we want to be specific on RIGHT joins, I can update it in the release \r\nannouncement, but it may be too late for the release notes (at least for \r\nbeta 1).\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Tue, 23 May 2023 18:27:23 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On 5/22/23 4:18 PM, Robert Haas wrote:\r\n> On Sun, May 21, 2023 at 3:05 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>> * Support for regular expressions for matching usernames and databases\r\n>> names in `pg_hba.conf`, and user names in `pg_ident.conf`\r\n> \r\n> I suggest that this is not a major feature.\r\n> \r\n> Perhaps the work that I did to improve CREATEROLE could be considered\r\n> for inclusion in the major features list. In previous releases,\r\n> someone with CREATEROLE can hack the PG OS account. Now they can't. In\r\n> previous releases, someone with CREATEROLE can manage all\r\n> non-superuser roles, but now they can manage the roles they create (or\r\n> ones they are given explicit authority to manage). You can even\r\n> control whether or not such users automatically inherit the privileges\r\n> of roles they create, as superusers inherit all privileges. There is\r\n> certainly some argument that this is not a sufficiently significant\r\n> set of changes to justify a major feature mention, and even if it is,\r\n> it's not clear to me exactly how it would be best worded. 
And yet I\r\n> feel like it's very likely that if we look back on this release in 3\r\n> years, those changes will have had a significant impact on many\r\n> PostgreSQL deployments, above all in the cloud, whereas I think it\r\n> likely that the ability to have regular expressions in pg_hba.conf and\r\n> pg_ident.conf will have had very little effect by comparison.\r\n> \r\n> Of course, there is always a possibility that I'm over-estimating the\r\n> impact of my own work.\r\n\r\nIn general, I'm completely fine with people advocating for their own \r\nfeatures during this process, in case there's something that I missed.\r\n\r\nFor this case, while I think this work is very impactful, but I don't \r\nknow if I'd call it a major feature vs. modifying an unintended \r\nbehavior. Additionally, folks have likely put mitigations in place for \r\nthis through the years. I'm happy to be convinced otherwise.\r\n\r\nThe regular expressions in the files adds an ability that both we didn't \r\nhave before, and has been a request I've heard from users with very \r\nlarge deployments. For them, it'll help simplify a lot of their \r\nconfigurations/automations for setting this up en masse. Again, I'm \r\nhappy to be convinced otherwise.\r\n\r\nI wanted to use the beta release to allow for us to see 1/ how people \r\nultimately test these things and 2/ help better sift out what will be \r\ncalled a major feature. We could end up shuffling items in the list or \r\ncompletely rewriting it, so it's not set in stone.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Tue, 23 May 2023 18:38:37 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, May 24, 2023 at 08:37:45AM +1200, David Rowley wrote:\n> On Mon, 22 May 2023 at 07:05, Jonathan S. 
Katz <jkatz@postgresql.org> wrote:\n> > * Parallel execution of queries that use `FULL` and `OUTER` joins\n> \n> I think this should be `RIGHT` joins rather than `OUTER` joins.\n> \n> LEFT joins have been parallelizable I think for a long time now.\n\nWell, since we can swap left/right easily, why would we not have just\nhave swappted the tables and done the join in the past? I think there\nare two things missing in my description.\n\nFirst, I need to mention parallel _hash_ join. Second, I think this\nitem is saying that the _inner_ side of a parallel hash join can be an\nOUTER or FULL join. How about?\n\n\tAllow hash joins to be parallelized where the inner side is\n\tprocessed as an OUTER or FULL join (Melanie Plageman, Thomas Munro)\n\nIn this case, the inner side is the hashed side.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 23 May 2023 23:54:30 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, May 23, 2023 at 06:27:23PM -0400, Jonathan Katz wrote:\n> On 5/23/23 4:37 PM, David Rowley wrote:\n> > On Mon, 22 May 2023 at 07:05, Jonathan S. 
Katz <jkatz@postgresql.org> wrote:\n> > > * Parallel execution of queries that use `FULL` and `OUTER` joins\n> > \n> > I think this should be `RIGHT` joins rather than `OUTER` joins.\n> > \n> > LEFT joins have been parallelizable I think for a long time now.\n> \n> I had grabbed it from this line:\n> \n> Allow outer and full joins to be performed in parallel (Melanie Plageman,\n> Thomas Munro)\n> \n> If we want to be specific on RIGHT joins, I can update it in the release\n> announcement, but it may be too late for the release notes (at least for\n> beta 1).\n\nWe will have many more edits before final so I would not worry about\nadjusting the beta1 wording.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 23 May 2023 23:56:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, May 23, 2023 at 12:14:04PM +0700, John Naylor wrote:\n> On Tue, May 23, 2023 at 11:26 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > > Allow xid/subxid searches to use vector operations on x86-64\n> architectures\n> > > (Nathan Bossart)\n> > >\n> > > When moved to the performance section, it would be something like \"improve\n> > > scalability when a large number of write transactions are in progress\".\n> >\n> > Uh, again, see above, this does not impact user behavior or choices.  \n> \n> So that turns a scalability improvement into \"source code\"?\n> \n> > I assume this is x86-64-only.\n> \n> Au contraire, I said \"For anything using 16-byte vectors the two architectures\n> are equivalently supported\". 
It's not clear from looking at individual commit\n> messages, that's why I piped in to help.\n\nOkay, updated text:\n\n\tAllow xid/subxid searches to use vector operations (Nathan Bossart)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 24 May 2023 00:07:24 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, 24 May 2023 at 15:54, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, May 24, 2023 at 08:37:45AM +1200, David Rowley wrote:\n> > On Mon, 22 May 2023 at 07:05, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> > > * Parallel execution of queries that use `FULL` and `OUTER` joins\n> >\n> > I think this should be `RIGHT` joins rather than `OUTER` joins.\n> >\n> > LEFT joins have been parallelizable I think for a long time now.\n>\n> Well, since we can swap left/right easily, why would we not have just\n> have swappted the tables and done the join in the past? I think there\n> are two things missing in my description.\n>\n> First, I need to mention parallel _hash_ join. Second, I think this\n> item is saying that the _inner_ side of a parallel hash join can be an\n> OUTER or FULL join. How about?\n>\n> Allow hash joins to be parallelized where the inner side is\n> processed as an OUTER or FULL join (Melanie Plageman, Thomas Munro)\n>\n> In this case, the inner side is the hashed side.\n\nI think Jonathan's text is safe to swap OUTER to RIGHT as it mentions\n\"execution\". For the release notes, maybe the mention of it can be\nmoved away from \"E.1.3.1.1. Optimizer\" and put under \"E.1.3.1.2.\nGeneral Performance\" and ensure we mention that we're talking about\nthe executor?\n\nI'm thinking it might be confusing if we claim that this is something\nthat we switched on in the planner. 
It was a limitation with the\nexecutor which the planner was just onboard with not producing plans\nfor.\n\nDavid\n\n\n", "msg_date": "Wed, 24 May 2023 16:13:14 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, May 23, 2023 at 12:14:04PM +0700, John Naylor wrote:\n> On Tue, May 23, 2023 at 11:26 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Tue, May 23, 2023 at 09:58:30AM +0700, John Naylor wrote:\n> > > > Allow ASCII string detection to use vector operations on x86-64\n> architectures\n> > > (John Naylor)\n> > > > Allow JSON string processing to use vector operations on x86-64\n> architectures\n> > > (John Naylor)\n> > > >\n> > > > ARM?\n> > >\n> > > Arm as well. For anything using 16-byte vectors the two architectures are\n> > > equivalently supported. For all the applications, I would just say \"vector\"\n> or\n> > > \"SIMD\".\n> >\n> > Okay, I kept \"vector\".  I don't think moving them into performance makes\n> > sense because there I don't think this would impact user behavior or\n> > choice, and it can't be controlled.\n> \n> Well, these two items were only committed because of measurable speed\n> increases, and have zero effect on how developers work with \"source code\", so\n> that's a category error.\n> \n> Whether they rise to the significance of warranting inclusion in release notes\n> is debatable.\n\nOkay, let's dissect this. First, I am excited about these features\nbecause I think they show innovation, particularly for high scaling, so\nI want to highlight this.\n\nSecond, you might be correct that the section is wrong. I thought of\nCPU instructions as something tied to the compiler, so part of the build\nprocess or source code, but the point we should be make is that we have\nthese acceleration, not how it is implemented. 
We can move the entire\ngroup to the \"General Performance\" section, or we can split it out:\n\nKeep in source code:\n\n\tAdd support for SSE2 (Streaming SIMD Extensions 2) vector operations on\n\tx86-64 architectures (John Naylor)\n\t\n\tAdd support for Advanced SIMD (Single Instruction Multiple Data) (NEON)\n\tinstructions on ARM architectures (Nathan Bossart)\n\nmove to General Performance:\n\n\tAllow xid/subxid searches to use vector operations (Nathan Bossart)\n\n\tAllow ASCII string detection to use vector operations (John Naylor)\n\nand add these to data types:\n\n\tAllow JSON string parsing to use vector operations (John Naylor)\n\n\tAllow array searches to use vector operations (John Naylor)\t\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 24 May 2023 00:19:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, May 24, 2023 at 11:19 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> Second, you might be correct that the section is wrong. I thought of\n> CPU instructions as something tied to the compiler, so part of the build\n> process or source code, but the point we should be make is that we have\n> these acceleration, not how it is implemented. 
We can move the entire\n> group to the \"General Performance\" section, or we can split it out:\n\nSplitting out like that seems like a good idea to me. \n\n> Keep in source code:\n>\n>         Add support for SSE2 (Streaming SIMD Extensions 2) vector\noperations on\n>         x86-64 architectures (John Naylor)\n>\n>         Add support for Advanced SIMD (Single Instruction Multiple Data)\n(NEON)\n>         instructions on ARM architectures (Nathan Bossart)\n>\n> move to General Performance:\n>\n>         Allow xid/subxid searches to use vector operations (Nathan\nBossart)\n>\n>         Allow ASCII string detection to use vector operations (John\nNaylor)\n\n(The ASCII part is most relevant for COPY FROM, just in case that matters.)\n\n> and add these to data types:\n>\n>         Allow JSON string parsing to use vector operations (John Naylor)\n>\n>         Allow array searches to use vector operations (John Naylor)\n\nThe last one refers to new internal functions, so it could stay in source\ncode. (Either way, we don't want to imply that arrays of SQL types are\naccelerated this way, it's so far only for internal arrays.)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com
", "msg_date": "Wed, 24 May 2023 12:23:02 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, May 24, 2023 at 12:23:02PM +0700, John Naylor wrote:\n> \n> On Wed, May 24, 2023 at 11:19 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > Second, you might be correct that the section is wrong.  I thought of\n> > CPU instructions as something tied to the compiler, so part of the build\n> > process or source code, but the point we should be make is that we have\n> > these acceleration, not how it is implemented. 
I left the\nCPU-specific parts in Source Code, and moved the rest into a merged item\nin General Performance, but moved the JSON item to Data Types.\n\nPatch attached, and you can see the results at:\n\n\thttps://momjian.us/pgsql_docs/release-16.html\n\n> The last one refers to new internal functions, so it could stay in source code.\n> (Either way, we don't want to imply that arrays of SQL types are accelerated\n> this way, it's so far only for internal arrays.)\n\nGood point. I called them \"C arrays\" but it it into the General\nPerformance item.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Wed, 24 May 2023 09:58:00 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Op 5/24/23 om 15:58 schreef Bruce Momjian:\n> On Wed, May 24, 2023 at 12:23:02PM +0700, John Naylor wrote:\n>>\n>> On Wed, May 24, 2023 at 11:19 AM Bruce Momjian <bruce@momjian.us> wrote:\n\nTypos:\n\n'from standbys servers' should be\n'from standby servers'\n\n'reindexedb' should be\n'reindexdb'\n (2x: the next line mentions, erroneously, 'reindexedb --system')\n\n'created only created' should be\n'only created'\n (I think)\n\n'could could' should be\n'could'\n\n'are now require the role' should be\n'now require the role'\n\n'values is' should be\n'value is'\n\n'to marked' should be\n'to be marked'\n\n\nthanks,\nErik\n\n\n\n\n", "msg_date": "Wed, 24 May 2023 16:57:59 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, May 24, 2023 at 04:57:59PM +0200, Erik Rijkers wrote:\n> Op 5/24/23 om 15:58 schreef Bruce Momjian:\n> > On Wed, May 24, 2023 at 12:23:02PM +0700, John Naylor wrote:\n> > > \n> > > On Wed, May 24, 2023 at 11:19 AM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> Typos:\n> \n> 
'from standbys servers' should be\n> 'from standby servers'\n> \n> 'reindexedb' should be\n> 'reindexdb'\n> (2x: the next line mentions, erroneously, 'reindexedb --system')\n> \n> 'created only created' should be\n> 'only created'\n> (I think)\n> \n> 'could could' should be\n> 'could'\n> \n> 'are now require the role' should be\n> 'now require the role'\n> \n> 'values is' should be\n> 'value is'\n> \n> 'to marked' should be\n> 'to be marked'\n\nAll good, patch attached and applied. Updated docs are at:\n\n\thttps://momjian.us/pgsql_docs/release-16.html\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Wed, 24 May 2023 12:17:51 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On 5/24/23 12:13 AM, David Rowley wrote:\r\n> On Wed, 24 May 2023 at 15:54, Bruce Momjian <bruce@momjian.us> wrote:\r\n>>\r\n>> On Wed, May 24, 2023 at 08:37:45AM +1200, David Rowley wrote:\r\n>>> On Mon, 22 May 2023 at 07:05, Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>>>> * Parallel execution of queries that use `FULL` and `OUTER` joins\r\n>>>\r\n>>> I think this should be `RIGHT` joins rather than `OUTER` joins.\r\n>>>\r\n>>> LEFT joins have been parallelizable I think for a long time now.\r\n>>\r\n>> Well, since we can swap left/right easily, why would we not have just\r\n>> have swappted the tables and done the join in the past? I think there\r\n>> are two things missing in my description.\r\n>>\r\n>> First, I need to mention parallel _hash_ join. Second, I think this\r\n>> item is saying that the _inner_ side of a parallel hash join can be an\r\n>> OUTER or FULL join. 
How about?\r\n>>\r\n>> Allow hash joins to be parallelized where the inner side is\r\n>> processed as an OUTER or FULL join (Melanie Plageman, Thomas Munro)\r\n>>\r\n>> In this case, the inner side is the hashed side.\r\n> \r\n> I think Jonathan's text is safe to swap OUTER to RIGHT as it mentions\r\n> \"execution\".\r\n\r\nI made this swap in the release announcement. Thanks!\r\n\r\nJonathan", "msg_date": "Wed, 24 May 2023 12:58:07 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, May 24, 2023 at 08:48:56AM +1200, David Rowley wrote:\n> On Tue, 23 May 2023 at 06:04, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Mon, May 22, 2023 at 10:59:36AM -0700, Andres Freund wrote:\n> > > And here it's not just performance, but also memory usage, including steady\n> > > state memory usage.\n> >\n> > Understood. I continue to need help determining which items to include.\n> > Can you suggest some text? This?\n> >\n> > Improve efficiency of memory usage to allow for better scaling\n> \n> Maybe something like:\n> \n> * Reduce palloc() memory overhead for all memory allocations down to 8\n> bytes on all platforms. 
(Andres Freund, David Rowley)\n> \n> This allows more efficient use of memory and is especially useful in\n> queries which perform operations (such as sorting or hashing) that\n> require more than work_mem.\n\nWell, this would go in the source code section, but it seems too\ninternal and global to mention.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 24 May 2023 13:43:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, May 24, 2023 at 01:43:50PM -0400, Bruce Momjian wrote:\n> > * Reduce palloc() memory overhead for all memory allocations down to 8\n> > bytes on all platforms. (Andres Freund, David Rowley)\n> > \n> > This allows more efficient use of memory and is especially useful in\n> > queries which perform operations (such as sorting or hashing) that\n> > require more than work_mem.\n> \n> Well, this would go in the source code section, but it seems too\n> internal and global to mention.\n\nWhat was the previous memory allocation overhead?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 24 May 2023 13:45:00 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, May 24, 2023 at 8:58 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> Okay, items split into sections and several merged. 
I left the\n> CPU-specific parts in Source Code, and moved the rest into a merged item\n> in General Performance, but moved the JSON item to Data Types.\n\nIt looks like it got moved to Functions actually?\n\n> > The last one refers to new internal functions, so it could stay in\nsource code.\n> > (Either way, we don't want to imply that arrays of SQL types are\naccelerated\n> > this way, it's so far only for internal arrays.)\n>\n> Good point.  I called them \"C arrays\" but it it into the General\n> Performance item.\n\nLooks good to me, although...\n\n> Allow xid/subxid searches and ASCII string detection to use vector\noperations (Nathan Bossart)\n\nNathan wrote the former, I did the latter.\n\nThanks for working on this!\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com
", "msg_date": "Thu, 25 May 2023 08:31:29 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 25, 2023 at 08:31:29AM +0700, John Naylor wrote:\n> \n> On Wed, May 24, 2023 at 8:58 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > Okay, items split into sections and several merged.  I left the\n> > CPU-specific parts in Source Code, and moved the rest into a merged item\n> > in General Performance, but moved the JSON item to Data Types.\n> \n> It looks like it got moved to Functions actually?\n> \n> > > The last one refers to new internal functions, so it could stay in source\n> code.\n> > > (Either way, we don't want to imply that arrays of SQL types are\n> accelerated\n> > > this way, it's so far only for internal arrays.)\n> >\n> > Good point.  
I called them \"C arrays\" but it it into the General\n> > Performance item.\n> \n> Looks good to me, although...\n> \n> > Allow xid/subxid searches and ASCII string detection to use vector operations\n> (Nathan Bossart)\n> \n> Nathan wrote the former, I did the latter.\n> \n> Thanks for working on this!\n\nUgh, I have to remember to merge authors when I merge items --- fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 24 May 2023 22:02:49 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, 25 May 2023 at 05:45, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, May 24, 2023 at 01:43:50PM -0400, Bruce Momjian wrote:\n> > > * Reduce palloc() memory overhead for all memory allocations down to 8\n> > > bytes on all platforms. (Andres Freund, David Rowley)\n> > >\n> > > This allows more efficient use of memory and is especially useful in\n> > > queries which perform operations (such as sorting or hashing) that\n> > > require more than work_mem.\n> >\n> > Well, this would go in the source code section, but it seems too\n> > internal and global to mention.\n>\n> What was the previous memory allocation overhead?\n\nOn 64-bit builds, it was 16 bytes for AllocSet contexts, 24 bytes for\ngeneration contexts and 16 bytes for slab contexts.\n\nDavid\n\n\n", "msg_date": "Thu, 25 May 2023 17:57:25 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n\n> I have completed the first draft of the PG 16 release notes. 
You can\n> see the output here:\n>\n> \thttps://momjian.us/pgsql_docs/release-16.html\n>\n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n\nThe bit about auto_explain and query parameters says:\n\n> Allow auto_explain to log query parameters used in executing prepared\n> statements (Dagfinn Ilmari Mannsåker)\n>\n> This is controlled by auto_explain.log_parameter_max_length, and by\n> default query parameters will be logged with no length\n> restriction. SHOULD THIS BE MORE CLEARLY IDENTIFIED AS CONTROLLING THE\n> EXECUTION OF PREPARED STATEMENTS?\n\nThis is wrong, the logging applies to all query parameters, not just for\nprepared statements (and has nothing to do with controlling the\nexecution thereof). That was just the only way to test it when it was\nwritten, because psql's \\bind command didn't exist yet then.\n\nShould we perhaps add some tests for that, like the attached?\n\n- ilmari", "msg_date": "Thu, 25 May 2023 21:20:11 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, 2023-05-18 at 16:49 -0400, Bruce Momjian wrote:\n> I have completed the first draft of the PG 16 release notes.\n\nI found two typos.\n\nYours,\nLaurenz Albe", "msg_date": "Thu, 25 May 2023 23:51:24 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On 2023-May-25, Laurenz Albe wrote:\n\n> @@ -1335,7 +1335,7 @@ Author: Peter Eisentraut <peter@eisentraut.org>\n> \n> <listitem>\n> <para>\n> -Add Windows process the system collations (Jose Santamaria Flecha)\n> +Add Windows to process the system collations (Jose Santamaria Flecha)\n> ADD THIS?\n> </para>\n> </listitem>\n\nHmm, not sure this describes the change properly. Maybe something like\n\"On Windows, system locales are now imported automatically. 
Previously,\nonly ICU locales were imported automatically on Windows.\"\n\nMaybe the Windows improvements should be listed together in a separate\nsection.\n\nAlso, \"Juan José Santamaría Flecha\" is the spelling Juan José uses for\nhis name.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 26 May 2023 12:21:23 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 25, 2023 at 09:20:11PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> \n> > I have completed the first draft of the PG 16 release notes. You can\n> > see the output here:\n> >\n> > \thttps://momjian.us/pgsql_docs/release-16.html\n> >\n> > I will adjust it to the feedback I receive; that URL will quickly show\n> > all updates.\n> \n> The bit about auto_explain and query parameters says:\n> \n> > Allow auto_explain to log query parameters used in executing prepared\n> > statements (Dagfinn Ilmari Mannsåker)\n> >\n> > This is controlled by auto_explain.log_parameter_max_length, and by\n> > default query parameters will be logged with no length\n> > restriction. SHOULD THIS BE MORE CLEARLY IDENTIFIED AS CONTROLLING THE\n> > EXECUTION OF PREPARED STATEMENTS?\n> \n> This is wrong, the logging applies to all query parameters, not just for\n> prepared statements (and has nothing to do with controlling the\n> execution thereof). That was just the only way to test it when it was\n> written, because psql's \\bind command didn't exist yet then.\n\nI see your point. How is this?\n\n\tAllow auto_explain to log query parameters used by parameterized\n\tstatements (Dagfinn Ilmari Mannsåker)\n\n\tThis affects queries using server-side PREPARE/EXECUTE\n\tand client-side parse/bind. 
Logging is controlled by\n\tauto_explain.log_parameter_max_length;\tby default query\n\tparameters will be logged with no length restriction.\n\n\n> Should we perhaps add some tests for that, like the attached?\n\nSorry, I don't know the answer.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 27 May 2023 21:34:37 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 25, 2023 at 11:51:24PM +0200, Laurenz Albe wrote:\n> On Thu, 2023-05-18 at 16:49 -0400, Bruce Momjian wrote:\n> > I have completed the first draft of the PG 16 release notes.\n> \n> I found two typos.\n\n> diff --git a/doc/src/sgml/release-16.sgml b/doc/src/sgml/release-16.sgml\n> index faecae7c42..7dad0b8550 100644\n> --- a/doc/src/sgml/release-16.sgml\n> +++ b/doc/src/sgml/release-16.sgml\n> @@ -1294,7 +1294,7 @@ Determine the ICU default locale from the environment (Jeff Davis)\n> </para>\n> \n> <para>\n> -However, ICU doesn't support the C local so UTF-8 is used in such cases. Previously the default was always UTF-8.\n> +However, ICU doesn't support the C locale so UTF-8 is used in such cases. 
Previously the default was always UTF-8.\n> </para>\n> </listitem>\n\nI have made this change.\n\n> @@ -1335,7 +1335,7 @@ Author: Peter Eisentraut <peter@eisentraut.org>\n> \n> <listitem>\n> <para>\n> -Add Windows process the system collations (Jose Santamaria Flecha)\n> +Add Windows to process the system collations (Jose Santamaria Flecha)\n> ADD THIS?\n> </para>\n> </listitem>\n\nI will deal with this item in the email from Álvaro Herrera.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 27 May 2023 22:21:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, May 26, 2023 at 12:21:23PM +0200, Álvaro Herrera wrote:\n> On 2023-May-25, Laurenz Albe wrote:\n> \n> > @@ -1335,7 +1335,7 @@ Author: Peter Eisentraut <peter@eisentraut.org>\n> > \n> > <listitem>\n> > <para>\n> > -Add Windows process the system collations (Jose Santamaria Flecha)\n> > +Add Windows to process the system collations (Jose Santamaria Flecha)\n> > ADD THIS?\n> > </para>\n> > </listitem>\n> \n> Hmm, not sure this describes the change properly. Maybe something like\n> \"On Windows, system locales are now imported automatically. 
Previously,\n> only ICU locales were imported automatically on Windows.\"\n> \n> Maybe the Windows improvements should be listed together in a separate\n> section.\n> \n> Also, \"Juan José Santamaría Flecha\" is the spelling Juan José uses for\n> his name.\n\nOkay, I reword this and fixed Juan's name, attached, and applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Sat, 27 May 2023 23:03:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sun, May 21, 2023 at 10:47 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> Okay new merged item is:\n>\n> <!--\n> Author: Andres Freund <andres@anarazel.de>\n> 2023-04-08 [0fdab27ad] Allow logical decoding on standbys\n> Author: Andres Freund <andres@anarazel.de>\n> 2023-04-07 [be87200ef] Support invalidating replication slots due to horizon an\n> Author: Andres Freund <andres@anarazel.de>\n> 2023-04-08 [26669757b] Handle logical slot conflicts on standby\n> Author: Andres Freund <andres@anarazel.de>\n> 2023-04-08 [0fdab27ad] Allow logical decoding on standbys\n> -->\n>\n> <listitem>\n> <para>\n> Allow logical decoding on standbys (Bertrand Drouvot, Andres Freund, Amit Khandekar, Bertrand Drouvot)\n> </para>\n> </listitem>\n>\n> <listitem>\n> <para>\n> New function pg_log_standby_snapshot() forces creation of WAL snapshots.\n> Snapshots are required for logical slot creation so this function speeds their creation on standbys.\n> </para>\n> </listitem>\n>\n\nBertrand Drouvot is mentioned two times in this item and commit\n0fdab27ad is listed two times. 
Is it intentional?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 29 May 2023 10:08:41 -0400", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Mon, May 29, 2023 at 10:08:41AM -0400, Masahiko Sawada wrote:\n> On Sun, May 21, 2023 at 10:47 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > Okay new merged item is:\n> >\n> > <!--\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2023-04-08 [0fdab27ad] Allow logical decoding on standbys\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2023-04-07 [be87200ef] Support invalidating replication slots due to horizon an\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2023-04-08 [26669757b] Handle logical slot conflicts on standby\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2023-04-08 [0fdab27ad] Allow logical decoding on standbys\n> > -->\n> >\n> > <listitem>\n> > <para>\n> > Allow logical decoding on standbys (Bertrand Drouvot, Andres Freund, Amit Khandekar, Bertrand Drouvot)\n> > </para>\n> > </listitem>\n> >\n> > <listitem>\n> > <para>\n> > New function pg_log_standby_snapshot() forces creation of WAL snapshots.\n> > Snapshots are required for logical slot creation so this function speeds their creation on standbys.\n> > </para>\n> > </listitem>\n> >\n> \n> Bertrand Drouvot is mentioned two times in this item and commit\n> 0fdab27ad is listed two times. 
Is it intentional?\n\nThanks, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 29 May 2023 13:49:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, May 23, 2023 at 11:54:30PM -0400, Bruce Momjian wrote:\n> On Wed, May 24, 2023 at 08:37:45AM +1200, David Rowley wrote:\n> > On Mon, 22 May 2023 at 07:05, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> > > * Parallel execution of queries that use `FULL` and `OUTER` joins\n> > \n> > I think this should be `RIGHT` joins rather than `OUTER` joins.\n> > \n> > LEFT joins have been parallelizable I think for a long time now.\n> \n> Well, since we can swap left/right easily, why would we not have just\n> swapped the tables and done the join in the past? I think there\n> are two things missing in my description.\n> \n> First, I need to mention parallel _hash_ join. Second, I think this\n> item is saying that the _inner_ side of a parallel hash join can be an\n> OUTER or FULL join. 
How about?\n> \n> \tAllow hash joins to be parallelized where the inner side is\n> \tprocessed as an OUTER or FULL join (Melanie Plageman, Thomas Munro)\n> \n> In this case, the inner side is the hashed side.\n\nI went with this text:\n\n\tAllow parallelization of FULL and internal right OUTER hash joins\n\t(Melanie Plageman, Thomas Munro)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 29 May 2023 14:38:05 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, May 24, 2023 at 04:13:14PM +1200, David Rowley wrote:\n> On Wed, 24 May 2023 at 15:54, Bruce Momjian <bruce@momjian.us> wrote:\n> > First, I need to mention parallel _hash_ join. Second, I think this\n> > item is saying that the _inner_ side of a parallel hash join can be an\n> > OUTER or FULL join. How about?\n> >\n> > Allow hash joins to be parallelized where the inner side is\n> > processed as an OUTER or FULL join (Melanie Plageman, Thomas Munro)\n> >\n> > In this case, the inner side is the hashed side.\n> \n> I think Jonathan's text is safe to swap OUTER to RIGHT as it mentions\n> \"execution\". For the release notes, maybe the mention of it can be\n> moved away from \"E.1.3.1.1. Optimizer\" and put under \"E.1.3.1.2.\n> General Performance\" and ensure we mention that we're talking about\n> the executor?\n> \n> I'm thinking it might be confusing if we claim that this is something\n> that we switched on in the planner. It was a limitation with the\n> executor which the planner was just onboard with not producing plans\n> for.\n\nWell, I try to keep plan changes in the optimizer section because that\nis where the decisions are made, and how people think of plans since\nEXPLAIN makes them visible. 
I agree it is an executor change but I\nthink that distinction will be more confusing than helpful.\n\nFrankly, almost all the optimizer items are really executor changes. \nMaybe the \"Optimizer\" title needs to be changed, but I do think it is\ngood to group plan changes together.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 29 May 2023 14:46:22 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sat, May 27, 2023 at 09:34:37PM -0400, Bruce Momjian wrote:\n> > > This is controlled by auto_explain.log_parameter_max_length, and by\n> > > default query parameters will be logged with no length\n> > > restriction. SHOULD THIS BE MORE CLEARLY IDENTIFIED AS CONTROLLING THE\n> > > EXECUTION OF PREPARED STATEMENTS?\n> > \n> > This is wrong, the logging applies to all query parameters, not just for\n> > prepared statements (and has nothing to do with controlling the\n> > execution thereof). That was just the only way to test it when it was\n> > written, because psql's \\bind command didn't exist yet then.\n> \n> I see your point. How is this?\n> \n> \tAllow auto_explain to log query parameters used by parameterized\n> \tstatements (Dagfinn Ilmari Mannsåker)\n> \n> \tThis affects queries using server-side PREPARE/EXECUTE\n> \tand client-side parse/bind. 
Logging is controlled by\n> \tauto_explain.log_parameter_max_length;\tby default query\n> \tparameters will be logged with no length restriction.\n\nDone, attached patch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Tue, 30 May 2023 06:03:19 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n\n> On Sat, May 27, 2023 at 09:34:37PM -0400, Bruce Momjian wrote:\n>> > > This is controlled by auto_explain.log_parameter_max_length, and by\n>> > > default query parameters will be logged with no length\n>> > > restriction. SHOULD THIS BE MORE CLEARLY IDENTIFIED AS CONTROLLING THE\n>> > > EXECUTION OF PREPARED STATEMENTS?\n>> > \n>> > This is wrong, the logging applies to all query parameters, not just for\n>> > prepared statements (and has nothing to do with controlling the\n>> > execution thereof). That was just the only way to test it when it was\n>> > written, because psql's \\bind command didn't exist yet then.\n>> \n>> I see your point. How is this?\n>> \n>> \tAllow auto_explain to log query parameters used by parameterized\n>> \tstatements (Dagfinn Ilmari Mannsåker)\n>> \n>> \tThis affects queries using server-side PREPARE/EXECUTE\n>> \tand client-side parse/bind. Logging is controlled by\n>> \tauto_explain.log_parameter_max_length;\tby default query\n>> \tparameters will be logged with no length restriction.\n>\n> Done, attached patch applied.\n\nThat works for me. 
Thanks!\n\n- ilmari\n\n\n", "msg_date": "Tue, 30 May 2023 11:28:24 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Hi,\n\nOn Thu, May 18, 2023 at 4:49 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have completed the first draft of the PG 16 release notes. You can\n> see the output here:\n>\n\nI have one suggestion on this item:\n\n<!--\nAuthor: Amit Kapila <akapila@postgresql.org>\n2022-07-21 [366283961] Allow users to skip logical replication of data having o\nAuthor: Amit Kapila <akapila@postgresql.org>\n2022-09-08 [875693019] Raise a warning if there is a possibility of data from m\n-->\n\n<listitem>\n<para>\nAllow logical replication subscribers to process only changes that\nhave no origin (Vignesh C, Amit Kapila)\n</para>\n\n<para>\nThis can be used to avoid replication loops.\n</para>\n</listitem>\n\nI think it's better to mention the new 'origin' option as other new\nsubscription options are mentioned. For example,\n\n<para>\nThis can be used to avoid replication loops. This can be controlled by\nthe subscription \"origin\" option.\n</para>\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 30 May 2023 06:33:09 -0400", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, May 30, 2023 at 06:33:09AM -0400, Masahiko Sawada wrote:\n> Hi,\n> \n> On Thu, May 18, 2023 at 4:49 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have completed the first draft of the PG 16 release notes. 
You can\n> > see the output here:\n> >\n> \n> I have one suggestion on this item:\n> \n> <!--\n> Author: Amit Kapila <akapila@postgresql.org>\n> 2022-07-21 [366283961] Allow users to skip logical replication of data having o\n> Author: Amit Kapila <akapila@postgresql.org>\n> 2022-09-08 [875693019] Raise a warning if there is a possibility of data from m\n> -->\n> \n> <listitem>\n> <para>\n> Allow logical replication subscribers to process only changes that\n> have no origin (Vignesh C, Amit Kapila)\n> </para>\n> \n> <para>\n> This can be used to avoid replication loops.\n> </para>\n> </listitem>\n> \n> I think it's better to mention the new 'origin' option as other new\n> subscription options are mentioned. For example,\n> \n> <para>\n> This can be used to avoid replication loops. This can be controlled by\n> the subscription \"origin\" option.\n> </para>\n\nGreat, new text is:\n\n\tThis can be used to avoid replication loops. This is controlled\n\tby the new CREATE SUBSCRIPTION \"origin\" option.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 30 May 2023 19:07:37 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 25, 2023 at 05:57:25PM +1200, David Rowley wrote:\n> On Thu, 25 May 2023 at 05:45, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Wed, May 24, 2023 at 01:43:50PM -0400, Bruce Momjian wrote:\n> > > > * Reduce palloc() memory overhead for all memory allocations down to 8\n> > > > bytes on all platforms. 
(Andres Freund, David Rowley)\n> > > >\n> > > > This allows more efficient use of memory and is especially useful in\n> > > > queries which perform operations (such as sorting or hashing) that\n> > > > require more than work_mem.\n> > >\n> > > Well, this would go in the source code section, but it seems too\n> > > internal and global to mention.\n> >\n> > What was the previous memory allocation overhead?\n> \n> On 64-bit builds, it was 16 bytes for AllocSet contexts, 24 bytes for\n> generation contexts and 16 bytes for slab contexts.\n\nOkay, item added to Source Code:\n\n\t<!--\n\tAuthor: David Rowley <drowley@postgresql.org>\n\t2022-08-29 [c6e0fe1f2] Improve performance of and reduce overheads of memory ma\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tReduce overhead of memory allocations (Andres Freund, David Rowley)\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 30 May 2023 19:31:59 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, 31 May 2023 at 11:32, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, May 25, 2023 at 05:57:25PM +1200, David Rowley wrote:\n> > On 64-bit builds, it was 16 bytes for AllocSet contexts, 24 bytes for\n> > generation contexts and 16 bytes for slab contexts.\n>\n> Okay, item added to Source Code:\n\nI don't think this should go under \"E.1.3.11. Source Code\". The patch\nwas entirely aimed to increase performance, not just of allocations\nthemselves, but of any operations which uses palloc'd memory. This is\ndue to the patch increasing the density of memory allocation on blocks\nmalloc'd by our memory context code so that fewer CPU cache lines need\nto be touched in the entire backend process for *all* memory that's\nallocated with palloc. 
The performance increase here can be fairly\nsignificant for small-sized palloc requests when CPU cache pressure is\nhigh. Since CPU caches aren't that big, it does not take much of a\nquery to put the cache pressure up. Hashing or sorting a few million\nrows is going to do that.\n\nThe patch here was born out of the regression report I made in [1],\nwhich I mention in [2] about the prototype patch Andres wrote to fix\nthe performance regression.\n\nI think \"E.1.3.1.2. General Performance\" might be a better location.\nHaving it under \"Source Code\" makes it sound like it was some\nrefactoring work. That's certainly not the case.\n\nA bit more detail:\n\nHere's a small histogram of the number of allocations in various size\nbuckets from running make check with some debug output in\nAllocSetAlloc and GenerationAlloc to record the size of the\nallocation:\n\n bucket | number_of_allocations | percent_of_total_allocations\n----------------+-----------------------+---------\n up to 16 bytes | 8,881,106 | 31.39\n up to 32 bytes | 4,579,608 | 16.18\n up to 64 bytes | 6,574,107 | 23.23\n above 64 bytes | 8,260,714 | 29.19\n\nSo quite a large portion of our allocations (at least in our test\nsuite) are small. Halving the 16-byte chunk header down 8 bytes on a\n16-byte allocation means a 25% memory saving.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvqXpLzav6dUeR5vO_RBh_feHrHMLhigVQXw9jHCyKP9PA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAApHDvowHNSVLhMc0cnovg8PfnYQZxit-gP_bn3xkT4rZX3G0w%40mail.gmail.com\n\n\n", "msg_date": "Wed, 31 May 2023 18:03:01 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, May 31, 2023 at 06:03:01PM +1200, David Rowley wrote:\n> I don't think this should go under \"E.1.3.11. Source Code\". 
The patch\n> was entirely aimed to increase performance, not just of allocations\n> themselves, but of any operations which uses palloc'd memory. This is\n> due to the patch increasing the density of memory allocation on blocks\n> malloc'd by our memory context code so that fewer CPU cache lines need\n> to be touched in the entire backend process for *all* memory that's\n> allocated with palloc. The performance increase here can be fairly\n> significant for small-sized palloc requests when CPU cache pressure is\n> high. Since CPU caches aren't that big, it does not take much of a\n> query to put the cache pressure up. Hashing or sorting a few million\n> rows is going to do that.\n> \n> The patch here was born out of the regression report I made in [1],\n> which I mention in [2] about the prototype patch Andres wrote to fix\n> the performance regression.\n> \n> I think \"E.1.3.1.2. General Performance\" might be a better location.\n> Having it under \"Source Code\" makes it sound like it was some\n> refactoring work. That's certainly not the case.\n\nOkay, moved.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 31 May 2023 07:01:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Hello,\n\nOn Thu, 18 May 2023 16:49:47 -0400\nBruce Momjian <bruce@momjian.us> wrote:\n\n> I have completed the first draft of the PG 16 release notes. You can\n> see the output here:\n\nThanks for the release notes.\n\n> \n> \thttps://momjian.us/pgsql_docs/release-16.html\n> \n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n\nI didn't find the following in the release note. This might be\nconsidered as a bug fix, but the change in this commit can potentially\nimpact applications. 
Is it worth including it in the release note?\n\ncommit 43351557d0d2b9c5e20298b5fee2849abef86aff\nMake materialized views participate in predicate locking\n\nRegards,\nYugo Nagata\n\n\n> I learned a few things creating it this time:\n> \n> * I can get confused over C function names and SQL function names in\n> commit messages.\n> \n> * The sections and ordering of the entries can greatly clarify the\n> items.\n> \n> * The feature count is slightly higher than recent releases:\n> \n> \trelease-10: 189\n> \trelease-11: 170\n> \trelease-12: 180\n> \trelease-13: 178\n> \trelease-14: 220\n> \trelease-15: 184\n> -->\trelease-16: 200\n> \n> -- \n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n> \n> Only you can decide what is important to you.\n> \n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 5 Jun 2023 17:33:51 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Mon, Jun 5, 2023 at 05:33:51PM +0900, Yugo NAGATA wrote:\n> Hello,\n> \n> On Thu, 18 May 2023 16:49:47 -0400\n> Bruce Momjian <bruce@momjian.us> wrote:\n> \n> > I have completed the first draft of the PG 16 release notes. You can\n> > see the output here:\n> \n> Thanks for the release notes.\n> \n> > \n> > \thttps://momjian.us/pgsql_docs/release-16.html\n> > \n> > I will adjust it to the feedback I receive; that URL will quickly show\n> > all updates.\n> \n> I didn't find the following in the release note. This might be\n> considered as a bug fix, but the change in this commit can potentially\n> impact applications. Is it worth including it in the release note?\n> \n> commit 43351557d0d2b9c5e20298b5fee2849abef86aff\n> Make materialized views participate in predicate locking\n\nI did look at this commit and decided, though it is a behavior change,\nthat it is probably something that would be caught during upgrade\ntesting. 
I thought it was rare enough, and so hard to describe\nhow to adjust for it, that it should not be mentioned. If we find that\nusers do hit this issue, or others, during beta, we can add it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 5 Jun 2023 11:42:43 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Hi,\n\nOn Fri, May 19, 2023 at 5:49 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have completed the first draft of the PG 16 release notes. You can\n> see the output here:\n>\n> https://momjian.us/pgsql_docs/release-16.html\n>\n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n>\n\n<!--\nAuthor: Michael Paquier <michael@paquier.xyz>\n2023-03-14 [5c1b66280] Rework design of functions in pg_walinspect\n-->\n\n<listitem>\n<para>\nRemove pg_walinspect functions\npg_get_wal_records_info_till_end_of_wal() and\npg_get_wal_stats_till_end_of_wal().\n</para>\n</listitem>\n\nI found that this item misses the author, Bharath Rupireddy. Please\nadd his name.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 8 Jun 2023 14:23:33 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Mon, 5 Jun 2023 11:42:43 -0400\nBruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Jun 5, 2023 at 05:33:51PM +0900, Yugo NAGATA wrote:\n> > Hello,\n> > \n> > On Thu, 18 May 2023 16:49:47 -0400\n> > Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > > I have completed the first draft of the PG 16 release notes. 
You can\n> > > see the output here:\n> > \n> > Thanks for the release notes.\n> > \n> > > \n> > > \thttps://momjian.us/pgsql_docs/release-16.html\n> > > \n> > > I will adjust it to the feedback I receive; that URL will quickly show\n> > > all updates.\n> > \n> > I didn't find the following in the release note. This might be\n> > considered as a bug fix, but the change in this commit can potentially\n> > impact applications. Is it worth including it in the release note?\n> > \n> > commit 43351557d0d2b9c5e20298b5fee2849abef86aff\n> > Make materialized views participate in predicate locking\n> \n> I did look at this commit and decided, though it is a behavior change,\n> that it is probably something that would be caught during upgrade\n> testing. I thought it was rare enough, and so hard to describe\n> how to adjust for it, that it should not be mentioned. If we find that\n> users do hit this issue, or others, during beta, we can add it.\n\nThank you for replying. I understood.\n\nRegards,\nYugo Nagata\n\n> \n> -- \n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n> \n> Only you can decide what is important to you.\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 8 Jun 2023 15:38:17 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, Jun 8, 2023 at 02:23:33PM +0900, Masahiko Sawada wrote:\n> Hi,\n> \n> On Fri, May 19, 2023 at 5:49 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have completed the first draft of the PG 16 release notes. 
You can\n> > see the output here:\n> >\n> > https://momjian.us/pgsql_docs/release-16.html\n> >\n> > I will adjust it to the feedback I receive; that URL will quickly show\n> > all updates.\n> >\n> \n> <!--\n> Author: Michael Paquier <michael@paquier.xyz>\n> 2023-03-14 [5c1b66280] Rework design of functions in pg_walinspect\n> -->\n> \n> <listitem>\n> <para>\n> Remove pg_walinspect functions\n> pg_get_wal_records_info_till_end_of_wal() and\n> pg_get_wal_stats_till_end_of_wal().\n> </para>\n> </listitem>\n> \n> I found that this item misses the author, Bharath Rupireddy. Please\n> add his name.\n\nThanks, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 9 Jun 2023 21:04:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Adding to this thread as suggested by jkatz for consideration of\nadding to release notes...\n\nIn [1] I mention the omission of ldap_password_hook and a suggested paragraph.\n\nRoberto\n\n[1] https://www.postgresql.org/message-id/CAKz%3D%3DbLzGb-9O294AoZHqEWpAi2Ki58yCr4gaqg1HnZyh3L1uA%40mail.gmail.com\n\n\n", "msg_date": "Tue, 27 Jun 2023 15:49:44 -0600", "msg_from": "Roberto Mello <roberto.mello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, Jun 27, 2023 at 03:49:44PM -0600, Roberto Mello wrote:\n> Adding to this thread as suggested by jkatz for consideration of\n> adding to release notes...\n> \n> In [1] I mention the omission of ldap_password_hook and a suggested paragraph.\n> \n> Roberto\n> \n> [1] https://www.postgresql.org/message-id/CAKz%3D%3DbLzGb-9O294AoZHqEWpAi2Ki58yCr4gaqg1HnZyh3L1uA%40mail.gmail.com\n\nI did see that commit:\n\n\tcommit 419a8dd814\n\tAuthor: Andrew Dunstan <andrew@dunslane.net>\n\tDate: Wed Mar 15 16:37:28 2023 -0400\n\t\n\t 
Add a hook for modifying the ldapbind password\n\t\n\t The hook can be installed by a shared_preload library.\n\t\n\t A similar mechanism could be used for radius paswords, for example, and\n\t the type name auth_password_hook_typ has been shosen with that in mind.\n\t\n\t John Naylor and Andrew Dunstan\n\t\n\t Discussion: https://postgr.es/m/469b06ed-69de-ba59-c13a-91d2372e52a9@dunslane.net\n\nHowever, there is no user documentation of this hook, so it didn't seem\nlike something to add to the release notes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 27 Jun 2023 22:56:07 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Hi,\n\nThanks for making the release notes. I found the release note of\nPG16 beta2 mentions a reverted following feature.\n\n```\n<!--\nAuthor: Jeff Davis <jdavis@postgresql.org>\n2023-03-09 [27b62377b] Use ICU by default at initdb time.\n-->\n\n<listitem>\n<para>\nHave initdb use ICU by default if ICU is enabled in the binary (Jeff \nDavis)\n</para>\n\n<para>\nOption --locale-provider=libc can be used to disable ICU.\n</para>\n</listitem>\n```\n\nUnfortunately, the feature is reverted with the commit.\n* \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=2535c74b1a6190cc42e13f6b6b55d94bff4b7dd6\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 30 Jun 2023 17:29:17 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, Jun 30, 2023 at 05:29:17PM +0900, Masahiro Ikeda wrote:\n> Hi,\n> \n> Thanks for making the release notes. 
I found the release note of\n> PG16 beta2 mentions a reverted following feature.\n> \n> ```\n> <!--\n> Author: Jeff Davis <jdavis@postgresql.org>\n> 2023-03-09 [27b62377b] Use ICU by default at initdb time.\n> -->\n> \n> <listitem>\n> <para>\n> Have initdb use ICU by default if ICU is enabled in the binary (Jeff Davis)\n> </para>\n> \n> <para>\n> Option --locale-provider=libc can be used to disable ICU.\n> </para>\n> </listitem>\n> ```\n> \n> Unfortunately, the feature is reverted with the commit.\n> * https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=2535c74b1a6190cc42e13f6b6b55d94bff4b7dd6\n\nOh, I didn't notice the revert --- item removed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 30 Jun 2023 17:36:08 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 18, 2023 at 04:49:47PM -0400, Bruce Momjian wrote:\n> I have completed the first draft of the PG 16 release notes. You can\n> see the output here:\n> \n> \thttps://momjian.us/pgsql_docs/release-16.html\n> \n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n\nSawada-san has mentioned on twitter that fdd8937 is not mentioned in\nthe release notes, and it seems to me that he is right. This is\ndescribed as a bug in the commit log, but it did not get backpatched\nbecause of the lack of complaints. 
Also, because we've removed\nsupport for anything older than Windows 10 in PG16, this change was very\neasy to do.\n--\nMichael", "msg_date": "Tue, 4 Jul 2023 15:31:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, Jul 4, 2023 at 03:31:05PM +0900, Michael Paquier wrote:\n> On Thu, May 18, 2023 at 04:49:47PM -0400, Bruce Momjian wrote:\n> > I have completed the first draft of the PG 16 release notes. You can\n> > see the output here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-16.html\n> > \n> > I will adjust it to the feedback I receive; that URL will quickly show\n> > all updates.\n> \n> Sawada-san has mentioned on twitter that fdd8937 is not mentioned in\n> the release notes, and it seems to me that he is right. This is\n> described as a bug in the commit log, but it did not get backpatched\n> because of the lack of complaints. Also, because we've removed\n> support for anything older than Windows 10 in PG16, this change was very\n> easy to do.\n\nI did review this and wasn't sure exactly what I would describe. 
It is\nsaying huge pages will now work on some versions of Windows 10 but\ndidn't before?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 4 Jul 2023 17:32:07 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, 2023-05-18 at 16:49 -0400, Bruce Momjian wrote:\n> I have completed the first draft of the PG 16 release notes.\n\nThe release notes say:\n\n- Prevent \\df+ from showing function source code (Isaac Morland)\n\n Function bodies are more easily viewed with \\ev and \\ef.\n\n\nThat should be \\sf, not \\ev or \\ef, right?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 14 Jul 2023 20:16:38 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, 2023-05-18 at 16:49 -0400, Bruce Momjian wrote:\n> I have completed the first draft of the PG 16 release notes.\n\nThe release notes still have:\n\n- Have initdb use ICU by default if ICU is enabled in the binary (Jeff Davis)\n\n Option --locale-provider=libc can be used to disable ICU.\n\n\nBut this was reverted in 2535c74b1a6190cc42e13f6b6b55d94bff4b7dd6.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 14 Jul 2023 20:20:59 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Op 7/4/23 om 23:32 schreef Bruce Momjian:\n>>> \thttps://momjian.us/pgsql_docs/release-16.html\n\nI noticed these:\n\n'new new RULES' should be\n'new RULES'\n\n'Perform apply of large transactions' should be\n'Performs apply of large transactions'\n (I think)\n\n'SQL JSON paths' should be\n'SQL/JSON paths'\n\nErik Rijkers\n\n\n\n\n\n", "msg_date": "Fri, 14 Jul 2023 21:29:42 +0200", "msg_from": "Erik Rijkers 
<er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "> https://momjian.us/pgsql_docs/release-16.html\n>\n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n\n> Create a predefined role and grantable privilege with permission to perform maintenance operations (Nathan Bossart)\n> The predefined role is is called pg_maintain.\n\nthis feature was also reverted.\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=151c22deee66a3390ca9a1c3675e29de54ae73fc\n\n\n", "msg_date": "Sun, 23 Jul 2023 12:45:17 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "Please consider to add item to the psql section:\n\nAdd psql \\drg command to display role grants and remove the \"Member of\" \ncolumn from \\du & \\dg altogether (d65ddaca)\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Sun, 23 Jul 2023 14:09:17 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, Jul 04, 2023 at 05:32:07PM -0400, Bruce Momjian wrote:\n> On Tue, Jul 4, 2023 at 03:31:05PM +0900, Michael Paquier wrote:\n>> On Thu, May 18, 2023 at 04:49:47PM -0400, Bruce Momjian wrote:\n>> Sawada-san has mentioned on twitter that fdd8937 is not mentioned in\n>> the release notes, and it seems to me that he is right. This is\n>> described as a bug in the commit log, but it did not get backpatched\n>> because of the lack of complaints. Also, because we've removed\n>> support for anything older than Windows 10 in PG16, this change very\n>> easy to do.\n> \n> I did review this and wasn't sure exactly what I would describe. 
It is\n> saying huge pages will now work on some versions of Windows 10 but\n> didn't before?\n\nWindows 10 has always used a forced automated rolling upgrade process,\nso there are not many versions older than 1703, I suppose. I don't\nknow if large pages were working before 1703 where\nFILE_MAP_LARGE_PAGES has been introduced, and I have never been able\nto test that. Honestly, I don't think that we need to be picky about\nthe version mentioned, as per the forced upgrade process done by\nMicrosoft.\n\nSo, my preference would be to keep it simple and add an item like \"Fix\nhuge pages on Windows 10 and newer versions\", with as potential\nsubnote \"The backend sets a flag named FILE_MAP_LARGE_PAGES to allow\nhuge pages\", though this is not really mandatory to go down to this\nlevel of internals, either.\n--\nMichael", "msg_date": "Sun, 23 Jul 2023 20:19:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, May 18, 2023 at 04:49:47PM -0400, Bruce Momjian wrote:\n> \thttps://momjian.us/pgsql_docs/release-16.html\n\n> <!--\n> Author: Robert Haas <rhaas@postgresql.org>\n> 2023-01-10 [cf5eb37c5] Restrict the privileges of CREATEROLE users.\n> -->\n> \n> <listitem>\n> <para>\n> Restrict the privileges of CREATEROLE roles (Robert Haas)\n> </para>\n> \n> <para>\n> Previously roles with CREATEROLE privileges could change many aspects of any non-superuser role. 
Such changes, including adding members, now require the role requesting the change to have ADMIN OPTION\n> permission.\n> </para>\n> </listitem>\n> \n> <!--\n> Author: Robert Haas <rhaas@postgresql.org>\n> 2023-01-24 [f1358ca52] Adjust interaction of CREATEROLE with role properties.\n> -->\n> \n> <listitem>\n> <para>\n> Improve logic of CREATEROLE roles ability to control other roles (Robert Haas)\n> </para>\n> \n> <para>\n> For example, they can change the CREATEDB, REPLICATION, and BYPASSRLS properties only if they also have those permissions.\n> </para>\n> </listitem>\n\nCREATEROLE is a radically different feature in v16. In v15-, it was an\nalmost-superuser. In v16, informally speaking, it can create and administer\nits own collection of roles, but it can't administer roles outside its\ncollection or grant memberships or permissions not offered to itself. Hence,\nlet's move these two into the incompatibilities section. Let's also merge\nthem, since f1358ca52 is just doing to clauses like CREATEDB what cf5eb37c5\ndid to role memberships.\n\n> <!--\n> Author: Robert Haas <rhaas@postgresql.org>\n> 2022-08-25 [e3ce2de09] Allow grant-level control of role inheritance behavior.\n> -->\n> \n> <listitem>\n> <para>\n> Allow GRANT to control role inheritance behavior (Robert Haas)\n> </para>\n> \n> <para>\n> By default, role inheritance is controlled by the inheritance status of the member role. 
The new GRANT clauses WITH INHERIT and WITH ADMIN can now override this.\n> </para>\n> </listitem>\n> \n> <!--\n> Author: Robert Haas <rhaas@postgresql.org>\n> 2023-01-10 [e5b8a4c09] Add new GUC createrole_self_grant.\n> Author: Daniel Gustafsson <dgustafsson@postgresql.org>\n> 2023-02-22 [e00bc6c92] doc: Add default value of createrole_self_grant\n> -->\n> \n> <listitem>\n> <para>\n> Allow roles that create other roles to automatically inherit the new role's rights or SET ROLE to the new role (Robert Haas, Shi Yu)\n> </para>\n> \n> <para>\n> This is controlled by server variable createrole_self_grant.\n> </para>\n> </listitem>\n\nSimilarly, v16 radically changes the CREATE ROLE ... WITH INHERIT clause. The\nclause used to \"change the behavior of already-existing grants.\" Let's merge\nthese two and move the combination to the incompatibilities section.\n\n> Remove libpq support for SCM credential authentication (Michael Paquier)\n\nSince the point of removing it is the deep unlikelihood of anyone using it, I\nwouldn't list this in \"incompatibilities\".\n\n> Deprecate createuser option --role (Nathan Bossart)\n\nThis is indeed a deprecation, not a removal. By the definition of\n\"deprecate\", it's not an incompatibility.\n\n\n", "msg_date": "Sat, 5 Aug 2023 16:08:47 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, Jul 14, 2023 at 08:16:38PM +0200, Laurenz Albe wrote:\n> On Thu, 2023-05-18 at 16:49 -0400, Bruce Momjian wrote:\n> > I have completed the first draft of the PG 16 release notes.\n> \n> The release notes say:\n> \n> - Prevent \\df+ from showing function source code (Isaac Morland)\n> \n> Function bodies are more easily viewed with \\ev and \\ef.\n> \n> \n> That should be \\sf, not \\ev or \\ef, right?\n\nAgreed, fixed. 
I am not sure why I put \\ev and \\ef there.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 9 Aug 2023 13:24:12 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, Jul 14, 2023 at 08:20:59PM +0200, Laurenz Albe wrote:\n> On Thu, 2023-05-18 at 16:49 -0400, Bruce Momjian wrote:\n> > I have completed the first draft of the PG 16 release notes.\n> \n> The release notes still have:\n> \n> - Have initdb use ICU by default if ICU is enabled in the binary (Jeff Davis)\n> \n> Option --locale-provider=libc can be used to disable ICU.\n> \n> \n> But this was reverted in 2535c74b1a6190cc42e13f6b6b55d94bff4b7dd6.\n\nFYI, this was corrected in this commit:\n\n\tcommit c729642bd7\n\tAuthor: Bruce Momjian <bruce@momjian.us>\n\tDate: Fri Jun 30 17:35:47 2023 -0400\n\t\n\t doc: PG 16 relnotes, remove \"Have initdb use ICU by default\"\n\t\n\t Item reverted.\n\t\n\t Backpatch-through: 16 only\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 9 Aug 2023 13:29:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Fri, Jul 14, 2023 at 09:29:42PM +0200, Erik Rijkers wrote:\n> Op 7/4/23 om 23:32 schreef Bruce Momjian:\n> > > > \thttps://momjian.us/pgsql_docs/release-16.html\n> \n> I noticed these:\n> \n> 'new new RULES' should be\n> 'new RULES'\n> \n> 'Perform apply of large transactions' should be\n> 'Performs apply of large transactions'\n> (I think)\n> \n> 'SQL JSON paths' should be\n> 'SQL/JSON paths'\n\nFixed with the attached patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can 
decide what is important to you.", "msg_date": "Wed, 9 Aug 2023 13:57:09 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sun, Jul 23, 2023 at 12:45:17PM +0800, jian he wrote:\n> > https://momjian.us/pgsql_docs/release-16.html\n> >\n> > I will adjust it to the feedback I receive; that URL will quickly show\n> > all updates.\n> \n> > Create a predefined role and grantable privilege with permission to perform maintenance operations (Nathan Bossart)\n> > The predefined role is is called pg_maintain.\n> \n> this feature was also reverted.\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=151c22deee66a3390ca9a1c3675e29de54ae73fc\n\nThanks, fixed based on the earlier report by Laurenz Albe.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 9 Aug 2023 13:58:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sun, Jul 23, 2023 at 02:09:17PM +0300, Pavel Luzanov wrote:\n> Please consider to add item to the psql section:\n> \n> Add psql \\\\drg command to display role grants and remove the \\"Member of\\"\n> column from \\\\du & \\\\dg altogether (d65ddaca)\n\nThe release notes are only current as of 2023-06-26 and I will consider\nthis when I update them next week, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 9 Aug 2023 14:06:51 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sun, Jul 23, 2023 at 08:19:55PM +0900, Michael Paquier wrote:\n> On Tue, Jul 04, 2023 at 05:32:07PM -0400, Bruce Momjian wrote:\n> > On Tue, 
Jul 4, 2023 at 03:31:05PM +0900, Michael Paquier wrote:\n> >> On Thu, May 18, 2023 at 04:49:47PM -0400, Bruce Momjian wrote:\n> >> Sawada-san has mentioned on twitter that fdd8937 is not mentioned in\n> >> the release notes, and it seems to me that he is right. This is\n> >> described as a bug in the commit log, but it did not get backpatched\n> >> because of the lack of complaints. Also, because we've removed\n> >> support for anything older than Windows 10 in PG16, this change very\n> >> easy to do.\n> > \n> > I did review this and wasn't sure exactly what I would describe. It is\n> > saying huge pages will now work on some versions of Windows 10 but\n> > didn't before?\n> \n> Windows 10 has always used a forced automated rolling upgrade process,\n> so there are not many versions older than 1703, I suppose. I don't\n> know if large pages were working before 1703 where\n> FILE_MAP_LARGE_PAGES has been introduced, and I have never been able\n> to test that. Honestly, I don't think that we need to be picky about\n> the version mentioned, as per the forced upgrade process done by\n> Microsoft.\n> \n> So, my preference would be to keep it simple and add an item like \"Fix\n> huge pages on Windows 10 and newer versions\", with as potential\n> subnote \"The backend sets a flag named FILE_MAP_LARGE_PAGES to allow\n> huge pages\", though this is not really mandatory to go down to this\n> level of internals, either.\n\nThat is very helpful. 
I added this to the release notes Server\nConfiguration section:\n\n\t<!--\n\tAuthor: Michael Paquier <michael@paquier.xyz>\n\t2022-09-17 [fdd8937c0] Fix huge_pages on Windows\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tAllow huge pages to work on newer versions of Windows 10 (Thomas Munro)\n\t</para>\n\t\n\t<para>\n\tThis adds the special handling required to enable huge pages on newer\n\tversions of Windows 10.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 9 Aug 2023 17:45:27 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sat, Aug 5, 2023 at 04:08:47PM -0700, Noah Misch wrote:\n> On Thu, May 18, 2023 at 04:49:47PM -0400, Bruce Momjian wrote:\n> > \thttps://momjian.us/pgsql_docs/release-16.html\n> \n> > <!--\n> > Author: Robert Haas <rhaas@postgresql.org>\n> > 2023-01-10 [cf5eb37c5] Restrict the privileges of CREATEROLE users.\n> > -->\n> > \n> > <listitem>\n> > <para>\n> > Restrict the privileges of CREATEROLE roles (Robert Haas)\n> > </para>\n> > \n> > <para>\n> > Previously roles with CREATEROLE privileges could change many aspects of any non-superuser role. 
Such changes, including adding members, now require the role requesting the change to have ADMIN OPTION\n> > permission.\n> > </para>\n> > </listitem>\n> > \n> > <!--\n> > Author: Robert Haas <rhaas@postgresql.org>\n> > 2023-01-24 [f1358ca52] Adjust interaction of CREATEROLE with role properties.\n> > -->\n> > \n> > <listitem>\n> > <para>\n> > Improve logic of CREATEROLE roles ability to control other roles (Robert Haas)\n> > </para>\n> > \n> > <para>\n> > For example, they can change the CREATEDB, REPLICATION, and BYPASSRLS properties only if they also have those permissions.\n> > </para>\n> > </listitem>\n> \n> CREATEROLE is a radically different feature in v16. In v15-, it was an\n> almost-superuser. In v16, informally speaking, it can create and administer\n> its own collection of roles, but it can't administer roles outside its\n> collection or grant memberships or permissions not offered to itself. Hence,\n> let's move these two into the incompatibilities section. Let's also merge\n> them, since f1358ca52 is just doing to clauses like CREATEDB what cf5eb37c5\n> did to role memberships.\n\nGood point. I have adjusted this item with the attached patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Wed, 9 Aug 2023 18:03:15 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, Aug 09, 2023 at 05:45:27PM -0400, Bruce Momjian wrote:\n> That is very helpful. 
I added this to the release notes Server\n> Configuration section:\n> \n> \t<listitem>\n> \t<para>\n> \tAllow huge pages to work on newer versions of Windows 10 (Thomas Munro)\n> \t</para>\n> \t\n> \t<para>\n> \tThis adds the special handling required to enable huge pages on newer\n> \tversions of Windows 10.\n> \t</para>\n> \t</listitem>\n\nLooks good to me, thanks for updating the notes!\n--\nMichael", "msg_date": "Thu, 10 Aug 2023 09:07:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sat, Aug 5, 2023 at 04:08:47PM -0700, Noah Misch wrote:\n> > Author: Robert Haas <rhaas@postgresql.org>\n> > 2022-08-25 [e3ce2de09] Allow grant-level control of role inheritance behavior.\n> > -->\n> > \n> > <listitem>\n> > <para>\n> > Allow GRANT to control role inheritance behavior (Robert Haas)\n> > </para>\n> > \n> > <para>\n> > By default, role inheritance is controlled by the inheritance status of the member role. The new GRANT clauses WITH INHERIT and WITH ADMIN can now override this.\n> > </para>\n> > </listitem>\n> > \n> > <!--\n> > Author: Robert Haas <rhaas@postgresql.org>\n> > 2023-01-10 [e5b8a4c09] Add new GUC createrole_self_grant.\n> > Author: Daniel Gustafsson <dgustafsson@postgresql.org>\n> > 2023-02-22 [e00bc6c92] doc: Add default value of createrole_self_grant\n> > -->\n> > \n> > <listitem>\n> > <para>\n> > Allow roles that create other roles to automatically inherit the new role's rights or SET ROLE to the new role (Robert Haas, Shi Yu)\n> > </para>\n> > \n> > <para>\n> > This is controlled by server variable createrole_self_grant.\n> > </para>\n> > </listitem>\n> \n> Similarly, v16 radically changes the CREATE ROLE ... WITH INHERIT clause. The\n> clause used to \"change the behavior of already-existing grants.\" Let's merge\n> these two and move the combination to the incompatibilities section.\n\nI need help with this. 
I don't understand how they can be combined, and\nI don't understand the incompatibility text in commit e3ce2de09d:\n\n If a GRANT does not specify WITH INHERIT, the behavior based on\n whether the member role is marked INHERIT or NOINHERIT. This means\n that if all roles are marked INHERIT or NOINHERIT before any role\n grants are performed, the behavior is identical to what we had before;\n otherwise, it's different, because ALTER ROLE [NO]INHERIT now only\n changes the default behavior of future grants, and has no effect on\n existing ones.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 9 Aug 2023 20:35:21 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sat, Aug 5, 2023 at 04:08:47PM -0700, Noah Misch wrote:\n> > Remove libpq support for SCM credential authentication (Michael Paquier)\n> \n> Since the point of removing it is the deep unlikelihood of anyone using it, I\n> wouldn't list this in \"incompatibilities\".\n\nI moved this to the Source Code section.\n\n> > Deprecate createuser option --role (Nathan Bossart)\n> \n> This is indeed a deprecation, not a removal. By the definition of\n> \"deprecate\", it's not an incompatibility.\n\nI moved this to the Server Applications section.\n\nThanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 9 Aug 2023 20:48:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "FYI, the current PG 16 release notes are available at:\n\n\thttps://momjian.us/pgsql_docs/release-16.html\n\nI plan to add markup next week. 
I am sorry I was away most of July so\ncould not update this until now.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 9 Aug 2023 22:11:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On 09.08.2023 21:06, Bruce Momjian wrote:\n> On Sun, Jul 23, 2023 at 02:09:17PM +0300, Pavel Luzanov wrote:\n>> Please consider to add item to the psql section:\n>>\n>> Add psql \\drg command to display role grants and remove the \"Member of\"\n>> column from \\du & \\dg altogether (d65ddaca)\n> The release notes are only current as of 2023-06-26 and I will consider\n> this when I updated them next week, thanks.\n\nThis item is a part of Beta 3 scheduled for August 10, 2023 (today). [1]\nIt might be worth updating the release notes before the release.\nBut I'm not familiar with the release process in detail, so I could be \nwrong.\n\n1. 
\nhttps://www.postgresql.org/message-id/93c00ac3-08b3-33ba-5d77-6ceb6ab20254%40postgresql.org\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Thu, 10 Aug 2023 07:56:12 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, Aug 10, 2023 at 07:56:12AM +0300, Pavel Luzanov wrote:\n> On 09.08.2023 21:06, Bruce Momjian wrote:\n> > On Sun, Jul 23, 2023 at 02:09:17PM +0300, Pavel Luzanov wrote:\n> > > Please consider to add item to the psql section:\n> > > \n> > > Add psql \\drg command to display role grants and remove the \"Member of\"\n> > > column from \\du & \\dg altogether (d65ddaca)\n> > The release notes are only current as of 2023-06-26 and I will consider\n> > this when I updated them next week, thanks.\n> \n> This item is a part of Beta 3 scheduled for August 10, 2023 (today). [1]\n> It might be worth updating the release notes before the release.\n> But I'm not familiar with the release process in detail, so I could be\n> wrong.\n> \n> 1. 
https://www.postgresql.org/message-id/93c00ac3-08b3-33ba-5d77-6ceb6ab20254%40postgresql.org\n\nThe next text is:\n\n\t<listitem>\n\t<para>\n\tAllow psql's access privilege commands to show system objects (Nathan Bossart, Pavel Luzanov)\n\t</para>\n\t\n\t<para>\n-->\tThe options are \\dpS, \\zS, and \\drg.\n\t</para>\n\t</listitem>\n\nThe current release notes are at:\n\n\thttps://momjian.us/pgsql_docs/release-16.html\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 14 Aug 2023 19:13:42 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "You can view the Postgres 16 release notes, with markup and links to our\ndocs, here:\n\n\thttps://momjian.us/pgsql_docs/release-16.html\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 16 Aug 2023 22:36:05 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, Aug 9, 2023 at 08:35:21PM -0400, Bruce Momjian wrote:\n> On Sat, Aug 5, 2023 at 04:08:47PM -0700, Noah Misch wrote:\n> > > Author: Robert Haas <rhaas@postgresql.org>\n> > > 2022-08-25 [e3ce2de09] Allow grant-level control of role inheritance behavior.\n> > > -->\n> > > \n> > > <listitem>\n> > > <para>\n> > > Allow GRANT to control role inheritance behavior (Robert Haas)\n> > > </para>\n> > > \n> > > <para>\n> > > By default, role inheritance is controlled by the inheritance status of the member role. 
The new GRANT clauses WITH INHERIT and WITH ADMIN can now override this.\n> > > </para>\n> > > </listitem>\n> > > \n> > > <!--\n> > > Author: Robert Haas <rhaas@postgresql.org>\n> > > 2023-01-10 [e5b8a4c09] Add new GUC createrole_self_grant.\n> > > Author: Daniel Gustafsson <dgustafsson@postgresql.org>\n> > > 2023-02-22 [e00bc6c92] doc: Add default value of createrole_self_grant\n> > > -->\n> > > \n> > > <listitem>\n> > > <para>\n> > > Allow roles that create other roles to automatically inherit the new role's rights or SET ROLE to the new role (Robert Haas, Shi Yu)\n> > > </para>\n> > > \n> > > <para>\n> > > This is controlled by server variable createrole_self_grant.\n> > > </para>\n> > > </listitem>\n> > \n> > Similarly, v16 radically changes the CREATE ROLE ... WITH INHERIT clause. The\n> > clause used to \"change the behavior of already-existing grants.\" Let's merge\n> > these two and move the combination to the incompatibilities section.\n> \n> I need help with this. I don't understand how they can be combined, and\n> I don't understand the incompatibility text in commit e3ce2de09d:\n> \n> If a GRANT does not specify WITH INHERIT, the behavior based on\n> whether the member role is marked INHERIT or NOINHERIT. 
This means\n> that if all roles are marked INHERIT or NOINHERIT before any role\n> grants are performed, the behavior is identical to what we had before;\n> otherwise, it's different, because ALTER ROLE [NO]INHERIT now only\n> changes the default behavior of future grants, and has no effect on\n> existing ones.\n\nI am waiting for an answer to this question, or can I assume the release\nnotes are acceptable?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 16 Aug 2023 22:36:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On 17.08.2023 05:36, Bruce Momjian wrote:\n> On Wed, Aug 9, 2023 at 08:35:21PM -0400, Bruce Momjian wrote:\n>> On Sat, Aug 5, 2023 at 04:08:47PM -0700, Noah Misch wrote:\n>>>> Author: Robert Haas <rhaas@postgresql.org>\n>>>> 2022-08-25 [e3ce2de09] Allow grant-level control of role inheritance behavior.\n>>>> -->\n>>>>\n>>>> <listitem>\n>>>> <para>\n>>>> Allow GRANT to control role inheritance behavior (Robert Haas)\n>>>> </para>\n>>>>\n>>>> <para>\n>>>> By default, role inheritance is controlled by the inheritance status of the member role. 
The new GRANT clauses WITH INHERIT and WITH ADMIN can now override this.\n>>>> </para>\n>>>> </listitem>\n>>>>\n>>>> <!--\n>>>> Author: Robert Haas <rhaas@postgresql.org>\n>>>> 2023-01-10 [e5b8a4c09] Add new GUC createrole_self_grant.\n>>>> Author: Daniel Gustafsson <dgustafsson@postgresql.org>\n>>>> 2023-02-22 [e00bc6c92] doc: Add default value of createrole_self_grant\n>>>> -->\n>>>>\n>>>> <listitem>\n>>>> <para>\n>>>> Allow roles that create other roles to automatically inherit the new role's rights or SET ROLE to the new role (Robert Haas, Shi Yu)\n>>>> </para>\n>>>>\n>>>> <para>\n>>>> This is controlled by server variable createrole_self_grant.\n>>>> </para>\n>>>> </listitem>\n>>> Similarly, v16 radically changes the CREATE ROLE ... WITH INHERIT clause. The\n>>> clause used to \"change the behavior of already-existing grants.\" Let's merge\n>>> these two and move the combination to the incompatibilities section.\n>> I need help with this. I don't understand how they can be combined, and\n>> I don't understand the incompatibility text in commit e3ce2de09d:\n>>\n>> If a GRANT does not specify WITH INHERIT, the behavior based on\n>> whether the member role is marked INHERIT or NOINHERIT. 
This means\n>> that if all roles are marked INHERIT or NOINHERIT before any role\n>> grants are performed, the behavior is identical to what we had before;\n>> otherwise, it's different, because ALTER ROLE [NO]INHERIT now only\n>> changes the default behavior of future grants, and has no effect on\n>> existing ones.\n> I am waiting for an answer to this question, or can I assume the release\n> notes are acceptable?\n\nI can try to explain how I understand it myself.\n\nIn v15 and earlier, inheritance of privileges granted to a role depends on\nthe INHERIT attribute of the role:\n\ncreate user alice;\ngrant pg_read_all_settings to alice;\n\nBy default, privileges are inherited:\n\\\\c - alice\nshow data_directory;\n        data_directory\n-----------------------------\n /var/lib/postgresql/15/main\n(1 row)\n\nAfter disabling the INHERIT attribute, privileges are not inherited:\n\n\\\\c - postgres\nalter role alice noinherit;\n\n\\\\c - alice\nshow data_directory;\nERROR:  must be superuser or have privileges of pg_read_all_settings to\nexamine \"data_directory\"\n\nIn v16, changing the INHERIT attribute on the alice role doesn't change\nthe inheritance behavior of already-granted roles.\nIf we repeat the example, Alice still inherits pg_read_all_settings\nprivileges after disabling the INHERIT attribute for the role.\n\nInformation for making decisions about role inheritance has been moved\nfrom the role attribute to the GRANT role TO role [WITH INHERIT|NOINHERIT]\ncommand, and can be viewed with the new \\\\drg command:\n\n\\\\drg\n                    List of role grants\n Role name |      Member of       |   Options    | Grantor\n-----------+----------------------+--------------+----------\n alice     | pg_read_all_settings | INHERIT, SET | postgres\n(1 row)\n\nChanging the INHERIT attribute for a role now affects (as the\ndefault) only future GRANT commands that lack an explicit INHERIT clause.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Thu, 17 Aug 
2023 08:37:28 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, Aug 16, 2023 at 10:36:05PM -0400, Bruce Momjian wrote:\n> You can view the Postgres 16 release notes, with markup and links to our\n> docs, here:\n> \n> \thttps://momjian.us/pgsql_docs/release-16.html\n\nFYI, thanks to this patch:\n\n\tcommit 78ee60ed84\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\tDate: Mon Jan 9 15:08:24 2023 -0500\n\t\n\t Doc: add XML ID attributes to <sectN> and <varlistentry> tags.\n\t\n\t This doesn't have any external effect at the moment, but it\n\t will allow adding useful link-discoverability features later.\n\t\n\t Brar Piening, reviewed by Karl Pinc.\n\t\n\t Discussion: https://postgr.es/m/CAB8KJ=jpuQU9QJe4+RgWENrK5g9jhoysMw2nvTN_esoOU0=a_w@mail.gmail.com\n\nI was able to add more links to the docs, and the links were more\nprecise. I used to be frustrated I couldn't find nearby links to\ncontent, but I had no such troubles this year. 
I think the additional\nand more precise links will help people digest the release notes more\nefficiently.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 17 Aug 2023 10:59:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "I posted to pgsql-docs first, but was kindly redirected here by Jonathan:\n\nThe release notes for Postgres 16 says here:\nhttps://www.postgresql.org/docs/16/release-16.html#RELEASE-16-PERFORMANCE\n\nSame as here:\nhttps://momjian.us/pgsql_docs/release-16.html#RELEASE-16-PERFORMANCE\n\n Allow window functions to use ROWS mode internally when RANGE mode is\nspecified but unnecessary (David Rowley)\n\nBut the improvement (fix to some degree) also applies to the much more\ncommon case where no mode has been specified, RANGE unfortunately being the\ndefault.\nThat includes the most common use case \"row_number() OVER (ORDER BY col)\",\nwhere RANGE mode should not be applied to begin with, according to SQL\nspecs. This is what made me investigate, test and eventually propose a fix\nin the first place. See:\n\nhttps://www.postgresql.org/message-id/flat/CAGHENJ7LBBszxS%2BSkWWFVnBmOT2oVsBhDMB1DFrgerCeYa_DyA%40mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/CAApHDvohAKEtTXxq7Pc-ic2dKT8oZfbRKeEJP64M0B6%2BS88z%2BA%40mail.gmail.com\n\nAlso, I was hoping to be mentioned in the release note for working this out:\n\n Allow window functions to use the faster ROWS mode internally when\nRANGE mode is specified or would be default, but unnecessary (David Rowley,\nErwin Brandstetter)\n\n\nThanks,\nErwin\n\nOn Sat, 19 Aug 2023 at 04:02, Bruce Momjian <bruce@momjian.us> wrote:\n\n> I have completed the first draft of the PG 16 release notes. 
You can\n> see the output here:\n>\n> https://momjian.us/pgsql_docs/release-16.html\n>\n> I will adjust it to the feedback I receive; that URL will quickly show\n> all updates.\n>\n> I learned a few things creating it this time:\n>\n> * I can get confused over C function names and SQL function names in\n> commit messages.\n>\n> * The sections and ordering of the entries can greatly clarify the\n> items.\n>\n> * The feature count is slightly higher than recent releases:\n>\n> release-10: 189\n> release-11: 170\n> release-12: 180\n> release-13: 178\n> release-14: 220\n> release-15: 184\n> --> release-16: 200\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Only you can decide what is important to you.\n>\n", "msg_date": "Sat, 19 Aug 2023 04:24:48 +0200", "msg_from": "Erwin Brandstetter <brsaweda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, Aug 17, 2023 at 08:37:28AM +0300, Pavel Luzanov wrote:\n> On 17.08.2023 05:36, Bruce Momjian wrote:\n> > On Wed, Aug 9, 2023 at 08:35:21PM -0400, Bruce Momjian wrote:\n> > > On Sat, Aug 5, 2023 at 04:08:47PM -0700, Noah Misch wrote:\n> > > > > Author: Robert Haas <rhaas@postgresql.org>\n> > > > > 2022-08-25 [e3ce2de09] Allow 
grant-level control of role inheritance behavior.\n> > > > > -->\n> > > > > \n> > > > > <listitem>\n> > > > > <para>\n> > > > > Allow GRANT to control role inheritance behavior (Robert Haas)\n> > > > > </para>\n> > > > > \n> > > > > <para>\n> > > > > By default, role inheritance is controlled by the inheritance status of the member role. The new GRANT clauses WITH INHERIT and WITH ADMIN can now override this.\n> > > > > </para>\n> > > > > </listitem>\n> > > > > \n> > > > > <!--\n> > > > > Author: Robert Haas <rhaas@postgresql.org>\n> > > > > 2023-01-10 [e5b8a4c09] Add new GUC createrole_self_grant.\n> > > > > Author: Daniel Gustafsson <dgustafsson@postgresql.org>\n> > > > > 2023-02-22 [e00bc6c92] doc: Add default value of createrole_self_grant\n> > > > > -->\n> > > > > \n> > > > > <listitem>\n> > > > > <para>\n> > > > > Allow roles that create other roles to automatically inherit the new role's rights or SET ROLE to the new role (Robert Haas, Shi Yu)\n> > > > > </para>\n> > > > > \n> > > > > <para>\n> > > > > This is controlled by server variable createrole_self_grant.\n> > > > > </para>\n> > > > > </listitem>\n> > > > Similarly, v16 radically changes the CREATE ROLE ... WITH INHERIT clause. The\n> > > > clause used to \"change the behavior of already-existing grants.\" Let's merge\n> > > > these two and move the combination to the incompatibilities section.\n> > > I need help with this. I don't understand how they can be combined, and\n> > > I don't understand the incompatibility text in commit e3ce2de09d:\n> > > \n> > > If a GRANT does not specify WITH INHERIT, the behavior based on\n> > > whether the member role is marked INHERIT or NOINHERIT. 
This means\n> > > that if all roles are marked INHERIT or NOINHERIT before any role\n> > > grants are performed, the behavior is identical to what we had before;\n> > > otherwise, it's different, because ALTER ROLE [NO]INHERIT now only\n> > > changes the default behavior of future grants, and has no effect on\n> > > existing ones.\n> > I am waiting for an answer to this question, or can I assume the release\n> > notes are acceptable?\n> \n> I can try to explain how I understand it myself.\n> \n> In v15 and early, inheritance of granted to role privileges depends on\n> INHERIT attribute of a role:\n> \n> create user alice;\n> grant pg_read_all_settings to alice;\n> \n> By default privileges inherited:\n> \\c - alice\n> show data_directory;\n>        data_directory\n> -----------------------------\n>  /var/lib/postgresql/15/main\n> (1 row)\n> \n> After disabling the INHERIT attribute, privileges are not inherited:\n> \n> \\c - postgres\n> alter role alice noinherit;\n> \n> \\c - alice\n> show data_directory;\n> ERROR:  must be superuser or have privileges of pg_read_all_settings to\n> examine \"data_directory\"\n> \n> In v16 changing INHERIT attribute on alice role doesn't change inheritance\n> behavior of already granted roles.\n> If we repeat the example, Alice still inherits pg_read_all_settings\n> privileges after disabling the INHERIT attribute for the role.\n> \n> Information for making decisions about role inheritance has been moved from\n> the role attribute to GRANT role TO role [WITH INHERIT|NOINHERIT] command\n> and can be viewed by the new \\drg command:\n> \n> \\drg\n>                     List of role grants\n>  Role name |      Member of       |   Options    | Grantor\n> -----------+----------------------+--------------+----------\n>  alice     | pg_read_all_settings | INHERIT, SET | postgres\n> (1 row)\n> \n> Changing the INHERIT attribute for a role now will affect (as the default\n> value) only future GRANT commands without an INHERIT clause.\n\nI 
was able to create this simple example to illustrate it:\n\n\tCREATE ROLE a1;\n\tCREATE ROLE a2;\n\tCREATE ROLE a3;\n\tCREATE ROLE a4;\n\tCREATE ROLE b INHERIT;\n\n\tGRANT a1 TO b WITH INHERIT TRUE;\n\tGRANT a2 TO b WITH INHERIT FALSE;\n\n\tGRANT a3 TO b;\n\tALTER USER b NOINHERIT;\n\tGRANT a4 TO b;\n\n\t\\drg\n\t List of role grants\n\t Role name | Member of | Options | Grantor\n\t-----------+-----------+--------------+----------\n\t b | a1 | INHERIT, SET | postgres\n\t b | a2 | SET | postgres\n\t b | a3 | INHERIT, SET | postgres\n\t b | a4 | SET | postgres\n\nI will work on the relase notes adjustments for this and reply in a few\ndays.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 19 Aug 2023 12:59:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sat, Aug 19, 2023 at 12:59:47PM -0400, Bruce Momjian wrote:\n> On Thu, Aug 17, 2023 at 08:37:28AM +0300, Pavel Luzanov wrote:\n> > I can try to explain how I understand it myself.\n> > \n> > In v15 and early, inheritance of granted to role privileges depends on\n> > INHERIT attribute of a role:\n> > \n> > create user alice;\n> > grant pg_read_all_settings to alice;\n> > \n> > By default privileges inherited:\n> > \\c - alice\n> > show data_directory;\n> >        data_directory\n> > -----------------------------\n> >  /var/lib/postgresql/15/main\n> > (1 row)\n> > \n> > After disabling the INHERIT attribute, privileges are not inherited:\n> > \n> > \\c - postgres\n> > alter role alice noinherit;\n> > \n> > \\c - alice\n> > show data_directory;\n> > ERROR:  must be superuser or have privileges of pg_read_all_settings to\n> > examine \"data_directory\"\n> > \n> > In v16 changing INHERIT attribute on alice role doesn't change inheritance\n> > behavior of already granted roles.\n> > If we repeat the 
example, Alice still inherits pg_read_all_settings\n> > privileges after disabling the INHERIT attribute for the role.\n> > \n> > Information for making decisions about role inheritance has been moved from\n> > the role attribute to GRANT role TO role [WITH INHERIT|NOINHERIT] command\n> > and can be viewed by the new \\drg command:\n> > \n> > \\drg\n> >                     List of role grants\n> >  Role name |      Member of       |   Options    | Grantor\n> > -----------+----------------------+--------------+----------\n> >  alice     | pg_read_all_settings | INHERIT, SET | postgres\n> > (1 row)\n> > \n> > Changing the INHERIT attribute for a role now will affect (as the default\n> > value) only future GRANT commands without an INHERIT clause.\n> \n> I was able to create this simple example to illustrate it:\n> \n> \tCREATE ROLE a1;\n> \tCREATE ROLE a2;\n> \tCREATE ROLE a3;\n> \tCREATE ROLE a4;\n> \tCREATE ROLE b INHERIT;\n> \n> \tGRANT a1 TO b WITH INHERIT TRUE;\n> \tGRANT a2 TO b WITH INHERIT FALSE;\n> \n> \tGRANT a3 TO b;\n> \tALTER USER b NOINHERIT;\n> \tGRANT a4 TO b;\n> \n> \t\\drg\n> \t List of role grants\n> \t Role name | Member of | Options | Grantor\n> \t-----------+-----------+--------------+----------\n> \t b | a1 | INHERIT, SET | postgres\n> \t b | a2 | SET | postgres\n> \t b | a3 | INHERIT, SET | postgres\n> \t b | a4 | SET | postgres\n> \n> I will work on the relase notes adjustments for this and reply in a few\n> days.\n\nAttached is an applied patch that moves the inherit item into\nincompatibilities. clarifies it, and splits out the ADMIN syntax item.\n\nPlease let me know if I need any other changes. 
Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Mon, 21 Aug 2023 17:58:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Sat, Aug 19, 2023 at 04:24:48AM +0200, Erwin Brandstetter wrote:\n> I posted to pgsql-docs first, but was kindly redirected here by Jonathan:\n> \n> The release notes for Postgres 16 says here:\n> https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-PERFORMANCE\n> \n> Same as here:\n> https://momjian.us/pgsql_docs/release-16.html#RELEASE-16-PERFORMANCE\n> \n>     Allow window functions to use ROWS mode internally when RANGE mode is\n> specified but unnecessary (David Rowley)\n\nYes, I didn't like \"specified\" myself but never returned to improve it. \nI am now using:\n\n\tAllow window functions to use the faster <link\n\tlinkend=\"syntax-window-functions\"><literal>ROWS</literal></link> mode\n\tinternally when <literal>RANGE</literal> mode is active but unnecessary\n\t ------\n\t(David Rowley)\n\nCan that be improved?\n\n> But the improvement (fix to some degree) also applies to the much more common\n> case where no mode has been specified, RANGE unfortunately being the default.\n> That includes the most common use case \"row_number() OVER (ORDER BY col)\",\n> where RANGE mode should not be applied to begin with, according to SQL specs.\n> This is what made me investigate, test and eventually propose a fix in the\n> first place. See:\n\nYes, very true.\n\n> Also, I was hoping to be mentioned in the release note for working this out:\n> \n>     Allow window functions to use the faster ROWS mode internally when RANGE\n> mode is specified or would be default, but unnecessary (David Rowley, Erwin\n> Brandstetter)\n\nUh, I have CC'ed David Rowley because that is unclear based on the\ncommit message. 
I don't normally mention reviewers.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 21 Aug 2023 18:07:54 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, 22 Aug 2023 at 10:08, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sat, Aug 19, 2023 at 04:24:48AM +0200, Erwin Brandstetter wrote:\n> > I posted to pgsql-docs first, but was kindly redirected here by Jonathan:\n> >\n> > The release notes for Postgres 16 says here:\n> > https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-PERFORMANCE\n> >\n> > Same as here:\n> > https://momjian.us/pgsql_docs/release-16.html#RELEASE-16-PERFORMANCE\n> >\n> > Allow window functions to use ROWS mode internally when RANGE mode is\n> > specified but unnecessary (David Rowley)\n>\n> Yes, I didn't like \"specified\" myself but never returned to improve it.\n> I am now using:\n>\n> Allow window functions to use the faster <link\n> linkend=\"syntax-window-functions\"><literal>ROWS</literal></link> mode\n> internally when <literal>RANGE</literal> mode is active but unnecessary\n> ------\n> (David Rowley)\n>\n> Can that be improved?\n\nLooks good to me.\n\n> > Also, I was hoping to be mentioned in the release note for working this out:\n> >\n> > Allow window functions to use the faster ROWS mode internally when RANGE\n> > mode is specified or would be default, but unnecessary (David Rowley, Erwin\n> > Brandstetter)\n>\n> Uh, I have CC'ed David Rowley because that is unclear based on the\n> commit message. I don't normally mention reviewers.\n\nI confirm that Erwin reported in [1] that row_number() is not affected\nby the ROWS/RANGE option and that ROWS performs better due to the\nexecutor having less work to do. 
I am the author of the patch which\nimplemented that plus a few other window functions that also can\nbenefit from the same optimisation. Based on this, I don't see any\nproblems with the credits for this item as they are currently in the\nrelease notes.\n\nDavid\n\n[1] https://postgr.es/m/CAGHENJ7LBBszxS%2BSkWWFVnBmOT2oVsBhDMB1DFrgerCeYa_DyA%40mail.gmail.com\n\n\n", "msg_date": "Tue, 22 Aug 2023 17:36:34 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On 22.08.2023 00:58, Bruce Momjian wrote:\n> Attached is an applied patch that moves the inherit item into\n> incompatibilities. clarifies it, and splits out the ADMIN syntax item.\n\n > The role's default inheritance behavior can be overridden with the \nnew <command>GRANT ... WITH INHERIT</command> clause.\n\nThe only question about \"can be\". Why not \"will be\"? The inheritance \nbehavior will be changed anyway.\n\n> Please let me know if I need any other changes. Thanks.\n\n* Allow psql's access privilege commands to show system objects (Nathan \nBossart, Pavel Luzanov)\n     The options are \\dpS, \\zS, and \\drg.\n\nI think that this description correct only for the \\dpS and \\zS commands.\n(By the way, unfortunately after reverting MAINTAIN privilege this \ncommands are not much useful in v16.)\n\nBut the \\drg command is a different thing. This is a full featured \nreplacement for \"Member of\" column of the \\du, \\dg commands.\nIt shows not only members, but granted options (admin, inherit, set) and \ngrantor.\nThis is important information for membership usage and administration.\nIMO, removing the \"Member of\" column from the \\du & \\dg commands also \nrequires attention in release notes.\n\nSo, I suggest new item in the psql section for \\drg. 
Something like this:\n\n* Add psql \\drg command to display role grants and remove the \"Member \nof\" column from \\du & \\dg altogether.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Tue, 22 Aug 2023 10:02:16 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, Aug 22, 2023 at 10:02:16AM +0300, Pavel Luzanov wrote:\n> On 22.08.2023 00:58, Bruce Momjian wrote:\n> > Attached is an applied patch that moves the inherit item into\n> > incompatibilities. clarifies it, and splits out the ADMIN syntax item.\n> \n> > The role's default inheritance behavior can be overridden with the new\n> <command>GRANT ... WITH INHERIT</command> clause.\n> \n> The only question about \"can be\". Why not \"will be\"? The inheritance\n> behavior will be changed anyway.\n\nI used \"can be\" to highlight you \"can\" override it, but don't need to.\n\n> > Please let me know if I need any other changes. Thanks.\n> \n> * Allow psql's access privilege commands to show system objects (Nathan\n> Bossart, Pavel Luzanov)\n>     The options are \\dpS, \\zS, and \\drg.\n> \n> I think that this description correct only for the \\dpS and \\zS commands.\n> (By the way, unfortunately after reverting MAINTAIN privilege this commands\n> are not much useful in v16.)\n> \n> But the \\drg command is a different thing. This is a full featured\n> replacement for \"Member of\" column of the \\du, \\dg commands.\n> It shows not only members, but granted options (admin, inherit, set) and\n> grantor.\n> This is important information for membership usage and administration.\n> IMO, removing the \"Member of\" column from the \\du & \\dg commands also\n> requires attention in release notes.\n> \n> So, I suggest new item in the psql section for \\drg. 
Something like this:\n> \n> * Add psql \\drg command to display role grants and remove the \"Member of\"\n> column from \\du & \\dg altogether.\n\nI see your point. Attached is an applied patch which fixes this by\nsplitting \\drg into a separate item and adding text.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Tue, 22 Aug 2023 15:14:31 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Thu, 2023-05-18 at 16:49 -0400, Bruce Momjian wrote:\n> I have completed the first draft of the PG 16 release notes.  You can\n> see the output here:\n\n\nhttps://www.postgresql.org/docs/16/release-16.html#RELEASE-16-LOCALIZATION\n\nI notice that this item is still listed:\n\n * Determine the ICU default locale from the environment (Jeff Davis)\n\nBut that was reverted as part of 2535c74b1a.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 22 Aug 2023 13:42:41 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, Aug 22, 2023 at 01:42:41PM -0700, Jeff Davis wrote:\n> On Thu, 2023-05-18 at 16:49 -0400, Bruce Momjian wrote:\n> > I have completed the first draft of the PG 16 release notes.  You can\n> > see the output here:\n> \n> \n> https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-LOCALIZATION\n> \n> I notice that this item is still listed:\n> \n> * Determine the ICU default locale from the environment (Jeff Davis)\n> \n> But that was reverted as part of 2535c74b1a.\n\nThe original commit is:\n\n\tAuthor: Jeff Davis <jdavis@postgresql.org>\n\t2023-03-10 [c45dc7ffb] initdb: derive encoding from locale for ICU; similar to\n\nand I don't see that reverted by 2535c74b1a. 
Is that a problem?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 22 Aug 2023 22:23:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Tue, 2023-08-22 at 22:23 -0400, Bruce Momjian wrote:\n> > I notice that this item is still listed:\n> > \n> >  * Determine the ICU default locale from the environment (Jeff\n> > Davis)\n> > \n> > But that was reverted as part of 2535c74b1a.\n> \n> The original commit is:\n> \n>         Author: Jeff Davis <jdavis@postgresql.org>\n>         2023-03-10 [c45dc7ffb] initdb: derive encoding from locale\n> for ICU; similar to\n> \n> and I don't see that reverted by 2535c74b1a.  Is that a problem?\n\nc45dc7ffb causes initdb to choose the encoding based on the environment\nfor ICU just like libc, and that was not reverted, so in v16:\n\n $ export LANG=en_US\n $ initdb -D data --locale-provider=icu --icu-locale=en\n ...\n The default database encoding has accordingly been set to \"LATIN1\".\n\nWhereas previously in v15 that would cause an error like:\n\n initdb: error: encoding mismatch\n initdb: detail: The encoding you selected (UTF8) and the encoding\nthat the selected locale uses (LATIN1) do not match...\n\n\"Determine the ICU default locale from the environment\" to me refers to\nwhat happened in 27b62377b4, where initdb would select an ICU locale if\none was not provided. 
2535c74b1a reverted that, so in v16:\n\n $ initdb -D data --locale-provider=icu\n initdb: error: ICU locale must be specified\n\nJust like in v15.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 23 Aug 2023 09:36:01 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" }, { "msg_contents": "On Wed, Aug 23, 2023 at 09:36:01AM -0700, Jeff Davis wrote:\n> On Tue, 2023-08-22 at 22:23 -0400, Bruce Momjian wrote:\n> > > I notice that this item is still listed:\n> > > \n> > >  * Determine the ICU default locale from the environment (Jeff\n> > > Davis)\n> > > \n> > > But that was reverted as part of 2535c74b1a.\n> > \n> > The original commit is:\n> > \n> >         Author: Jeff Davis <jdavis@postgresql.org>\n> >         2023-03-10 [c45dc7ffb] initdb: derive encoding from locale\n> > for ICU; similar to\n> > \n> > and I don't see that reverted by 2535c74b1a.  Is that a problem?\n> \n> c45dc7ffb causes initdb to choose the encoding based on the environment\n> for ICU just like libc, and that was not reverted, so in v16:\n> \n> $ export LANG=en_US\n> $ initdb -D data --locale-provider=icu --icu-locale=en\n> ...\n> The default database encoding has accordingly been set to \"LATIN1\".\n> \n> Whereas previously in v15 that would cause an error like:\n> \n> initdb: error: encoding mismatch\n> initdb: detail: The encoding you selected (UTF8) and the encoding\n> that the selected locale uses (LATIN1) do not match...\n> \n> \"Determine the ICU default locale from the environment\" to me refers to\n> what happened in 27b62377b4, where initdb would select an ICU locale if\n> one was not provided. 
2535c74b1a reverted that, so in v16:\n> \n> $ initdb -D data --locale-provider=icu\n> initdb: error: ICU locale must be specified\n> \n> Just like in v15.\n\nOkay, so what I hear you saying is that commit c45dc7ffb needs to remain\nin the release notes, but its description sounds like 27b62377b4, which\nwas reverted, so my description is wrong for c45dc7ffb.\n\nI would love to blame the patch revert on this mistake, but looking at\nthe history of this entry, I just didn't understand it when I initiallly\nwrote it. Updated applied patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Thu, 24 Aug 2023 21:43:17 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: PG 16 draft release notes ready" } ]
[ { "msg_contents": "Hi hackers,\n I found $subject problem when using SQLancer.\n\n How to repeat:\n CREATE TEMP TABLE t0(c0 inet, c1 money, c2 TEXT);\nCREATE TEMP TABLE IF NOT EXISTS t1(LIKE t0); CREATE TEMP TABLE t2(c0\nboolean , c1 DECIMAL NOT NULL UNIQUE); CREATE TEMPORARY TABLE t3(LIKE t1);\nCREATE VIEW v0(c0) AS (SELECT DISTINCT 0 FROM t3); SELECT SUM(count) FROM\n(SELECT\n(((t1.c2)LIKE(((((t0.c2)||(((103556691)-(v0.c0)))))||(v0.c0)))))::INT as\ncount FROM t0, ONLY t1 RIGHT OUTER JOIN ONLY t2 ON t2.c0 RIGHT OUTER JOIN\nv0 ON ((t2.c1)=(0.08182310538090898))) as res;\n\n\npsql (16devel)\nType \"help\" for help.\n\npostgres=# \\d\nDid not find any relations.\npostgres=# CREATE TEMP TABLE t0(c0 inet, c1 money, c2 TEXT);\nCREATE TABLE\npostgres=# CREATE TEMP TABLE IF NOT EXISTS t1(LIKE t0);\nCREATE TABLE\npostgres=# CREATE TEMP TABLE t2(c0 boolean , c1 DECIMAL NOT NULL UNIQUE);\nCREATE TABLE\npostgres=# CREATE TEMPORARY TABLE t3(LIKE t1);\nCREATE TABLE\npostgres=# CREATE VIEW v0(c0) AS (SELECT DISTINCT 0 FROM t3);\nNOTICE: view \"v0\" will be a temporary view\nCREATE VIEW\npostgres=# SELECT SUM(count) FROM (SELECT\n(((t1.c2)LIKE(((((t0.c2)||(((103556691)-(v0.c0)))))||(v0.c0)))))::INT as\ncount FROM t0, ONLY t1 RIGHT OUTER JOIN ONLY t2 ON t2.c0 RIGHT OUTER JOIN\nv0 ON ((t2.c1)=(0.08182310538090898))) as res;\nERROR: wrong varnullingrels (b 5 7) (expected (b)) for Var 3/3\n\nregards, tender wang\n\nHi hackers,   I found $subject problem when using SQLancer.   
How to repeat: CREATE TEMP TABLE t0(c0 inet, c1 money, c2 TEXT); CREATE TEMP TABLE IF NOT EXISTS t1(LIKE t0);\n CREATE TEMP TABLE t2(c0 boolean , c1 DECIMAL NOT NULL UNIQUE);\n CREATE TEMPORARY TABLE t3(LIKE t1);\n CREATE VIEW v0(c0) AS (SELECT DISTINCT 0 FROM t3);\n SELECT SUM(count) FROM (SELECT (((t1.c2)LIKE(((((t0.c2)||(((103556691)-(v0.c0)))))||(v0.c0)))))::INT as count FROM t0, ONLY t1 RIGHT OUTER JOIN ONLY t2 ON t2.c0 RIGHT OUTER JOIN v0 ON ((t2.c1)=(0.08182310538090898))) as res;psql (16devel)Type \"help\" for help.postgres=# \\dDid not find any relations.postgres=# CREATE TEMP TABLE t0(c0 inet, c1 money, c2 TEXT);CREATE TABLEpostgres=# CREATE TEMP TABLE IF NOT EXISTS t1(LIKE t0);CREATE TABLEpostgres=# CREATE TEMP TABLE t2(c0 boolean , c1 DECIMAL  NOT NULL UNIQUE);CREATE TABLEpostgres=# CREATE TEMPORARY TABLE t3(LIKE t1);CREATE TABLEpostgres=# CREATE VIEW v0(c0) AS (SELECT DISTINCT 0 FROM t3);NOTICE:  view \"v0\" will be a temporary viewCREATE VIEWpostgres=# SELECT SUM(count) FROM (SELECT (((t1.c2)LIKE(((((t0.c2)||(((103556691)-(v0.c0)))))||(v0.c0)))))::INT as count FROM t0, ONLY t1 RIGHT OUTER JOIN ONLY t2 ON t2.c0 RIGHT OUTER JOIN v0 ON ((t2.c1)=(0.08182310538090898))) as res;ERROR:  wrong varnullingrels (b 5 7) (expected (b)) for Var 3/3 regards, tender wang", "msg_date": "Fri, 19 May 2023 09:42:35 +0800", "msg_from": "tender wang <tndrwang@gmail.com>", "msg_from_op": true, "msg_subject": "ERROR: wrong varnullingrels (b 5 7) (expected (b)) for Var 3/3" }, { "msg_contents": "On Fri, May 19, 2023 at 09:42:35AM +0800, tender wang wrote:\n> postgres=# \\d\n> Did not find any relations.\n> postgres=# CREATE TEMP TABLE t0(c0 inet, c1 money, c2 TEXT);\n> CREATE TABLE\n> postgres=# CREATE TEMP TABLE IF NOT EXISTS t1(LIKE t0);\n> CREATE TABLE\n> postgres=# CREATE TEMP TABLE t2(c0 boolean , c1 DECIMAL NOT NULL UNIQUE);\n> CREATE TABLE\n> postgres=# CREATE TEMPORARY TABLE t3(LIKE t1);\n> CREATE TABLE\n> postgres=# CREATE VIEW v0(c0) AS (SELECT DISTINCT 0 FROM 
t3);\n> NOTICE: view \"v0\" will be a temporary view\n> CREATE VIEW\n> postgres=# SELECT SUM(count) FROM (SELECT\n> (((t1.c2)LIKE(((((t0.c2)||(((103556691)-(v0.c0)))))||(v0.c0)))))::INT as\n> count FROM t0, ONLY t1 RIGHT OUTER JOIN ONLY t2 ON t2.c0 RIGHT OUTER JOIN\n> v0 ON ((t2.c1)=(0.08182310538090898))) as res;\n> ERROR: wrong varnullingrels (b 5 7) (expected (b)) for Var 3/3\n\nThanks for the test case, issue reproduced here on HEAD and not v15.\nThis causes an assertion failure here:\n#4 0x000055a6f8faa776 in ExceptionalCondition (conditionName=0x55a6f915ac60 \"bms_equal(rel->relids, root->all_query_rels)\", fileName=0x55a6f915ac3d \"allpaths.c\", \n lineNumber=234) at assert.c:66\n#5 0x000055a6f8c55b6d in make_one_rel (root=0x55a6fa814ea8, joinlist=0x55a6fa83f758) at allpaths.c:234\n#6 0x000055a6f8c95c45 in query_planner (root=0x55a6fa814ea8, qp_callback=0x55a6f8c9c373 <standard_qp_callback>, qp_extra=0x7ffc98138570) at planmain.c:278\n#7 0x000055a6f8c9860a in grouping_planner (root=0x55a6fa814ea8, tuple_fraction=0) at planner.c:1493\n\nI am adding an open item.\n--\nMichael", "msg_date": "Fri, 19 May 2023 14:57:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: wrong varnullingrels (b 5 7) (expected (b)) for Var 3/3" }, { "msg_contents": "On Fri, May 19, 2023 at 1:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> Thanks for the test case, issue reproduced here on HEAD and not v15.\n> This causes an assertion failure here:\n> #4 0x000055a6f8faa776 in ExceptionalCondition\n> (conditionName=0x55a6f915ac60 \"bms_equal(rel->relids,\n> root->all_query_rels)\", fileName=0x55a6f915ac3d \"allpaths.c\",\n> lineNumber=234) at assert.c:66\n> #5 0x000055a6f8c55b6d in make_one_rel (root=0x55a6fa814ea8,\n> joinlist=0x55a6fa83f758) at allpaths.c:234\n> #6 0x000055a6f8c95c45 in query_planner (root=0x55a6fa814ea8,\n> qp_callback=0x55a6f8c9c373 <standard_qp_callback>, qp_extra=0x7ffc98138570)\n> at 
planmain.c:278\n> #7 0x000055a6f8c9860a in grouping_planner (root=0x55a6fa814ea8,\n> tuple_fraction=0) at planner.c:1493\n>\n> I am adding an open item.\n\n\nThanks for testing. I looked into it and figured out that it is the\nsame issue discussed at the end of this thread:\nhttps://www.postgresql.org/message-id/CAMbWs4-EU9uBGSP7G-iTwLBhRQ%3DrnZKvFDhD%2Bn%2BxhajokyPCKg%40mail.gmail.com\n\nThanks\nRichard\n\nOn Fri, May 19, 2023 at 1:57 PM Michael Paquier <michael@paquier.xyz> wrote:\nThanks for the test case, issue reproduced here on HEAD and not v15.\nThis causes an assertion failure here:\n#4  0x000055a6f8faa776 in ExceptionalCondition (conditionName=0x55a6f915ac60 \"bms_equal(rel->relids, root->all_query_rels)\", fileName=0x55a6f915ac3d \"allpaths.c\", \n    lineNumber=234) at assert.c:66\n#5  0x000055a6f8c55b6d in make_one_rel (root=0x55a6fa814ea8, joinlist=0x55a6fa83f758) at allpaths.c:234\n#6  0x000055a6f8c95c45 in query_planner (root=0x55a6fa814ea8, qp_callback=0x55a6f8c9c373 <standard_qp_callback>, qp_extra=0x7ffc98138570) at planmain.c:278\n#7  0x000055a6f8c9860a in grouping_planner (root=0x55a6fa814ea8, tuple_fraction=0) at planner.c:1493\n\nI am adding an open item.Thanks for testing.  I looked into it and figured out that it is thesame issue discussed at the end of this thread:https://www.postgresql.org/message-id/CAMbWs4-EU9uBGSP7G-iTwLBhRQ%3DrnZKvFDhD%2Bn%2BxhajokyPCKg%40mail.gmail.comThanksRichard", "msg_date": "Fri, 19 May 2023 15:46:26 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ERROR: wrong varnullingrels (b 5 7) (expected (b)) for Var 3/3" }, { "msg_contents": "On Fri, May 19, 2023 at 03:46:26PM +0800, Richard Guo wrote:\n> Thanks for testing. 
I looked into it and figured out that it is the\n> same issue discussed at the end of this thread:\n> https://www.postgresql.org/message-id/CAMbWs4-EU9uBGSP7G-iTwLBhRQ%3DrnZKvFDhD%2Bn%2BxhajokyPCKg%40mail.gmail.com\n\nOK, thanks for confirming.\n--\nMichael", "msg_date": "Mon, 22 May 2023 09:19:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: wrong varnullingrels (b 5 7) (expected (b)) for Var 3/3" } ]
[ { "msg_contents": "Hi,\r\n\r\nAttached is a draft of the release announcement for PostgreSQL 16 Beta \r\n1. The goal of this announcement is to get people excited about testing \r\nthe beta and highlight many of the new features.\r\n\r\nPlease review for inaccuracies, omissions, and other suggestions / errors.\r\n\r\nPlease provide feedback no later than May 24, 0:00 AoE. This will give \r\nme enough time to incorporate the changes prior to the release the next day.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Fri, 19 May 2023 00:17:50 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "Op 5/19/23 om 06:17 schreef Jonathan S. Katz:\n> Hi,\n> \n> Attached is a draft of the release announcement for PostgreSQL 16 Beta \n\nHi,\n\n\nThe usual small fry:\n\n\n'continues to to' should be\n'continues to'\n\n'continues to give users to the ability' should be\n'continues to give users the ability to'\n\n'pg_createsubscription' should be\n'pg_create_subscription'\n\n'starting with release' should be\n'starting with this release'\n\n'credentials to connected to other services' should be\n'credentials to connect to other services'\n\n\nThanks,\n\nErik\n\n\n\n", "msg_date": "Fri, 19 May 2023 07:42:37 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "On Fri, May 19, 2023 at 12:17:50AM -0400, Jonathan S. 
Katz wrote:\n> Attached is a draft of the release announcement for PostgreSQL 16 Beta 1.\n> The goal of this announcement is to get people excited about testing the\n> beta and highlight many of the new features.\n\nThanks!\n\n> PostgreSQL 16 continues to give users to the ability grant privileged access to\n> features without requiring superuser with new\n> [predefined roles](https://www.postgresql.org/docs/devel/predefined-roles.html).\n> These include `pg_maintain`, which enables execution of operations such as\n> `VACUUM`, `ANALYZE`, `REINDEX`, and others, and `pg_createsubscription`, which\n> allows users to create a logical replication subscription. Additionally,\n> starting with release, logical replication subscribers execute transactions on a\n> table as the table owner, not the superuser.\n\n[pg_use_]reserved_connections might also deserve a mention here. AFAICT\nit's the only new predefined role that isn't mentioned in the announcement.\nI'm okay with leaving it out if folks don't think it should make the cut.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 May 2023 07:57:02 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "On 5/19/23 1:42 AM, Erik Rijkers wrote:\r\n> Op 5/19/23 om 06:17 schreef Jonathan S. 
Katz:\r\n>> Hi,\r\n>>\r\n>> Attached is a draft of the release announcement for PostgreSQL 16 Beta \r\n> \r\n> Hi,\r\n> \r\n> \r\n> The usual small fry:\r\n> \r\n> \r\n> 'continues to to'  should be\r\n> 'continues to'\r\n> \r\n> 'continues to give users to the ability'  should be\r\n> 'continues to give users the ability to'\r\n> \r\n> 'pg_createsubscription'  should be\r\n> 'pg_create_subscription'\r\n> \r\n> 'starting with release'  should be\r\n> 'starting with this release'\r\n> \r\n> 'credentials to connected to other services'  should be\r\n> 'credentials to connect to other services'\r\n\r\nThanks Erik. I made all of these changes and will upload them in the \r\nnext review.\r\n\r\nJonathan", "msg_date": "Sun, 21 May 2023 12:50:21 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "On 5/19/23 10:57 AM, Nathan Bossart wrote:\r\n> On Fri, May 19, 2023 at 12:17:50AM -0400, Jonathan S. Katz wrote:\r\n\r\n>> PostgreSQL 16 continues to give users to the ability grant privileged access to\r\n>> features without requiring superuser with new\r\n>> [predefined roles](https://www.postgresql.org/docs/devel/predefined-roles.html).\r\n>> These include `pg_maintain`, which enables execution of operations such as\r\n>> `VACUUM`, `ANALYZE`, `REINDEX`, and others, and `pg_createsubscription`, which\r\n>> allows users to create a logical replication subscription. Additionally,\r\n>> starting with release, logical replication subscribers execute transactions on a\r\n>> table as the table owner, not the superuser.\r\n> \r\n> [pg_use_]reserved_connections might also deserve a mention here. AFAICT\r\n> it's the only new predefined role that isn't mentioned in the announcement.\r\n> I'm okay with leaving it out if folks don't think it should make the cut.\r\n\r\nI'm not sure how widely used this one would be, so I left it out. 
\r\nHowever, open to other opinions.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Sun, 21 May 2023 12:51:05 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "On 5/19/23 12:17 AM, Jonathan S. Katz wrote:\r\n> Hi,\r\n> \r\n> Attached is a draft of the release announcement for PostgreSQL 16 Beta \r\n> 1. The goal of this announcement is to get people excited about testing \r\n> the beta and highlight many of the new features.\r\n> \r\n> Please review for inaccuracies, omissions, and other suggestions / errors.\r\n> \r\n> Please provide feedback no later than May 24, 0:00 AoE. This will give \r\n> me enough time to incorporate the changes prior to the release the next \r\n> day.\r\n\r\nThanks everyone for your feedback. Here is the updated text that \r\ncombines all of the feedback from both -advocacy and -hackers.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Sun, 21 May 2023 13:07:38 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "On Sun, May 21, 2023 at 12:51:05PM -0400, Jonathan S. Katz wrote:\n> On 5/19/23 10:57 AM, Nathan Bossart wrote:\n>> [pg_use_]reserved_connections might also deserve a mention here. AFAICT\n>> it's the only new predefined role that isn't mentioned in the announcement.\n>> I'm okay with leaving it out if folks don't think it should make the cut.\n> \n> I'm not sure how widely used this one would be, so I left it out. 
However,\n> open to other opinions.\n\nFair enough.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 21 May 2023 10:12:20 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "Op 5/21/23 om 19:07 schreef Jonathan S. Katz:\n> On 5/19/23 12:17 AM, Jonathan S. Katz wrote:\n>> Hi,\n>>\n>> Attached is a draft of the release announcement for PostgreSQL 16 Beta \n>> Please provide feedback no later than May 24, 0:00 AoE. This will give \n> Thanks everyone for your feedback. Here is the updated text that \n\n'substransaction' should be\n'subtransaction'\n\n'use thousands separators' perhaps is better:\n'use underscore as digit-separator, as in `5_432` and `1_00_000`'\n\n'instrcut' should be\n'instruct'\n\n\nErik\n\n\n", "msg_date": "Mon, 22 May 2023 21:23:59 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "On 5/22/23 3:23 PM, Erik Rijkers wrote:\r\n> Op 5/21/23 om 19:07 schreef Jonathan S. Katz:\r\n>> On 5/19/23 12:17 AM, Jonathan S. Katz wrote:\r\n>>> Hi,\r\n>>>\r\n>>> Attached is a draft of the release announcement for PostgreSQL 16 \r\n>>> Beta Please provide feedback no later than May 24, 0:00 AoE. This \r\n>>> will give \r\n>> Thanks everyone for your feedback. Here is the updated text that \r\n> \r\n> 'substransaction'  should be\r\n> 'subtransaction'\r\n\r\nFixed.\r\n\r\n> 'use thousands separators'  perhaps is better:\r\n> 'use underscore as digit-separator, as in `5_432` and `1_00_000`'\r\n\r\nI looked at how other languages document this, and they do use the term \r\n\"thousands separators.\" I left that in, but explicitly called out the \r\nunderscore.\r\n\r\n> 'instrcut'  should be\r\n> 'instruct'\r\n\r\nFixed. 
Attached is the (hopefully) final draft.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 24 May 2023 13:06:30 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "Hi,\n\nOn 2023-05-24 13:06:30 -0400, Jonathan S. Katz wrote:\n> PostgreSQL 16 Feature Highlights\n> --------------------------------\n> \n> ### Performance\n> \n> PostgreSQL 16 includes performance improvements in query execution. This release\n> adds more query parallelism, including allowing `FULL` and `RIGHT` joins to\n> execute in parallel, and parallel execution of the `string_agg` and `array_agg`\n> aggregate functions. Additionally, PostgreSQL 16 can use incremental sorts in\n> `SELECT DISTINCT` queries. There are also several optimizations for\n> [window queries](https://www.postgresql.org/docs/16/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS),\n> improvements in lookups for `RANGE` and `LIST` partitions, and support for\n> \"anti-joins\" in `RIGHT` and `OUTER` queries.\n> \n> This release also introduces support for CPU acceleration using SIMD for both\n> x86 and ARM architectures, including optimizations for processing ASCII and JSON\n> strings, and array and subtransaction searches. Additionally, PostgreSQL 16\n> introduces [load balancing](https://www.postgresql.org/docs/16/libpq-connect.html#LIBPQ-CONNECT-LOAD-BALANCE-HOSTS)\n> to libpq, the client library for PostgreSQL.\n\nI think the relation extension improvements ought to be mentioned here as\nwell? 
Up to 3x faster concurrent data load with COPY seems practically\nrelevant.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 May 2023 14:28:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "On 5/24/23 5:28 PM, Andres Freund wrote:\r\n>\r\n> I think the relation extension improvements ought to be mentioned here as\r\n> well? Up to 3x faster concurrent data load with COPY seems practically\r\n> relevant.\r\n\r\nI missed that -- not sure I'm finding it in the release notes with a \r\nquick grep -- which commit/thread is this?\r\n\r\nBut yes this does sound like something that should be included, I just \r\nwant to read upon it.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 24 May 2023 19:57:39 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "Hi,\n\nOn 2023-05-24 19:57:39 -0400, Jonathan S. Katz wrote:\n> On 5/24/23 5:28 PM, Andres Freund wrote:\n> >\n> > I think the relation extension improvements ought to be mentioned here as\n> > well? 
Up to 3x faster concurrent data load with COPY seems practically\n> > relevant.\n>\n> I missed that -- not sure I'm finding it in the release notes with a quick\n> grep -- which commit/thread is this?\n\nIt was split over quite a few commits, the one improving COPY most\nsignificantly is\n\ncommit 00d1e02be24987180115e371abaeb84738257ae2\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2023-04-06 16:35:21 -0700\n\n hio: Use ExtendBufferedRelBy() to extend tables more efficiently\n\nRelevant thread: https://postgr.es/m/20221029025420.eplyow6k7tgu6he3@awork3.anarazel.de\n\nIt's in the release notes as:\n Allow more efficient addition of heap and index pages (Andres Freund)\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 May 2023 17:04:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "On 5/24/23 8:04 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2023-05-24 19:57:39 -0400, Jonathan S. Katz wrote:\r\n>> On 5/24/23 5:28 PM, Andres Freund wrote:\r\n>>>\r\n>>> I think the relation extension improvements ought to be mentioned here as\r\n>>> well? Up to 3x faster concurrent data load with COPY seems practically\r\n>>> relevant.\r\n>>\r\n>> I missed that -- not sure I'm finding it in the release notes with a quick\r\n>> grep -- which commit/thread is this?\r\n> \r\n> It was split over quite a few commits, the one improving COPY most\r\n> significantly is\r\n> \r\n> commit 00d1e02be24987180115e371abaeb84738257ae2\r\n> Author: Andres Freund <andres@anarazel.de>\r\n> Date: 2023-04-06 16:35:21 -0700\r\n> \r\n> hio: Use ExtendBufferedRelBy() to extend tables more efficiently\r\n> \r\n> Relevant thread: https://postgr.es/m/20221029025420.eplyow6k7tgu6he3@awork3.anarazel.de\r\n> \r\n> It's in the release notes as:\r\n> Allow more efficient addition of heap and index pages (Andres Freund)\r\n\r\nAh, OK, that's why I didn't grok it. 
I read through the first message \r\nin[1] and definitely agree it should be in the announcement. How about:\r\n\r\n\"PostgreSQL 16 also shows up to a 300% improvement when concurrently \r\nloading data with `COPY`\"\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://postgr.es/m/20221029025420.eplyow6k7tgu6he3@awork3.anarazel.de", "msg_date": "Wed, 24 May 2023 21:20:30 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "On 5/24/23 9:20 PM, Jonathan S. Katz wrote:\r\n> On 5/24/23 8:04 PM, Andres Freund wrote:\r\n>> Hi,\r\n>>\r\n>> On 2023-05-24 19:57:39 -0400, Jonathan S. Katz wrote:\r\n>>> On 5/24/23 5:28 PM, Andres Freund wrote:\r\n>>>>\r\n>>>> I think the relation extension improvements ought to be mentioned \r\n>>>> here as\r\n>>>> well? Up to 3x faster concurrent data load with COPY seems practically\r\n>>>> relevant.\r\n>>>\r\n>>> I missed that -- not sure I'm finding it in the release notes with a \r\n>>> quick\r\n>>> grep -- which commit/thread is this?\r\n>>\r\n>> It was split over quite a few commits, the one improving COPY most\r\n>> significantly is\r\n>>\r\n>> commit 00d1e02be24987180115e371abaeb84738257ae2\r\n>> Author: Andres Freund <andres@anarazel.de>\r\n>> Date:   2023-04-06 16:35:21 -0700\r\n>>\r\n>>      hio: Use ExtendBufferedRelBy() to extend tables more efficiently\r\n>>\r\n>> Relevant thread: \r\n>> https://postgr.es/m/20221029025420.eplyow6k7tgu6he3@awork3.anarazel.de\r\n>>\r\n>> It's in the release notes as:\r\n>>    Allow more efficient addition of heap and index pages (Andres Freund)\r\n> \r\n> Ah, OK, that's why I didn't grok it. I read through the first message \r\n> in[1] and definitely agree it should be in the announcement. 
How about:\r\n> \r\n> \"PostgreSQL 16 also shows up to a 300% improvement when concurrently \r\n> loading data with `COPY`\"\r\n\r\nI currently have it as the below in the release announcement. If it you \r\nsend any suggested updates, I can try to put them in before release:\r\n\r\nPostgreSQL 16 can also improve the performance of concurrent bulk \r\nloading of data using \r\n[`COPY`](https://www.postgresql.org/docs/16/sql-copy.html) up to a 300%.\r\n\r\nJonathan", "msg_date": "Wed, 24 May 2023 23:30:58 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "On 5/24/23 11:30 PM, Jonathan S. Katz wrote:\r\n> On 5/24/23 9:20 PM, Jonathan S. Katz wrote:\r\n\r\n> I currently have it as the below in the release announcement. If it you \r\n> send any suggested updates, I can try to put them in before release:\r\n> \r\n> PostgreSQL 16 can also improve the performance of concurrent bulk \r\n> loading of data using \r\n> [`COPY`](https://www.postgresql.org/docs/16/sql-copy.html) up to a 300%.\r\n\r\n(without the \"a 300%\" typo).\r\n\r\nJonathan", "msg_date": "Wed, 24 May 2023 23:32:02 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "Hi,\n\nOn 2023-05-24 23:30:58 -0400, Jonathan S. Katz wrote:\n> > Ah, OK, that's why I didn't grok it. I read through the first message\n> > in[1] and definitely agree it should be in the announcement. How about:\n> > \n> > \"PostgreSQL 16 also shows up to a 300% improvement when concurrently\n> > loading data with `COPY`\"\n> \n> I currently have it as the below in the release announcement. 
If it you send\n> any suggested updates, I can try to put them in before release:\n> \n> PostgreSQL 16 can also improve the performance of concurrent bulk loading of\n> data using [`COPY`](https://www.postgresql.org/docs/16/sql-copy.html) up to\n> a 300%.\n\nIt also speeds up concurrent loading when not using COPY, just to a lesser\ndegree. But I can't come up with a concise phrasing for that right now...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 May 2023 21:16:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" }, { "msg_contents": "On 5/25/23 12:16 AM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2023-05-24 23:30:58 -0400, Jonathan S. Katz wrote:\r\n>>> Ah, OK, that's why I didn't grok it. I read through the first message\r\n>>> in[1] and definitely agree it should be in the announcement. How about:\r\n>>>\r\n>>> \"PostgreSQL 16 also shows up to a 300% improvement when concurrently\r\n>>> loading data with `COPY`\"\r\n>>\r\n>> I currently have it as the below in the release announcement. If it you send\r\n>> any suggested updates, I can try to put them in before release:\r\n>>\r\n>> PostgreSQL 16 can also improve the performance of concurrent bulk loading of\r\n>> data using [`COPY`](https://www.postgresql.org/docs/16/sql-copy.html) up to\r\n>> a 300%.\r\n> \r\n> It also speeds up concurrent loading when not using COPY, just to a lesser\r\n> degree. But I can't come up with a concise phrasing for that right now...\r\n\r\nI left as is (in part because of a hurried morning), but we can improve \r\nupon it for the GA.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Thu, 25 May 2023 10:40:16 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 16 Beta 1 release announcement draft" } ]
[ { "msg_contents": "I observed a missing end bracket in E 1.3.11:\n\n\nRequire Windows 10 or newer versions (Michael Paquier, Juan José Santamaría Flecha\n\n\nHans Buschmann\n\n\n\n\n\n\n\n\nI observed a missing end bracket in E 1.3.11:\n\n\nRequire Windows 10 or newer versions (Michael Paquier, Juan José Santamaría Flecha\n\n\n\nHans Buschmann", "msg_date": "Fri, 19 May 2023 07:42:12 +0000", "msg_from": "Hans Buschmann <buschmann@nidsa.net>", "msg_from_op": true, "msg_subject": "PG 16 draft release notes ready" }, { "msg_contents": "On Fri, May 19, 2023 at 07:42:12AM +0000, Hans Buschmann wrote:\n> I observed a missing end bracket in E 1.3.11:\n> \n> \n> Require Windows 10 or newer versions (Michael Paquier, Juan José Santamaría\n> Flecha\n\nThanks, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 19 May 2023 08:05:36 -0400", "msg_from": "\"bruce@momjian.us\" <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: PG 16 draft release notes ready" } ]
[ { "msg_contents": "The thread \"Should CSV parsing be stricter about mid-field quotes?\" [1] forked\ninto a new topic, with two new ideas, hence this new thread.\n\n1. COPY ... QUOTE NONE\n\nIn the [1] thread, Andrew Dunstan suggested a trick on how to deal with\nunquoted but delimited files, such as TSV-files produced by Google Sheets:\n\n> You can use CSV mode pretty reliably for TSV files.\n> The trick is to use a quoting char that shouldn't appear,\n> such as E'\\x01' as well as setting the delimiter to E'\\t'.\n> Yes, it's far from obvious.\n\nWould it be an improvement to allow specifying `QUOTE NONE` instead?\n\nquotes.tsv:\nid quote\n1 \"E = mc^2\" -- Albert Einstein\n\nCOPY quotes FROM '/tmp/quotes.tsv' WITH CSV HEADER DELIMITER E'\\t' QUOTE NONE;\n\nSELECT * FROM quotes;\nid | quote\n----+-------------------------------\n 1 | \"E = mc^2\" -- Albert Einstein\n(1 row)\n\n2. COPY ... DELIMITER NONE\n\nThis is meant to improve the use-case when wanting to import e.g. an\nunstructured log file line-by-line into a single column.\n\nThe current trick I've been using is similar to the first one,\nthat is, to specify a non-existing delimiter. But that involves having to find\nsome non-existing byte, which is error-prone since future log files might\nsuddenly start to contain it. 
So I think it would be better to be to be explicit\nabout not wanting to delimit fields at all, treating the entire whole line as a column.\n\nExample:\n\n% cat /tmp/apache.log\n192.168.1.1 - - [19/May/2023:09:54:17 -0700] \"GET /index.html HTTP/1.1\" 200 431 \"http://www.example.com/home.html\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3\"\n192.168.1.2 - - [19/May/2023:09:55:12 -0700] \"POST /form.php HTTP/1.1\" 200 512 \"http://www.example.com/form.html\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3\"\n\nCREATE TABLE unstructured_log (whole_line text);\nCOPY unstructured_log FROM '/tmp/apache.log' WITH CSV DELIMITER NONE QUOTE NONE;\nSELECT * FROM unstructured_log;\n whole_line\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n192.168.1.1 - - [19/May/2023:09:54:17 -0700] \"GET /index.html HTTP/1.1\" 200 431 \"http://www.example.com/home.html\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3\"\n192.168.1.2 - - [19/May/2023:09:55:12 -0700] \"POST /form.php HTTP/1.1\" 200 512 \"http://www.example.com/form.html\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3\"\n(2 rows)\n\nI hacked together a broken patch just to demonstrate the idea on syntax\nand basic idea. The `COPY ... FROM` examples above works.\nBut it doesn't work at all for `COPY ... 
TO`, since it output \\0 byte as\ndelimiter and quote in the output, which is of course not what we want.\n\nJust wanted some feedback to see if there is any broader interest in this,\nbefore proceeding and looking into how to implement it properly.\n\nIs this something we want or are there just a few of us who have needed this in the past?\n\n/Joel\n\n[1] https://www.postgresql.org/message-id/31c81233-d707-0d2a-8111-a915f463459b%40dunslane.net", "msg_date": "Fri, 19 May 2023 11:24:27 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "New COPY options: DELIMITER NONE and QUOTE NONE" }, { "msg_contents": "On 2023-05-19 Fr 05:24, Joel Jacobson wrote:\n>\n> I hacked together a broken patch just to demonstrate the idea on syntax\n> and basic idea. The `COPY ... FROM` examples above works.\n> But it doesn't work at all for `COPY ... TO`, since it output \\0 byte as\n> delimiter and quote in the output, which is of course not what we want.\n>\n>\n\nI think you've been a bit too cute with the grammar changes, but as you \nsay this is a POC.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Fri, 19 May 2023 13:03:14 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: New COPY options: DELIMITER NONE and QUOTE NONE" }, { "msg_contents": "On Fri, May 19, 2023, at 19:03, Andrew Dunstan wrote:\n> I think you've been a bit too cute with the grammar changes, but as you say this is a POC.\n\nThanks for feedback.\n\nThe approach I took for the new grammar rules was inspired by previous commits,\nsuch as de7531a971b, which introduced support for 'FORCE QUOTE '*''. In that\ncase, a new separate grammar rule was crafted.\n\nNot sure what you mean with it being \"too cute\", but maybe you think it's a bit\nverbose with another grammar rule and it would be better to integrate it into\nthe existing one?\n\nExample:\n\n| DELIMITER opt_as (Sconst | NONE)\n        {\n                if ($3 == NONE)\n                        $$ = makeDefElem(\"delimiter\", (Node *) makeString(\"\\0\"), @1);\n                else\n                        $$ = makeDefElem(\"delimiter\", (Node *) makeString($3), @1);\n        }\n\n/Joel", "msg_date": "Sat, 20 May 2023 08:59:46 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: New COPY options: DELIMITER NONE and QUOTE NONE" }, { "msg_contents": "On 2023-05-20 Sa 02:59, Joel Jacobson wrote:\n> On Fri, May 19, 2023, at 19:03, Andrew Dunstan wrote:\n> > I think you've been a bit too cute with the grammar changes, but as \n> you say this is a POC.\n>\n> Thanks for feedback.\n>\n> The approach I took for the new grammar rules was inspired by previous \n> commits,\n> such as de7531a971b, which introduced support for 'FORCE QUOTE '*''. \n> In that\n> case, a new separate grammar rule was crafted.\n>\n> Not sure what you mean with it being \"too cute\", but maybe you think \n> it's a bit\n> verbose with another grammar rule and it would be better to integrate \n> it into\n> the existing one?\n>\n> Example:\n>\n> | DELIMITER opt_as (Sconst | NONE)\n>         {\n>                 if ($3 == NONE)\n>                         $$ = makeDefElem(\"delimiter\", (Node *) \n> makeString(\"\\0\"), @1);\n>                 else\n>                         $$ = makeDefElem(\"delimiter\", (Node *) \n> makeString($3), @1);\n>         }\n>\n>\n\n\nI would probably go for something like this for \"DELIMITER NONE\" in a \nseparate rule:\n\n  | DELIMITER NONE\n     {\n\n        $$ = makeDefElem(\"delimiter_none\", (Node *)makeInteger(true), @1);\n\n     }\n\nand deal with that element further down the stack. 
It looks to me at \nfirst glance that your changes would allow \"DELIMITER ''\" which is \nprobably not what we want.\n\nSimilarly for \"QUOTE NONE\".\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Sat, 20 May 2023 10:01:44 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: New COPY options: DELIMITER NONE and QUOTE NONE" } ]
[ { "msg_contents": "I am trying to connect with PostgreSQL database from client with SSL\nenabled on server 10.30.32.186 port 6432 using below java code -\n\nI am using certificates ( [server-cert.pem, server-key.pem, ca.cert] and\n[postgresql.crt, postgresql.pk8, root.crt] ).\n\nSuggest me if there are any specific java understandable certificate and\nkey file format.\n\n\n package com.ssl;\n\n import java.sql.Connection;\n import java.sql.DriverManager;\n import java.sql.SQLException;\n\n public class DBConnect {\n\nprivate final String url = \"jdbc:postgresql://\n10.30.32.186:6432/postgres?sslmode=require&sslcert=/root/.postgresql/postgresql.crt&sslkey=/root/.postgresql/postgresql.pk8&sslrootcert=/root/.postgresql/root.crt&sslpassword=postgress\n\";\n\n private final String user = \"postgres\";\n private final String password = \"postgres123\";\n\n /**\n * Connect to the PostgreSQL database\n *\n * @return a Connection object\n */\n public Connection connect() {\n Connection conn = null;\n try {\n conn = DriverManager.getConnection(url, user, password);\n System.out.println(\"Connected to the PostgreSQL server\nsuccessfully.\");\n } catch (SQLException e) {\n System.out.println(e.getMessage());\n }\n\n return conn;\n }\n\npublic static void main(String[] args) {\n\nDBConnect db = new DBConnect();\ndb.connect();\n\n}\n\n }\n\nGives Error -\n\n SSL error: -1\n\n\n\nCode NO 2 -\n\n package SSL_Enablement;\n\n import java.sql.Connection;\n import java.sql.DriverManager;\n import java.sql.SQLException;\n import java.util.Properties;\n\n public class PostgresSSLConnection {\n public static void main(String[] args) {\n Connection conn = null;\n try {\n // Set SSL properties\n Properties props = new Properties();\n props.setProperty(\"user\", \"postgres\");\n props.setProperty(\"password\", \"postgres123\");\n props.setProperty(\"ssl\", \"true\");\n props.setProperty(\"https.protocols\", \"TLSv1.2\");\n props.setProperty(\"sslmode\", \"Verify-CA\");\n 
props.setProperty(\"sslcert\",\n\"/root/.postgresql/server-cert.pem\");\n props.setProperty(\"sslkey\", \"/root/.postgresql/server-key.pem\");\n props.setProperty(\"sslrootcert\", \"/root/.postgresql/ca.cert\");\n\n // Initialize SSL context\n Class.forName(\"org.postgresql.Driver\");\n String url = \"jdbc:postgresql://10.30.32.186:6432/postgres\";\n conn = DriverManager.getConnection(url, props);\n System.out.println(\"Connected DB using SSL\");\n // Use the connection...\n // ...\n\n } catch (SQLException e) {\n e.printStackTrace();\n } catch (ClassNotFoundException e) {\n e.printStackTrace();\n } finally {\n try {\n if (conn != null) {\n conn.close();\n }\n } catch (SQLException e) {\n e.printStackTrace();\n }\n }\n }\n }\n\nGives Error -\n\n org.postgresql.util.PSQLException: Could not read SSL key file\n/root/.postgresql/server-key.pem.\n at org.postgresql.ssl.LazyKeyManager.getPrivateKey(LazyKeyManager.java:284)\n at\nsun.security.ssl.AbstractKeyManagerWrapper.getPrivateKey(SSLContextImpl.java:1552)\n at\nsun.security.ssl.X509Authentication$X509PossessionGenerator.createClientPossession(X509Authentication.java:220)\n at\nsun.security.ssl.X509Authentication$X509PossessionGenerator.createPossession(X509Authentication.java:175)\n at\nsun.security.ssl.X509Authentication.createPossession(X509Authentication.java:88)\n at\nsun.security.ssl.CertificateMessage$T13CertificateProducer.choosePossession(CertificateMessage.java:1080)\n at\nsun.security.ssl.CertificateMessage$T13CertificateProducer.onProduceCertificate(CertificateMessage.java:1101)\n at\nsun.security.ssl.CertificateMessage$T13CertificateProducer.produce(CertificateMessage.java:958)\n at sun.security.ssl.SSLHandshake.produce(SSLHandshake.java:421)\n at\nsun.security.ssl.Finished$T13FinishedConsumer.onConsumeFinished(Finished.java:989)\n at sun.security.ssl.Finished$T13FinishedConsumer.consume(Finished.java:852)\n at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377)\n at 
sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)\n at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:422)\n at sun.security.ssl.TransportContext.dispatch(TransportContext.java:182)\n at sun.security.ssl.SSLTransport.decode(SSLTransport.java:152)\n at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1397)\n at\nsun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1305)\n at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:440)\n at org.postgresql.ssl.MakeSSL.convert(MakeSSL.java:41)\n at\norg.postgresql.core.v3.ConnectionFactoryImpl.enableSSL(ConnectionFactoryImpl.java:584)\n at\norg.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:168)\n at\norg.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:235)\n at\norg.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)\n at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:247)\n at org.postgresql.Driver.makeConnection(Driver.java:434)\n at org.postgresql.Driver.connect(Driver.java:291)\n at java.sql.DriverManager.getConnection(DriverManager.java:664)\n at java.sql.DriverManager.getConnection(DriverManager.java:208)\n at SSL_Enablement.PostgresSSLConnection.main(PostgresSSLConnection.java:26)\n Caused by: java.io.IOException: extra data given to DerValue constructor\n at sun.security.util.DerValue.init(DerValue.java:423)\n at sun.security.util.DerValue.<init>(DerValue.java:306)\n at sun.security.util.DerValue.<init>(DerValue.java:347)\n at sun.security.util.DerValue.wrap(DerValue.java:334)\n at sun.security.util.DerValue.wrap(DerValue.java:319)\n at\njavax.crypto.EncryptedPrivateKeyInfo.<init>(EncryptedPrivateKeyInfo.java:84)\n at org.postgresql.ssl.LazyKeyManager.getPrivateKey(LazyKeyManager.java:236)\n ... 
29 more\n\n\n\nCode NO 3 -\n\n package SSL_Enablement;\n\n import java.sql.Connection;\n import java.sql.DriverManager;\n import java.sql.SQLException;\n import java.util.Properties;\n\n public class PostgresSSLConnection {\n public static void main(String[] args) {\n Connection conn = null;\n try {\n // Set SSL properties\n Properties props = new Properties();\n props.setProperty(\"user\", \"postgres\");\n props.setProperty(\"password\", \"postgres123\");\n props.setProperty(\"ssl\", \"true\");\n props.setProperty(\"https.protocols\", \"TLSv1.2\");\n props.setProperty(\"sslmode\", \"Verify-CA\");\n props.setProperty(\"sslcert\",\n\"/root/.postgresql/postgresql.crt\");\n props.setProperty(\"sslkey\", \"/root/.postgresql/postgresql.pk8\");\n props.setProperty(\"sslrootcert\", \"/root/.postgresql/root.crt\");\n\n // Initialize SSL context\n Class.forName(\"org.postgresql.Driver\");\n String url = \"jdbc:postgresql://10.30.32.186:6432/postgres\";\n conn = DriverManager.getConnection(url, props);\n System.out.println(\"Connected DB using SSL\");\n // Use the connection...\n // ...\n\n } catch (SQLException e) {\n e.printStackTrace();\n } catch (ClassNotFoundException e) {\n e.printStackTrace();\n } finally {\n try {\n if (conn != null) {\n conn.close();\n }\n } catch (SQLException e) {\n e.printStackTrace();\n }\n }\n }\n }\n\nGives Error -\n\n org.postgresql.util.PSQLException: SSL error: -1\n at org.postgresql.ssl.MakeSSL.convert(MakeSSL.java:43)\n at\norg.postgresql.core.v3.ConnectionFactoryImpl.enableSSL(ConnectionFactoryImpl.java:584)\n at\norg.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:168)\n at\norg.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:235)\n at\norg.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)\n at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:247)\n at org.postgresql.Driver.makeConnection(Driver.java:434)\n at 
org.postgresql.Driver.connect(Driver.java:291)\n at java.sql.DriverManager.getConnection(DriverManager.java:664)\n at java.sql.DriverManager.getConnection(DriverManager.java:208)\n at\nSSL_Enablement.PostgresSSLConnection.main(PostgresSSLConnection.java:26)\n Caused by: javax.net.ssl.SSLException: -1\nat sun.security.ssl.Alert.createSSLException(Alert.java:133)\nat sun.security.ssl.TransportContext.fatal(TransportContext.java:331)\nat sun.security.ssl.TransportContext.fatal(TransportContext.java:274)\nat sun.security.ssl.TransportContext.fatal(TransportContext.java:269)\nat sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1568)\nat sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:446)\nat org.postgresql.ssl.MakeSSL.convert(MakeSSL.java:41)\n... 10 more\n Caused by: java.lang.ArrayIndexOutOfBoundsException: -1\nat\norg.postgresql.ssl.LazyKeyManager.chooseClientAlias(LazyKeyManager.java:105)\nat\nsun.security.ssl.AbstractKeyManagerWrapper.chooseClientAlias(SSLContextImpl.java:1531)\nat\nsun.security.ssl.X509Authentication$X509PossessionGenerator.createClientPossession(X509Authentication.java:200)\nat\nsun.security.ssl.X509Authentication$X509PossessionGenerator.createPossession(X509Authentication.java:175)\nat\nsun.security.ssl.X509Authentication.createPossession(X509Authentication.java:88)\nat\nsun.security.ssl.CertificateMessage$T13CertificateProducer.choosePossession(CertificateMessage.java:1080)\nat\nsun.security.ssl.CertificateMessage$T13CertificateProducer.onProduceCertificate(CertificateMessage.java:1101)\nat\nsun.security.ssl.CertificateMessage$T13CertificateProducer.produce(CertificateMessage.java:958)\nat sun.security.ssl.SSLHandshake.produce(SSLHandshake.java:421)\nat\nsun.security.ssl.Finished$T13FinishedConsumer.onConsumeFinished(Finished.java:989)\nat sun.security.ssl.Finished$T13FinishedConsumer.consume(Finished.java:852)\nat sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377)\nat 
sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)\nat sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:422)\nat sun.security.ssl.TransportContext.dispatch(TransportContext.java:182)\nat sun.security.ssl.SSLTransport.decode(SSLTransport.java:152)\nat sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1397)\nat\nsun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1305)\nat sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:440)\n... 
11 more", "msg_date": "Fri, 19 May 2023 16:43:32 +0530", "msg_from": "sujay kadam <sujaykadam02@gmail.com>", "msg_from_op": true, "msg_subject": "How to connect with PostgreSQL Database with SSL using Certificates\n and Key from client Eclipse in Java" }, { "msg_contents": "Hi Sujay,\n\n> I am trying to connect with PostgreSQL database from client with SSL enabled on server 10.30.32.186 port 6432 using below java code -\n\nThis mailing list is dedicated to the PostgreSQL Core development. I\ndon't think you will find many people interested in your question\nand/or familiar with Java.\n\nI think you should address the question to pgsql-general@ mailing list\nor StackOverflow.\n\n(If you believe there is a bug in the DBMS core please provide simpler\nsteps to reproduce, ideally with pgsql utility and maybe bash.)\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 19 May 2023 14:41:40 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: How to connect with PostgreSQL Database with SSL using\n Certificates and Key from client Eclipse in Java" }, { "msg_contents": "Thank you for your response.\n\nOn Fri, 19 May 2023 at 5:11 PM, Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Sujay,\n>\n> > I am trying to connect with PostgreSQL database from client with SSL\n> enabled on server 10.30.32.186 port 6432 using below java code -\n>\n> This mailing list is dedicated to the PostgreSQL Core development. 
I\n> don't think you will find many people interested in your question\n> and/or familiar with Java.\n>\n> I think you should address the question to pgsql-general@ mailing list\n> or StackOverflow.\n>\n> (If you believe there is a bug in the DBMS core please provide simpler\n> steps to reproduce, ideally with pgsql utility and maybe bash.)\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n", "msg_date": "Fri, 19 May 2023 17:12:32 +0530", "msg_from": "sujay kadam <sujaykadam02@gmail.com>", "msg_from_op": true, "msg_subject": "Re: How to connect with PostgreSQL Database with SSL using\n Certificates and Key from client Eclipse in Java" } ]
[ { "msg_contents": "Why is the new PG 16 GUC called \"gss_accept_deleg\" and not\n\"gss_accept_delegation\"? The abbreviation here seems atypical.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 19 May 2023 08:09:54 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Naming of gss_accept_deleg" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> Why is the new PG 16 GUC called \"gss_accept_deleg\" and not\n> \"gss_accept_delegation\"? The abbreviation here seems atypical.\n\nAt the time it felt natural to me but I don't feel strongly about it,\nhappy to change it if folks would prefer it spelled out.\n\nThanks,\n\nStephen", "msg_date": "Fri, 19 May 2023 09:07:26 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "On Fri, May 19, 2023 at 09:07:26AM -0400, Stephen Frost wrote:\n> Greetings,\n> \n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > Why is the new PG 16 GUC called \"gss_accept_deleg\" and not\n> > \"gss_accept_delegation\"? The abbreviation here seems atypical.\n> \n> At the time it felt natural to me but I don't feel strongly about it,\n> happy to change it if folks would prefer it spelled out.\n\nYes, please do spell it out, thanks. 
The fact \"deleg\" looks similar to\n\"debug\" also doesn't help.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 19 May 2023 09:16:09 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "At 2023-05-19 09:16:09 -0400, bruce@momjian.us wrote:\n>\n> On Fri, May 19, 2023 at 09:07:26AM -0400, Stephen Frost wrote:\n> > \n> > > Why is the new PG 16 GUC called \"gss_accept_deleg\" and not\n> > > \"gss_accept_delegation\"? The abbreviation here seems atypical.\n> > \n> > At the time it felt natural to me but I don't feel strongly about it,\n> > happy to change it if folks would prefer it spelled out.\n> \n> Yes, please do spell it out, thanks. The fact \"deleg\" looks similar to\n> \"debug\" also doesn't help.\n\nNote that GSS-API itself calls it the \"DELEG\" flag:\n\n if (conn->gcred != GSS_C_NO_CREDENTIAL)\n gss_flags |= GSS_C_DELEG_FLAG;\n\nI would also prefer a GUC named gss_accept_delegation, but the current\nname matches the libpq gssdeleg connection parameter and the PGSSDELEG\nenvironment variable. Maybe there's something to be said for keeping\nthose three things alike?\n\n-- Abhijit\n\n\n", "msg_date": "Fri, 19 May 2023 19:05:19 +0530", "msg_from": "Abhijit Menon-Sen <ams@toroid.org>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "Abhijit Menon-Sen <ams@toroid.org> writes:\n> I would also prefer a GUC named gss_accept_delegation, but the current\n> name matches the libpq gssdeleg connection parameter and the PGSSDELEG\n> environment variable. Maybe there's something to be said for keeping\n> those three things alike?\n\n+1 for spelling it out in all user-visible names. 
I do not think\nthat that GSS-API C symbol is a good precedent to follow.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 May 2023 09:42:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "On Fri, May 19, 2023 at 09:42:00AM -0400, Tom Lane wrote:\n> Abhijit Menon-Sen <ams@toroid.org> writes:\n> > I would also prefer a GUC named gss_accept_delegation, but the current\n> > name matches the libpq gssdeleg connection parameter and the PGSSDELEG\n> > environment variable. Maybe there's something to be said for keeping\n> > those three things alike?\n> \n> +1 for spelling it out in all user-visible names. I do not think\n> that that GSS-API C symbol is a good precedent to follow.\n\nOnce nice bonus of the release notes is that it allows us to see the new\nAPI in one place to check for consistency.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 19 May 2023 09:44:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "On Fri, May 19, 2023 at 09:42:00AM -0400, Tom Lane wrote:\n> Abhijit Menon-Sen <ams@toroid.org> writes:\n>> I would also prefer a GUC named gss_accept_delegation, but the current\n>> name matches the libpq gssdeleg connection parameter and the PGSSDELEG\n>> environment variable. Maybe there's something to be said for keeping\n>> those three things alike?\n> \n> +1 for spelling it out in all user-visible names. 
I do not think\n> that that GSS-API C symbol is a good precedent to follow.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 May 2023 07:58:34 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "On Fri, May 19, 2023 at 07:58:34AM -0700, Nathan Bossart wrote:\n> On Fri, May 19, 2023 at 09:42:00AM -0400, Tom Lane wrote:\n> > Abhijit Menon-Sen <ams@toroid.org> writes:\n> >> I would also prefer a GUC named gss_accept_delegation, but the current\n> >> name matches the libpq gssdeleg connection parameter and the PGSSDELEG\n> >> environment variable. Maybe there's something to be said for keeping\n> >> those three things alike?\n> > \n> > +1 for spelling it out in all user-visible names. I do not think\n> > that that GSS-API C symbol is a good precedent to follow.\n> \n> +1\n\nWith less then 48 hours to beta 1 packaging, I have made this change and\nadjusted internal variable to match.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 20 May 2023 21:33:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "On Sat, May 20, 2023 at 09:33:44PM -0400, Bruce Momjian wrote:\n> With less then 48 hours to beta 1 packaging, I have made this change and\n> adjusted internal variable to match.\n\nThe buildfarm and cfbot seem unhappy with 9c0a0e2. It looks like there are\na few remaining uses of gss_accept_deleg to rename. 
I'm planning to commit\nthe attached patch shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 20 May 2023 20:17:57 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "On Sat, May 20, 2023 at 08:17:57PM -0700, Nathan Bossart wrote:\n> On Sat, May 20, 2023 at 09:33:44PM -0400, Bruce Momjian wrote:\n> > With less then 48 hours to beta 1 packaging, I have made this change and\n> > adjusted internal variable to match.\n> \n> The buildfarm and cfbot seem unhappy with 9c0a0e2. It looks like there are\n> a few remaining uses of gss_accept_deleg to rename. I'm planning to commit\n> the attached patch shortly.\n\nYes, please do. I saw some matches in the tests but was confused since\nmy tests passed. I now realize I wasn't testing those.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 20 May 2023 23:20:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Sat, May 20, 2023 at 09:33:44PM -0400, Bruce Momjian wrote:\n>> With less then 48 hours to beta 1 packaging, I have made this change and\n>> adjusted internal variable to match.\n\n> The buildfarm and cfbot seem unhappy with 9c0a0e2. It looks like there are\n> a few remaining uses of gss_accept_deleg to rename. I'm planning to commit\n> the attached patch shortly.\n\nI thought the plan was to also rename the libpq \"gssdeleg\" connection\nparameter and so on? 
I can look into that tomorrow, if nobody beats\nme to it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 May 2023 23:21:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "On Sat, May 20, 2023 at 11:21:57PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n> > On Sat, May 20, 2023 at 09:33:44PM -0400, Bruce Momjian wrote:\n> >> With less then 48 hours to beta 1 packaging, I have made this change and\n> >> adjusted internal variable to match.\n> \n> > The buildfarm and cfbot seem unhappy with 9c0a0e2. It looks like there are\n> > a few remaining uses of gss_accept_deleg to rename. I'm planning to commit\n> > the attached patch shortly.\n> \n> I thought the plan was to also rename the libpq \"gssdeleg\" connection\n> parameter and so on? I can look into that tomorrow, if nobody beats\n> me to it.\n\nOh, I didn't consider those. I thought we would leave libpq alone since\nthose are often supplied on the command line and there are existing\nshort-style libpq options, e.g., gssencmode, krbsrvname, sslcrl.\n\nI am fine with such a change though.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 20 May 2023 23:27:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "On Sat, May 20, 2023 at 11:21:57PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> The buildfarm and cfbot seem unhappy with 9c0a0e2. It looks like there are\n>> a few remaining uses of gss_accept_deleg to rename. I'm planning to commit\n>> the attached patch shortly.\n\nDone.\n\n> I thought the plan was to also rename the libpq \"gssdeleg\" connection\n> parameter and so on? 
I can look into that tomorrow, if nobody beats\n> me to it.\n\nPlease do.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 20 May 2023 20:41:22 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "At 2023-05-20 23:21:57 -0400, tgl@sss.pgh.pa.us wrote:\n>\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n> > On Sat, May 20, 2023 at 09:33:44PM -0400, Bruce Momjian wrote:\n> >> With less then 48 hours to beta 1 packaging, I have made this change and\n> >> adjusted internal variable to match.\n> \n> > The buildfarm and cfbot seem unhappy with 9c0a0e2. It looks like there are\n> > a few remaining uses of gss_accept_deleg to rename. I'm planning to commit\n> > the attached patch shortly.\n> \n> I thought the plan was to also rename the libpq \"gssdeleg\" connection\n> parameter and so on? I can look into that tomorrow, if nobody beats\n> me to it.\n\nI was trying the change to see if it would be better to name it\n\"gssdelegate\" instead (as in delegate on one side, and accept the\ndelegation on the other), but decided that \"gssdelegation=enable\"\nreads better than \"gssdelegate=enable\".\n\nHere's the diff.\n\n-- Abhijit", "msg_date": "Sun, 21 May 2023 12:16:04 +0530", "msg_from": "Abhijit Menon-Sen <ams@toroid.org>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "Abhijit Menon-Sen <ams@toroid.org> writes:\n> At 2023-05-20 23:21:57 -0400, tgl@sss.pgh.pa.us wrote:\n>> I thought the plan was to also rename the libpq \"gssdeleg\" connection\n>> parameter and so on? 
I can look into that tomorrow, if nobody beats\n>> me to it.\n\n> I was trying the change to see if it would be better to name it\n> \"gssdelegate\" instead (as in delegate on one side, and accept the\n> delegation on the other), but decided that \"gssdelegation=enable\"\n> reads better than \"gssdelegate=enable\".\n\nYeah, agreed.\n\n> Here's the diff.\n\nThanks for doing that legwork! I found a couple other places where\n\"deleg\" had escaped notice, and changed the lot. Watching the\nbuildfarm now ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 May 2023 10:57:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Abhijit Menon-Sen <ams@toroid.org> writes:\n> > At 2023-05-20 23:21:57 -0400, tgl@sss.pgh.pa.us wrote:\n> >> I thought the plan was to also rename the libpq \"gssdeleg\" connection\n> >> parameter and so on? I can look into that tomorrow, if nobody beats\n> >> me to it.\n> \n> > I was trying the change to see if it would be better to name it\n> > \"gssdelegate\" instead (as in delegate on one side, and accept the\n> > delegation on the other), but decided that \"gssdelegation=enable\"\n> > reads better than \"gssdelegate=enable\".\n> \n> Yeah, agreed.\n> \n> > Here's the diff.\n> \n> Thanks for doing that legwork! I found a couple other places where\n> \"deleg\" had escaped notice, and changed the lot. Watching the\n> buildfarm now ...\n\nThanks all for taking this up over a weekend.\n\nStephen", "msg_date": "Sun, 21 May 2023 11:53:04 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "I noticed that the value that enables this feature at libpq client side\nis 'enable'. 
However, for other Boolean settings like sslsni,\nkeepalives, requiressl, sslcompression, the value that enables feature\nis '1' -- we use strings only for \"enum\" type of settings.\n\nAlso, it looks like connectOptions2() doesn't validate the string value.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n", "msg_date": "Mon, 22 May 2023 11:16:09 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I noticed that the value that enables this feature at libpq client side\n> is 'enable'. However, for other Boolean settings like sslsni,\n> keepalives, requiressl, sslcompression, the value that enables feature\n> is '1' -- we use strings only for \"enum\" type of settings.\n\n> Also, it looks like connectOptions2() doesn't validate the string value.\n\nHmm, it certainly seems like this ought to accept exactly the\nsame inputs as other libpq boolean settings. I can take a look\nunless somebody else is already on it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 May 2023 09:42:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "At 2023-05-22 09:42:44 -0400, tgl@sss.pgh.pa.us wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > I noticed that the value that enables this feature at libpq client side\n> > is 'enable'. 
However, for other Boolean settings like sslsni,\n> > keepalives, requiressl, sslcompression, the value that enables feature\n> > is '1' -- we use strings only for \"enum\" type of settings.\n> \n> > Also, it looks like connectOptions2() doesn't validate the string value.\n> \n> Hmm, it certainly seems like this ought to accept exactly the\n> same inputs as other libpq boolean settings. I can take a look\n> unless somebody else is already on it.\n\nI'm working on it.\n\n-- Abhijit\n\n\n", "msg_date": "Mon, 22 May 2023 19:14:40 +0530", "msg_from": "Abhijit Menon-Sen <ams@toroid.org>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "At 2023-05-22 09:42:44 -0400, tgl@sss.pgh.pa.us wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > I noticed that the value that enables this feature at libpq client side\n> > is 'enable'. However, for other Boolean settings like sslsni,\n> > keepalives, requiressl, sslcompression, the value that enables feature\n> > is '1' -- we use strings only for \"enum\" type of settings.\n> \n> > Also, it looks like connectOptions2() doesn't validate the string value.\n> \n> Hmm, it certainly seems like this ought to accept exactly the\n> same inputs as other libpq boolean settings. 
I can take a look\n> unless somebody else is already on it.\n\nHere's the diff, but the 0/1 values of settings like sslsni and\nsslcompression don't seem to be validated anywhere, unlike the string\noptions in connectOptions2, so I didn't do anything for gssdelegation.\n\n(I've never run the Kerberos tests before, but I changed one\n\"gssdelegation=disable\" to \"gssdelegation=1\" and got a test failure, so\nthey're probably working as expected.)\n\n-- Abhijit", "msg_date": "Mon, 22 May 2023 20:33:34 +0530", "msg_from": "Abhijit Menon-Sen <ams@toroid.org>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "Abhijit Menon-Sen <ams@toroid.org> writes:\n> Here's the diff, but the 0/1 values of settings like sslsni and\n> sslcompression don't seem to be validated anywhere, unlike the string\n> options in connectOptions2, so I didn't do anything for gssdelegation.\n\nThanks! I'll set to work on this. I assume we want to squeeze\nit into beta1, so there's not much time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 May 2023 11:08:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" }, { "msg_contents": "Abhijit Menon-Sen <ams@toroid.org> writes:\n> Here's the diff,\n\nPushed, thanks.\n\n> but the 0/1 values of settings like sslsni and\n> sslcompression don't seem to be validated anywhere, unlike the string\n> options in connectOptions2, so I didn't do anything for gssdelegation.\n\nYeah. 
Perhaps it's worth adding code to validate boolean options,\nbut since nobody has noticed the lack of that for decades, I'm not\nin a hurry to (especially not in a last-minute patch).\n\nAlso, I noticed that PGGSSDELEGATION had not been added to the lists of\nenvironment variables to unset in pg_regress.c and Test/Utils.pm.\nDealt with that in the same commit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 May 2023 11:54:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Naming of gss_accept_deleg" } ]
[ { "msg_contents": "Hi\nI’m looking at config_default.pl file and I can see the line\n\ngss => undef, # --with-gssapi=<path>\n\nI was advised to use SSPI API that is built-in (windows) instead of MIT Kerberos\n\nSo what should I set and where to ensure that result PostgreSQL build will support SSPI?\n\nThanks in advance\n\nDimitry Markman", "msg_date": "Fri, 19 May 2023 15:21:32 +0000", "msg_from": "Dimitry Markman <dmarkman@mathworks.com>", "msg_from_op": true, "msg_subject": "How to ensure that SSPI support (Windows) enabled?" }, { "msg_contents": "Dimitry Markman <dmarkman@mathworks.com> writes:\n> I’m looking at config_default.pl file and I can see the line\n> gss => undef, # --with-gssapi=<path>\n> I was advised to use SSPI API that is built-in (windows) instead of MIT Kerberos\n> So what should I set and where to ensure that result PostgreSQL build will support SSPI?\n\nSSPI != GSS. SSPI support is always built in Windows builds, see\nwin32_port.h:\n\n#define ENABLE_SSPI 1\n\n(Perhaps not the best place for such a thing, but somebody put it there.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 May 2023 11:26:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to ensure that SSPI support (Windows) enabled?" }, { "msg_contents": "Hi Tom,\r\nthanks a lot for your super fast answer 😊. I really appreciate that\r\n\r\nI was asking our 3p library people how to add windows support to gss and they said that on windows we should use SSPI\r\nI’m not really familiar with either gssapi or SSPI\r\n\r\nI see that macOS has builtin support for gssapi, so all I need is to use –with-gssapi\r\nOn linux I use MIT Kerberos that we build in our 3p environment (only linux)\r\nWhen I ask to build MIT Kerberos on windows that’s when I was advised simply to use SSPI\r\n\r\nThanks again\r\n\r\ndm\r\n\r\n\r\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\r\nDate: Friday, May 19, 2023 at 11:26 AM\r\nTo: Dimitry Markman <dmarkman@mathworks.com>\r\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: How to ensure that SSPI support (Windows) enabled?\r\nDimitry Markman <dmarkman@mathworks.com> writes:\r\n> I’m looking at config_default.pl file and I can see the line\r\n> gss => undef, # --with-gssapi=<path>\r\n> I was advised to use SSPI API that is built-in (windows) instead of MIT Kerberos\r\n> So what should I set and where to ensure that result PostgreSQL build will support SSPI?\r\n\r\nSSPI != GSS. SSPI support is always built in Windows builds, see\r\nwin32_port.h:\r\n\r\n#define ENABLE_SSPI 1\r\n\r\n(Perhaps not the best place for such a thing, but somebody put it there.)\r\n\r\n regards, tom lane", "msg_date": "Fri, 19 May 2023 15:33:04 +0000", "msg_from": "Dimitry Markman <dmarkman@mathworks.com>", "msg_from_op": true, "msg_subject": "Re: How to ensure that SSPI support (Windows) enabled?" }, { "msg_contents": "Greetings,\n\nPlease don't top-post.\n\n* Dimitry Markman (dmarkman@mathworks.com) wrote:\n> I was asking our 3p library people how to add windows support to gss and they said that on windows we should use SSPI\n\nThey're correct.\n\n> I’m not really familiar with either gssapi or SSPI\n\nKerberos support is provided through SSPI on Windows. On Linux and Unix\nsystems in general, it's provided through GSSAPI. 
On the wire, the two\nare (mostly) compatible.\n\n> I see that macOS has builtin support for gssapi, so all I need is to use –with-gssapi\n\nOn most Unix-based systems (and certainly for MacOS), you should be\ninstalling MIT Kerberos and using that for your GSSAPI library. The\nGSSAPI library included with MacOS has not been properly maintained by\nApple and is woefully out of date and using it will absolutely cause you\nundue headaches.\n\n> On linux I use MIT Kerberos that we build in our 3p environment (only linux)\n\nYes, MIT Kerberos on Linux makes sense.\n\n> When I ask to build MIT Kerberos on windows that’s when I was advised simply to use SSPI\n\nThat's correct, you should be using SSPI on Windows is the vast majority\nof cases.\n\nThanks,\n\nStephen", "msg_date": "Fri, 19 May 2023 11:54:12 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: How to ensure that SSPI support (Windows) enabled?" }, { "msg_contents": "Thanks Stephen, very useful information\ndm\n\n\nOn 5/19/23, 12:02 PM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\nGreetings,\n\nPlease don't top-post.\n\n* Dimitry Markman (dmarkman@mathworks.com<mailto:dmarkman@mathworks.com>) wrote:\n> I was asking our 3p library people how to add windows support to gss and they said that on windows we should use SSPI\n\nThey're correct.\n\n> I’m not really familiar with either gssapi or SSPI\n\nKerberos support is provided through SSPI on Windows. On Linux and Unix\nsystems in general, it's provided through GSSAPI. On the wire, the two\nare (mostly) compatible.\n\n> I see that macOS has builtin support for gssapi, so all I need is to use –with-gssapi\n\nOn most Unix-based systems (and certainly for MacOS), you should be\ninstalling MIT Kerberos and using that for your GSSAPI library. 
The\nGSSAPI library included with MacOS has not been properly maintained by\nApple and is woefully out of date and using it will absolutely cause you\nundue headaches.\n\n> On linux I use MIT Kerberos that we build in our 3p environment (only linux)\n\nYes, MIT Kerberos on Linux makes sense.\n\n> When I ask to build MIT Kerberos on windows that’s when I was advised simply to use SSPI\n\nThat's correct, you should be using SSPI on Windows is the vast majority\nof cases.\n\nThanks,\n\nStephen", "msg_date": "Fri, 19 May 2023 19:32:05 +0000", "msg_from": "Dimitry Markman <dmarkman@mathworks.com>", "msg_from_op": true, "msg_subject": "Re: How to ensure that SSPI support (Windows) enabled?" } ]
[ { "msg_contents": "Hi,\n\nWhile re-reading 38.10.10. Shared Memory and LWLocks [1] and the\ncorresponding code in pg_stat_statements.c I noticed that there are\nseveral things that can puzzle the reader.\n\nThe documentation and the example suggest that LWLock* should be\nstored within a structure in shared memory:\n\n```\ntypedef struct pgssSharedState\n{\n LWLock *lock;\n /* ... etc ... */\n} pgssSharedState;\n```\n\n... and initialized like this:\n\n```\n LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);\n\n pgss = ShmemInitStruct(\"pg_stat_statements\",\n sizeof(pgssSharedState),\n &found);\n\n if (!found)\n {\n pgss->lock = &(GetNamedLWLockTranche(\"pg_stat_statements\"))->lock;\n /* ... */\n }\n\n /* ... */\n\n LWLockRelease(AddinShmemInitLock);\n```\n\nIt is not clear why placing LWLock* in a local process memory would be a bug.\n\nOn top of that the documentation says:\n\n\"\"\"\nTo avoid possible race-conditions, each backend should use the LWLock\nAddinShmemInitLock when connecting to and initializing its allocation\nof shared memory\n\"\"\"\n\nHowever it's not clear when a race-condition may happen. The rest of\nthe text gives an overall impression that the shmem_startup_hook will\nbe called by postmaster once (unless an extension places several hooks\nin series). Thus there is no real need to ackquire AddinShmemInitLock\nand it should be safe to store LWLock* in local process memory. This\nmemory will be inherited from postmaster by child processes and the\noverall memory usage is going to be the same due to copy-on-write.\n\nPerhaps we should clarify this.\n\nThoughts?\n\n[1]: https://www.postgresql.org/docs/15/xfunc-c.html#XFUNC-SHARED-ADDIN\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 19 May 2023 18:49:11 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "\"38.10.10. 
Shared Memory and LWLocks\" may require a clarification" }, { "msg_contents": "Hi,\n\n> However it's not clear when a race-condition may happen. The rest of\n> the text gives an overall impression that the shmem_startup_hook will\n> be called by postmaster once (unless an extension places several hooks\n> in series). Thus there is no real need to ackquire AddinShmemInitLock\n> and it should be safe to store LWLock* in local process memory. This\n> memory will be inherited from postmaster by child processes and the\n> overall memory usage is going to be the same due to copy-on-write.\n\nI added some logs and comments to my toy extension [1] to demonstrate\nthis. Additionally I added a sleep() call to the shmem_startup_hook to\nmake sure there are no concurrent processes at the moment when the\nhook is called (this change is not committed to the GitHub\nrepository):\n\n```\n@@ -35,6 +35,9 @@ experiment_shmem_request(void)\n RequestNamedLWLockTranche(\"experiment\", 1);\n }\n\n+#include <stdio.h>\n+#include <unistd.h>\n+\n static void\n experiment_shmem_startup(void)\n {\n@@ -43,6 +46,8 @@ experiment_shmem_startup(void)\n elog(LOG, \"experiment_shmem_startup(): pid = %d, postmaster = %d\\n\",\n MyProcPid, !IsUnderPostmaster);\n\n+ sleep(30);\n+\n if(prev_shmem_startup_hook)\n prev_shmem_startup_hook();\n```\n\nIf we do `make && make install && make installcheck` and examine\n.//tmp_check/log/001_basic_main.log we will see:\n\n```\n[6288] LOG: _PG_init(): pid = 6288, postmaster = 1\n[6288] LOG: experiment_shmem_request(): pid = 6288, postmaster = 1\n[6288] LOG: experiment_shmem_startup(): pid = 6288, postmaster = 1\n```\n\nAlso we can make sure that there is only one process running when\nshmem_startup_hook is called.\n\nSo it looks like acquiring AddinShmemInitLock in the hook is redundant\nand also placing LWLock* in local process memory instead of shared\nmemory is safe.\n\nUnless I missed something, I suggest updating the documentation and\npg_stat_statements.c 
accordingly.\n\n[1]: https://github.com/afiskon/postgresql-extensions/blob/main/006-shared-memory/experiment.c#L38\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sun, 21 May 2023 13:10:49 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: \"38.10.10. Shared Memory and LWLocks\" may require a clarification" }, { "msg_contents": "Hi,\n\n> Unless I missed something, I suggest updating the documentation and\n> pg_stat_statements.c accordingly.\n\nSince no one seems to object so far I prepared the patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 22 May 2023 16:19:48 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: \"38.10.10. Shared Memory and LWLocks\" may require a clarification" }, { "msg_contents": "Hi,\n\n> Since no one seems to object so far I prepared the patch.\n\nTurned out patch v1 fails on cfbot on Windows due to extra Assert I added [1]:\n\n```\nabort() has been calledTRAP: failed Assert(\"!IsUnderPostmaster\"),\nFile: \"../src/backend/storage/ipc/ipci.c\", Line: 320, PID: 4040\nabort() has been calledTRAP: failed Assert(\"!IsUnderPostmaster\"),\nFile: \"../src/backend/storage/ipc/ipci.c\", Line: 320, PID: 3484\n```\n\nWhich indicates that currently shmem_startup_hook **can** be called by\nchild processes on Windows. Not 100% sure if this is a desired\nbehavior considering the fact that it is inconsistent with the current\nbehavior on *nix systems.\n\nHere is patch v2. 
Changes comparing to v1:\n\n```\n--- a/src/backend/storage/ipc/ipci.c\n+++ b/src/backend/storage/ipc/ipci.c\n@@ -311,15 +311,8 @@ CreateSharedMemoryAndSemaphores(void)\n /*\n * Now give loadable modules a chance to set up their shmem allocations\n */\n- if (shmem_startup_hook)\n- {\n- /*\n- * The following assert ensures that\nshmem_startup_hook is going to be\n- * called only by the postmaster, as promised in the\ndocumentation.\n- */\n- Assert(!IsUnderPostmaster);\n+ if (!IsUnderPostmaster && shmem_startup_hook)\n shmem_startup_hook();\n- }\n }\n```\n\nThoughts?\n\n[1]: https://api.cirrus-ci.com/v1/artifact/task/4924036300406784/testrun/build/testrun/pg_stat_statements/regress/log/postmaster.log\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 22 May 2023 17:04:06 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: \"38.10.10. Shared Memory and LWLocks\" may require a clarification" }, { "msg_contents": "Hi hackers,\n\nThat's me still talking to myself :)\n\n> Thoughts?\n\nEvidently this works differently from what I initially thought on\nWindows due to lack of fork() on this system.\n\nPFA the patch v3. Your feedback is most welcomed.\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 23 May 2023 13:47:52 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: \"38.10.10. 
Shared Memory and LWLocks\" may require a clarification" }, { "msg_contents": "On Tue, May 23, 2023 at 01:47:52PM +0300, Aleksander Alekseev wrote:\n> That's me still talking to myself :)\n\nLet's be two then.\n\n> Evidently this works differently from what I initially thought on\n> Windows due to lack of fork() on this system.\n\nThis comes down to the fact that processes executed with EXEC_BACKEND,\nbecause Windows does not know how to do a fork(), need to update their\nlocal variables to point to the shared memory structures already\ncreated, so we have to call CreateSharedMemoryAndSemaphores() in this\ncase.\n\n> PFA the patch v3. Your feedback is most welcomed.\n\n+ <para>\n+ It is convenient to use <literal>shmem_startup_hook</literal> which allows\n+ placing all the code responsible for initializing shared memory in one place.\n+ When using <literal>shmem_startup_hook</literal> the extension still needs\n+ to acquire <function>AddinShmemInitLock</function> in order to work properly\n+ on all the supported platforms including Windows.\n\nYeah, AddinShmemInitLock is useful because extensions have no base\npoint outside that, and they may want to update their local variables.\nStill, this is not completely exact because EXEC_BACKEND on\nnon-Windows platform would still need it, so this should be mentioned.\nAnother thing is that extensions may do like autoprewarm.c, where\nthe shmem area is not initialized in the startup shmem hook. This is\na bit cheating because this is a scenario where shmem_request_hook is\nnot requested, so shmem needs overallocation, but you'd also need a\nLWLock in this case, even for non-WIN32.\n\n+ on all the supported platforms including Windows. 
This is also the reason\n+ why the return value of <function>GetNamedLWLockTranche</function> is\n+ conventionally stored in shared memory instead of local process memory.\n+ </para>\n\nNot sure to follow this sentence, the result of GetNamedLWLockTranche\nis the lock, so this sentence does not seem really necessary?\n\nWhile we're on it, why not improving this part of the documentation more\nmodern? We don't mention LWLockNewTrancheId() and\nLWLockRegisterTranche() at all, so do you think that it would be worth\nadding a sample of code with that, mentioning autoprewarm.c as\nexample?\n--\nMichael", "msg_date": "Fri, 27 Oct 2023 15:52:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"38.10.10. Shared Memory and LWLocks\" may require a clarification" }, { "msg_contents": "Michael,\n\nOn Fri, Oct 27, 2023 at 9:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, May 23, 2023 at 01:47:52PM +0300, Aleksander Alekseev wrote:\n> > That's me still talking to myself :)\n>\n> Let's be two then.\n\nMany thanks for your feedback.\n\n> + <para>\n> + It is convenient to use <literal>shmem_startup_hook</literal> which allows\n> + placing all the code responsible for initializing shared memory in one place.\n> + When using <literal>shmem_startup_hook</literal> the extension still needs\n> + to acquire <function>AddinShmemInitLock</function> in order to work properly\n> + on all the supported platforms including Windows.\n>\n> Yeah, AddinShmemInitLock is useful because extensions have no base\n> point outside that, and they may want to update their local variables.\n> Still, this is not completely exact because EXEC_BACKEND on\n> non-Windows platform would still need it, so this should be mentioned.\n> Another thing is that extensions may do like autoprewarm.c, where\n> the shmem area is not initialized in the startup shmem hook. 
This is\n> a bit cheating because this is a scenario where shmem_request_hook is\n> not requested, so shmem needs overallocation, but you'd also need a\n> LWLock in this case, even for non-WIN32.\n\nGot it. Let's simply remove the \"including Windows\" part then.\n\n>\n> + on all the supported platforms including Windows. This is also the reason\n> + why the return value of <function>GetNamedLWLockTranche</function> is\n> + conventionally stored in shared memory instead of local process memory.\n> + </para>\n>\n> Not sure to follow this sentence, the result of GetNamedLWLockTranche\n> is the lock, so this sentence does not seem really necessary?\n\nTo be honest, by now I don't remember what was meant here, so I\nremoved the sentence.\n\n> While we're on it, why not improving this part of the documentation more\n> modern? We don't mention LWLockNewTrancheId() and\n> LWLockRegisterTranche() at all, so do you think that it would be worth\n> adding a sample of code with that, mentioning autoprewarm.c as\n> example?\n\nAgree, these functions deserve to be mentioned in this section.\n\nPFA patch v4.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 30 Oct 2023 16:03:45 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: \"38.10.10. Shared Memory and LWLocks\" may require a clarification" }, { "msg_contents": "Hi,\n\n> PFA patch v4.\n\nI didn't like the commit message. Here is the corrected patch. Sorry\nfor the noise.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 30 Oct 2023 16:10:17 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: \"38.10.10. Shared Memory and LWLocks\" may require a clarification" }, { "msg_contents": "On Mon, Oct 30, 2023 at 04:10:17PM +0300, Aleksander Alekseev wrote:\n> I didn't like the commit message. Here is the corrected patch. Sorry\n> for the noise.\n\nSounds pretty much OK to me. 
Thanks!\n--\nMichael", "msg_date": "Tue, 31 Oct 2023 10:33:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"38.10.10. Shared Memory and LWLocks\" may require a clarification" }, { "msg_contents": "On Tue, Oct 31, 2023 at 10:33:25AM +0900, Michael Paquier wrote:\n> Sounds pretty much OK to me. Thanks!\n\nThe main thing I have found annoying in the patch was the term\n\"tranche ID\", so I have reworded that to use tranche_id to match with\nthe surroundings and the routines of lwlock.h. LWLockInitialize()\nshould also be mentioned, as it is as important as the two others.\n\nThe result has been applied as fe705ef6fc1d.\n--\nMichael", "msg_date": "Wed, 1 Nov 2023 15:12:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"38.10.10. Shared Memory and LWLocks\" may require a clarification" }, { "msg_contents": "> The result has been applied as fe705ef6fc1d.\n\nMany thanks!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 1 Nov 2023 12:59:49 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: \"38.10.10. Shared Memory and LWLocks\" may require a clarification" } ]
[ { "msg_contents": "Work from commit 5b861baa (later backpatched as commit 43e409ce)\ntaught nbtree to press on with vacuuming an index when page deletion\nfails to \"re-find\" a downlink in the target page's parent (or in some\npage to the right of the parent) due to index corruption.\n\nTo recap, avoiding ERRORs during vacuuming (even those caused by index\ncorruption) is useful because there is no reason to expect the error\nto go away on its own; we're relying on the DBA to notice the error\nand REINDEX before wraparound/xidStopLimit kicks in. This is at least\nthe case on versions before 14, where the failsafe can eventually\nkick-in and avoid catastrophe (though the failsafe can only be\nexpected to avoid the worst consequences).\n\nIt has come to my attention that there is a remaining issue of the\nsame general nature in nbtree VACUUM's page deletion code. Though this\nremaining issue seems significantly less likely to come up in\npractice, there is no reason to take any chances here. Attached patch\nfixes it.\n\nAlso attached is a bugfix for a minor issue in amcheck's\nbt_index_parent_check() function, which I noticed in passing, while I\ntested the first patch. We assumed that we'd always land on the\nleftmost page on each level first (the leftmost according to internal\npages one level up). That assumption is faulty because page deletion\nof the leftmost page is quite possible. Page deletion can be\ninterrupted, leaving a half-dead leaf page (possibly the leftmost leaf\npage) without any downlink one level up, while still leaving a left\nsibling link on the leaf level (in the leaf page that isn't about to\nbecome the leftmost, but won't until the interrupted page deletion can\nbe completed).\n\nIMV this should be backpatched all the way. The issue in question is\nrather unlikely to come up. But the fix that I've come up with is very\nwell targeted. 
It seems just about impossible for it to affect any\nuser that didn't already have a serious problem (without the fix).\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 19 May 2023 19:17:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Avoiding another needless ERROR during nbtree page deletion" }, { "msg_contents": "On 20/05/2023 05:17, Peter Geoghegan wrote:\n> It has come to my attention that there is a remaining issue of the\n> same general nature in nbtree VACUUM's page deletion code. Though this\n> remaining issue seems significantly less likely to come up in\n> practice, there is no reason to take any chances here.\n\nYeah, let's be consistent. Any idea what might cause this corruption?\n\n> Attached patch fixes it.\n> +\t/*\n> +\t * Validate target's right sibling page's left link points back to target.\n> +\t *\n> +\t * This is known to fail in the field in the presence of index corruption,\n> +\t * so we go to the trouble of avoiding a hard ERROR. This is fairly close\n> +\t * to what we did earlier on when we located the target's left sibling\n> +\t * (iff target has a left sibling).\n> +\t */\n\nThis comment notes that this is similar to what we did with the left \nsibling, but there isn't really any mention at the left sibling code \nabout avoiding hard ERRORs. Feels a bit backwards. Maybe move the \ncomment about avoiding the hard ERROR to where the left sibling is \nhandled. Or explain it in the function comment and just have short \n\"shouldn't happen, but avoid hard ERROR if the index is corrupt\" comment \nhere.\n\n> Also attached is a bugfix for a minor issue in amcheck's\n> bt_index_parent_check() function, which I noticed in passing, while I\n> tested the first patch. We assumed that we'd always land on the\n> leftmost page on each level first (the leftmost according to internal\n> pages one level up). That assumption is faulty because page deletion\n> of the leftmost page is quite possible. 
Page deletion can be\n> interrupted, leaving a half-dead leaf page (possibly the leftmost leaf\n> page) without any downlink one level up, while still leaving a left\n> sibling link on the leaf level (in the leaf page that isn't about to\n> become the leftmost, but won't until the interrupted page deletion can\n> be completed).\n\nYou could check that the left sibling is indeed a half-dead page.\n\n> \tereport(DEBUG1,\n> \t\t(errcode(ERRCODE_NO_DATA),\n> \t\t errmsg(\"block %u is not leftmost in index \\\"%s\\\"\",\n> \t\tcurrent, RelationGetRelationName(state->rel))));\n\nERRCODE_NO_DATA doesn't look right. Let's just leave out the errcode.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 22 May 2023 09:51:31 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Avoiding another needless ERROR during nbtree page deletion" }, { "msg_contents": "On Sun, May 21, 2023 at 11:51 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Any idea what might cause this corruption?\n\nNot really, no. As far as I know the specific case that was brought to\nmy attention (that put me on the path to writing this patch) was just\nan isolated incident. The interesting detail (if any) is that it was a\nrelatively recent version of Postgres (13), and that there were no\nother known problems. This means that there is a plausible remaining\ngap in the defensive checks in nbtree VACUUM on recent versions -- we\nmight have expected to avoid a hard ERROR in some other way, from one\nof the earlier checks, but that didn't happen on at least one\noccasion.\n\nYou can find several references to the \"right sibling's left-link\ndoesn't match:\" error message by googling. Most of them are clearly\nfrom the page split ERROR. 
But there are some from VACUUM, too:\n\nhttps://stackoverflow.com/questions/49307292/error-in-postgresql-right-siblings-left-link-doesnt-match-block-5-links-to-8\n\nGranted, that was from a 9.2 database -- before your 9.4 work that\nmade this whole area much more robust.\n\n> This comment notes that this is similar to what we did with the left\n> sibling, but there isn't really any mention at the left sibling code\n> about avoiding hard ERRORs. Feels a bit backwards. Maybe move the\n> comment about avoiding the hard ERROR to where the left sibling is\n> handled. Or explain it in the function comment and just have short\n> \"shouldn't happen, but avoid hard ERROR if the index is corrupt\" comment\n> here.\n\nGood point. Will do it that way.\n\n> > Also attached is a bugfix for a minor issue in amcheck's\n> > bt_index_parent_check() function, which I noticed in passing, while I\n> > tested the first patch.\n\n> You could check that the left sibling is indeed a half-dead page.\n\nIt's very hard to see, but...I think that we do. Sort of. Since\nbt_recheck_sibling_links() is prepared to check that the left\nsibling's right link points back to the target page.\n\nOne problem with that is that it only happens in the AccessShareLock\ncase, whereas we're concerned with fixing an issue in the ShareLock\ncase. Another problem is that it's awkward and complicated to explain.\nIt's not obvious that it's worth trying to explain all this and/or\nmaking sure that it happens in the ShareLock case, so that we have\neverything covered. I'm unsure.\n\n> ERRCODE_NO_DATA doesn't look right. 
Let's just leave out the errcode.\n\nAgreed.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 22 May 2023 09:22:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Avoiding another needless ERROR during nbtree page deletion" }, { "msg_contents": "On Mon, May 22, 2023 at 9:22 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > This comment notes that this is similar to what we did with the left\n> > sibling, but there isn't really any mention at the left sibling code\n> > about avoiding hard ERRORs. Feels a bit backwards. Maybe move the\n> > comment about avoiding the hard ERROR to where the left sibling is\n> > handled. Or explain it in the function comment and just have short\n> > \"shouldn't happen, but avoid hard ERROR if the index is corrupt\" comment\n> > here.\n>\n> Good point. Will do it that way.\n\nAttached is v2, which does it that way. It also adjusts the approach\ntaken to release locks and pins when the left sibling validation check\nfails. This makes it simpler and more consistent with surrounding\ncode. I might not include this change in the backpatch.\n\nNot including a revised amcheck patch here, since I'm not exactly sure\nwhat to do with your feedback on that one just yet.\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 22 May 2023 10:59:59 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Avoiding another needless ERROR during nbtree page deletion" }, { "msg_contents": "On Mon, May 22, 2023 at 10:59 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v2, which does it that way. It also adjusts the approach\n> taken to release locks and pins when the left sibling validation check\n> fails.\n\nI pushed this just now, backpatching all the way.\n\n> Not including a revised amcheck patch here, since I'm not exactly sure\n> what to do with your feedback on that one just yet.\n\nI'd like to go with a minimal approach in my patch to address the\nremaining issue in amcheck. 
Something similar to the patch that was\nposted as part of v1. While it seems important to address the issue,\nmaking sure that we have coverage of the leftmost page really being\nhalf-dead (as opposed to something that would constitute corruption)\nseems much less important. Ideally we'd have exhaustive coverage, but\nit's not a priority for me right now.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 25 May 2023 15:40:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Avoiding another needless ERROR during nbtree page deletion" }, { "msg_contents": "On Mon, May 22, 2023, 12:31 Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Sun, May 21, 2023 at 11:51 PM Heikki Linnakangas <hlinnaka@iki.fi>\n> wrote:\n> > Any idea what might cause this corruption?\n>\n> Not really, no. As far as I know the specific case that was brought to\n> my attention (that put me on the path to writing this patch) was just\n> an isolated incident. The interesting detail (if any) is that it was a\n> relatively recent version of Postgres (13), and that there were no\n> other known problems. This means that there is a plausible remaining\n> gap in the defensive checks in nbtree VACUUM on recent versions -- we\n> might have expected to avoid a hard ERROR in some other way, from one\n> of the earlier checks, but that didn't happen on at least one\n> occasion.\n>\n\nWhat error would one expect to see? I did have a case where vacuum was\nerroring on a btree in $previous_job.", "msg_date": "Thu, 1 Jun 2023 06:47:40 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Avoiding another needless ERROR during nbtree page deletion" }, { "msg_contents": "On Thu, Jun 1, 2023 at 6:47 AM Greg Stark <stark@mit.edu> wrote:\n> What error would one expect to see? I did have a case where vacuum was erroring on a btree in $previous_job.\n\nYou mean in general? It's usually this one:\n\nhttps://gitlab.com/gitlab-org/gitlab/-/issues/381443\n\nIn the case of this particular issue, the error is \"right sibling's\nleft-link doesn't match\". Per:\n\nhttps://stackoverflow.com/questions/49307292/error-in-postgresql-right-siblings-left-link-doesnt-match-block-5-links-to-8\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 1 Jun 2023 08:14:28 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Avoiding another needless ERROR during nbtree page deletion" } ]
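The cross-check this thread converges on -- a page's right sibling must carry a left-link pointing back at the target, and VACUUM should log rather than raise a hard ERROR when it does not -- can be sketched with a toy page model. `ToyPage` and the helper below are invented simplifications for illustration only; they are not nbtree's actual structures or the committed patch:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy stand-in for an nbtree page header, keeping only the sibling
 * links.  The field names echo nbtree's btpo_prev/btpo_next, but this
 * is a hypothetical model, not PostgreSQL code.  Block number -1
 * stands in for "no sibling".
 */
typedef struct ToyPage
{
	int			btpo_prev;		/* block number of left sibling, -1 if none */
	int			btpo_next;		/* block number of right sibling, -1 if none */
} ToyPage;

/*
 * The invariant behind the "right sibling's left-link doesn't match"
 * message: target's right sibling must point back at target.
 * Returning false instead of aborting lets a vacuum-style caller log
 * the problem and press on, which is the behaviour the thread aims for.
 */
static inline bool
rightsib_leftlink_matches(const ToyPage *pages, int target)
{
	int			rightsib = pages[target].btpo_next;

	if (rightsib == -1)
		return true;			/* rightmost page: nothing to cross-check */
	return pages[rightsib].btpo_prev == target;
}
```

In the real code, the false case roughly corresponds to emitting a log message and abandoning that particular page deletion so that vacuuming of the rest of the index can continue.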
[ { "msg_contents": "Hi,\n\nHere is a draft version of the long awaited patch to support LLVM 16.\nIt's mostly mechanical donkeywork, but it took some time: this donkey\nfound it quite hard to understand the mighty getelementptr\ninstruction[1] and the code generation well enough to figure out all\nthe right types, and small mistakes took some debugging effort*. I\nnow finally have a patch that passes all tests.\n\nThough it's not quite ready yet, I thought I should give this status\nupdate to report that the main task is more or less complete, since\nwe're starting to get quite a few emails about it (mostly from Fedora\nusers) and there is an entry for it on the Open Items for 16 wiki\npage. Comments/review/testing welcome.\n\nHere are some things I think I need to do next (probably after PGCon):\n\n1. If you use non-matching clang and LLVM versions I think we might\nuse \"clang -no-opaque-pointers\" at the wrong times (I've not looked\ninto that interaction yet).\n2. The treatment of function types is a bit inconsistent/messy and\ncould be tidied up.\n3. There are quite a lot of extra function calls that could perhaps be\nelided (ie type variables instead of LLVMTypeInt8(), and calls to\nLLVMStructGetTypeAtIndex() that are not used in LLVM < 16).\n4. Could use some comments.\n5. I need to test with very old versions of LLVM and Clang that we\nclaim to support (I have several years' worth of releases around but\nnothing older than 9).\n6. 
I need to go through the types again with a fine tooth comb, and\ncheck the test coverage to look out for eg GEP array arithmetic with\nthe wrong type/size that isn't being exercised.\n\n*For anyone working with this type of IR generation code and\nquestioning their sanity, I can pass on some excellent advice I got\nfrom Andres: build LLVM yourself with assertions enabled, as they\ncatch some classes of silly mistake that otherwise just segfault\ninscrutably on execution.\n\n[1] https://llvm.org/docs/GetElementPtr.html", "msg_date": "Sun, 21 May 2023 15:01:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "LLVM 16 (opaque pointers)" }, { "msg_contents": "Oh, one important thing I forgot to mention: that patch is for LLVM 16\nonly, and I developed it with a local build of their \"release/16.x\"\nbranch on a FreeBSD box, and also tested with a released package for\n16 on a Debian box. Further changes are already needed for their\n\"main\" branch (LLVM 17-to-be), so this won't quite be enough to shut\nseawasp up. At a glance, we will need to change from the \"old pass\nmanager\" API that has recently been vaporised[1]\n(llvm-c/Transforms/PassManagerBuilder.h) to the new one[2][3]\n(llvm-c/Transforms/PassBuilder.h), which I suspect/hope will be as\nsimple as changing llvmjit.c to call LLVMRunPasses() with a string\ndescribing the passes we want in \"opt -passes\" format, instead of our\ncode that calls LLVMAddFunctionInliningPass() etc.
But that'll be a\ntopic for another day, and another thread.\n\n[1] https://github.com/llvm/llvm-project/commit/0aac9a2875bad4f065367e4a6553fad78605f895\n[2] https://llvm.org/docs/NewPassManager.html\n[3] https://reviews.llvm.org/D102136\n\n\n", "msg_date": "Mon, 22 May 2023 15:38:44 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "> On Mon, May 22, 2023 at 03:38:44PM +1200, Thomas Munro wrote:\n> Further changes are already needed for their \"main\" branch (LLVM\n> 17-to-be), so this won't quite be enough to shut seawasp up. At a\n> glance, we will need to change from the \"old pass manager\" API that\n> has recently been vaporised[1]\n> (llvm-c/Transforms/PassManagerBuilder.h) to the new one[2][3]\n> (llvm-c/Transforms/PassBuilder.h), which I suspect/hope will be as\n> simple as changing llvmjit.c to call LLVMRunPasses() with a string\n> describing the passes we want in \"opt -passes\" format, instead of our\n> code that calls LLVMAddFunctionInlingPass() etc. But that'll be a\n> topic for another day, and another thread.\n>\n> [1] https://github.com/llvm/llvm-project/commit/0aac9a2875bad4f065367e4a6553fad78605f895\n> [2] https://llvm.org/docs/NewPassManager.html\n> [3] https://reviews.llvm.org/D102136\n\nThanks for tackling the topic! I've tested it with a couple of versions,\nLLVM 12 that comes with my Gentoo box, LLVM 15 build from sources and\nthe modified version of patch adopted for LLVM 17 (build form sources as\nwell). In all three cases everything seems to be working fine.\n\nSimple benchmarking with a query stolen from some other jit thread\n(pgbench running single client with multiple unions of selects a-la\nSELECT a, count(*), sum(b) FROM test WHERE c = 2 GROUP BY a) show some\nslight performance differences, but nothing dramatic so far. 
LLVM 17\nversion produces the lowest latency, with faster generation, inlining\nand optimization, but slower emission time. LLVM 12 version produces the\nlargest latencies with everything except emission timings being slower.\nLLVM 15 is somewhere in between.\n\nI'll continue reviewing and, for the records, attach adjustments I was\nusing for LLVM 17 (purely for testing, not taking into account other\nversions), in case if I've missed something.", "msg_date": "Sun, 4 Jun 2023 11:33:42 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Le dimanche 21 mai 2023, 05:01:41 CEST Thomas Munro a écrit :\n> Hi,\n> \n> Here is a draft version of the long awaited patch to support LLVM 16.\n> It's mostly mechanical donkeywork, but it took some time: this donkey\n> found it quite hard to understand the mighty getelementptr\n> instruction[1] and the code generation well enough to figure out all\n> the right types, and small mistakes took some debugging effort*. I\n> now finally have a patch that passes all tests.\n> \n> Though it's not quite ready yet, I thought I should give this status\n> update to report that the main task is more or less complete, since\n> we're starting to get quite a few emails about it (mostly from Fedora\n> users) and there is an entry for it on the Open Items for 16 wiki\n> page. Comments/review/testing welcome.\n\nHello Thomas,\n\nThank you for this effort !\n\nI've tested it against llvm 15 and 16, and found no problem with it.\n\n> 6. 
I need to go through the types again with a fine tooth comb, and\n> check the test coverage to look out for eg GEP array arithmetic with\n> the wrong type/size that isn't being exercised.\n\nI haven't gone through the test coverage myself, but I exercised the following \nthings: \n\n - running make installcheck with jit_above_cost = 0\n - letting sqlsmith hammer random queries at it for a few hours.\n\nThis didn't show obvious issues.\n\n> *For anyone working with this type of IR generation code and\n> questioning their sanity, I can pass on some excellent advice I got\n> from Andres: build LLVM yourself with assertions enabled, as they\n> catch some classes of silly mistake that otherwise just segfault\n> inscrutably on execution.\n\nI tried my hand at backporting it to previous versions, and not knowing \nanything about it made me indeed question my sanity. It's quite easy for PG \n15, 14, 13. PG 12 is nothing insurmountable either, but PG 11 is a bit hairier \nmost notably due to the change in fcinfo args representation.
But I guess\n> that's also a topic for another day :-)\n\nGiven that 11 is about to be EOL, I don't think it's worth spending the time\nto support a new LLVM version for it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Aug 2023 10:59:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Hi,\n\nOn 2023-05-21 15:01:41 +1200, Thomas Munro wrote:\n> *For anyone working with this type of IR generation code and\n> questioning their sanity, I can pass on some excellent advice I got\n> from Andres: build LLVM yourself with assertions enabled, as they\n> catch some classes of silly mistake that otherwise just segfault\n> inscrutably on execution.\n\nHm. I think we need a buildfarm animal with an assertion enabled llvm 16 once\nwe merge this. I think after an upgrade my buildfarm machine has the necessary\nresources.\n\n\n> @@ -150,7 +150,7 @@ llvm_compile_expr(ExprState *state)\n> \n> \t/* create function */\n> \teval_fn = LLVMAddFunction(mod, funcname,\n> -\t\t\t\t\t\t\t llvm_pg_var_func_type(\"TypeExprStateEvalFunc\"));\n> +\t\t\t\t\t\t\t llvm_pg_var_func_type(\"ExecInterpExprStillValid\"));\n\nHm, that's a bit ugly. 
But ...\n\n> @@ -77,9 +80,44 @@ extern Datum AttributeTemplate(PG_FUNCTION_ARGS);\n> Datum\n> AttributeTemplate(PG_FUNCTION_ARGS)\n> {\n> +\tPGFunction\tfp PG_USED_FOR_ASSERTS_ONLY;\n> +\n> +\tfp = &AttributeTemplate;\n> \tPG_RETURN_NULL();\n> }\n\nOther parts of the file do this by putting the functions into\nreferenced_functions[], i'd copy that here and below.\n\n> +void\n> +ExecEvalSubroutineTemplate(ExprState *state,\n> +\t\t\t\t\t\t struct ExprEvalStep *op,\n> +\t\t\t\t\t\t ExprContext *econtext)\n> +{\n> +\tExecEvalSubroutine fp PG_USED_FOR_ASSERTS_ONLY;\n> +\n> +\tfp = &ExecEvalSubroutineTemplate;\n> +}\n> +\n> +extern bool ExecEvalBoolSubroutineTemplate(ExprState *state,\n> +\t\t\t\t\t\t\t\t\t\t struct ExprEvalStep *op,\n> +\t\t\t\t\t\t\t\t\t\t ExprContext *econtext);\n> +bool\n> +ExecEvalBoolSubroutineTemplate(ExprState *state,\n> +\t\t\t\t\t\t\t struct ExprEvalStep *op,\n> +\t\t\t\t\t\t\t ExprContext *econtext)\n> +{\n> +\tExecEvalBoolSubroutine fp PG_USED_FOR_ASSERTS_ONLY;\n> +\n> +\tfp = &ExecEvalBoolSubroutineTemplate;\n> +\treturn false;\n> +}\n> +\n\n\nThanks for working on this!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Aug 2023 11:09:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "\n> [...] Further changes are already needed for their \"main\" branch (LLVM \n> 17-to-be), so this won't quite be enough to shut seawasp up.\n\nFor information, the physical server which was hosting my 2 bf animals \n(seawasp and moonjelly) has given up rebooting after a power cut a few \nweeks/months ago, and I have not setup a replacement (yet).\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 15 Aug 2023 11:26:55 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Belated thanks Dmitry, Ronan, Andres for your feedback. 
Here's a new\nversion, also including Dmitry's patch for 17 which it is now also\ntime to push. It required a bit more trivial #if magic to be\nconditional, as Dmitry already mentioned. I just noticed that Dmitry\nhad the LLVMPassBuilderOptionsSetInlinerThreshold() function added to\nLLVM 17's C API for this patch. Thanks! (Better than putting stuff\nin llvmjit_wrap.c, if you can get it upstreamed in time.)\n\nI thought I needed to block users from building with too-old clang and\ntoo-new LLVM, but I haven't managed to find a combination that\nactually breaks anything. I wouldn't recommend it, but for example\nclang 10 bitcode seems to be inlinable without problems by LLVM 16 on\nmy system (I didn't use an assert build of LLVM though). I think that\ncould be a separate adjustment if we learn that we need to enforce or\ndocument a constraint there.\n\nSo far I've tested LLVM versions 10, 15, 16, 17, 18 (= their main\nbranch) against PostgreSQL versions 14, 15, 16. I've attached the\nversions that apply to master and 16, and pushed back-patches to 14\nand 15 to public branches if anyone's interested[1]. Back-patching\nfurther seems a bit harder. I'm quite willing to do it, but ... do we\nactually need to, ie does anyone really *require* old PostgreSQL\nrelease branches to work with new LLVM?\n\n(I'll start a separate thread about the related question of when we\nget to drop support for old LLVMs.)\n\nOne point from an earlier email:\n\nOn Sat, Aug 12, 2023 at 6:09 AM Andres Freund <andres@anarazel.de> wrote:\n> > AttributeTemplate(PG_FUNCTION_ARGS)\n> > {\n> > + PGFunction fp PG_USED_FOR_ASSERTS_ONLY;\n> > +\n> > + fp = &AttributeTemplate;\n\n> Other parts of the file do this by putting the functions into\n> referenced_functions[], i'd copy that here and below.\n\nActually here I just wanted to assert that the 3 template functions\nmatch certain function pointer types. 
To restate what these functions\nare about: in the JIT code I need the function type, but we have only\nthe function pointer type, and it is now impossible to go from a\nfunction pointer type to a function type, so I needed to define some\nexample functions with the right prototype (well, one of them existed\nalready but I needed more), and then I wanted to assert that they are\nassignable to the appropriate function pointer types. Does that make\nsense?\n\nIn this version I changed it to what I hope is a more obvious/explicit\nexpression of that goal:\n\n+ AssertVariableIsOfType(&ExecEvalSubroutineTemplate,\n+ ExecEvalSubroutine);\n\n[1] https://github.com/macdice/postgres/tree/llvm16-14 and -15", "msg_date": "Thu, 21 Sep 2023 08:22:20 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "\nHi Thomas,\n\nOn Thu, 2023-09-21 at 08:22 +1200, Thomas Munro wrote:\n> So far I've tested LLVM versions 10, 15, 16, 17, 18 (= their main\n> branch) against PostgreSQL versions 14, 15, 16.  I've attached the\n> versions that apply to master and 16, and pushed back-patches to 14\n> and 15 to public branches if anyone's interested[1].  Back-patching\n> further seems a bit harder.  I'm quite willing to do it, but ... 
do we\n> actually need to, ie does anyone really *require* old PostgreSQL\n> release branches to work with new LLVM?\n\nRHEL releases new LLVM version along with their new minor releases every\n6 month, and we have to build older versions with new LLVM each time.\n From RHEL point of view, it would be great if we can back-patch back to\nv12 :(\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n", "msg_date": "Thu, 21 Sep 2023 01:24:05 +0100", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Thu, Sep 21, 2023 at 12:24 PM Devrim Gündüz <devrim@gunduz.org> wrote:\n> On Thu, 2023-09-21 at 08:22 +1200, Thomas Munro wrote:\n> > So far I've tested LLVM versions 10, 15, 16, 17, 18 (= their main\n> > branch) against PostgreSQL versions 14, 15, 16. I've attached the\n> > versions that apply to master and 16, and pushed back-patches to 14\n> > and 15 to public branches if anyone's interested[1]. Back-patching\n> > further seems a bit harder. I'm quite willing to do it, but ... do we\n> > actually need to, ie does anyone really *require* old PostgreSQL\n> > release branches to work with new LLVM?\n>\n> RHEL releases new LLVM version along with their new minor releases every\n> 6 month, and we have to build older versions with new LLVM each time.\n> From RHEL point of view, it would be great if we can back-patch back to\n> v12 :(\n\nGot it. 
OK, I'll work on 12 and 13 now.\n\n\n", "msg_date": "Thu, 21 Sep 2023 12:47:35 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Thu, Sep 21, 2023 at 12:47 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Sep 21, 2023 at 12:24 PM Devrim Gündüz <devrim@gunduz.org> wrote:\n> > RHEL releases new LLVM version along with their new minor releases every\n> > 6 month, and we have to build older versions with new LLVM each time.\n> > From RHEL point of view, it would be great if we can back-patch back to\n> > v12 :(\n>\n> Got it. OK, I'll work on 12 and 13 now.\n\nThe back-patch to 12 was a little trickier than anticipated, but after\ntaking a break and trying again I now have PG 12...17 patches that\nI've tested against LLVM 10...18 (that's 54 combinations), in every\ncase only with the clang corresponding to LLVM.\n\nFor 12, I decided to back-patch the llvm_types_module variable that\nwas introduced in 13, to keep the code more similar.\n\nFor master, I had to rebase over Daniel's recent commits, which\nrequired re-adding unused variables removed by 2dad308e, and\nthen changing a bunch of LLVM type constructors like LLVMInt8Type() to\nthe LLVMInt8TypeInContext(lc, ...) variants following the example of\n9dce2203. 
Without that, type assertions in my LLVM 18 debug build\nwould fail (and maybe there could be a leak problem, though I'm not\nsure that really applied to integer (non-struct) types).\n\nI've attached only the patches for master, but the 12-16 versions are\navailable at https://github.com/macdice/postgres/tree/llvm16-$N in\ncase anyone has comments on those.", "msg_date": "Wed, 11 Oct 2023 21:59:50 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Le mercredi 11 octobre 2023, 10:59:50 CEST Thomas Munro a écrit :\n> The back-patch to 12 was a little trickier than anticipated, but after\n> taking a break and trying again I now have PG 12...17 patches that\n> I've tested against LLVM 10...18 (that's 54 combinations), in every\n> case only with the clang corresponding to LLVM.\n\nThank you Thomas for those patches, and the extensive testing, I will run my \nown and let you know.\n\n> I've attached only the patches for master, but the 12-16 versions are\n> available at https://github.com/macdice/postgres/tree/llvm16-$N in\n> case anyone has comments on those.\n\nFor PG13 and PG12, it looks like the ExecEvalBoolSubroutineTemplate is not \nused anywhere, as ExecEvalBoolSubroutine was introduced in PG14 if I'm not \nmistaken. 
\n\nBest regards,\n\n--\nRonan Dunklau\n\n\n\n\n\n", "msg_date": "Wed, 11 Oct 2023 11:31:43 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Hi,\n\nOn 2023-10-11 21:59:50 +1300, Thomas Munro wrote:\n> +#else\n> +\tLLVMPassBuilderOptionsRef options;\n> +\tLLVMErrorRef err;\n> +\tint\t\t\tcompile_optlevel;\n> +\tchar\t *passes;\n> +\n> +\tif (context->base.flags & PGJIT_OPT3)\n> +\t\tcompile_optlevel = 3;\n> +\telse\n> +\t\tcompile_optlevel = 0;\n> +\n> +\tpasses = psprintf(\"default<O%d>,mem2reg,function(no-op-function),no-op-module\",\n> +\t\t\t\t\t compile_optlevel);\n\nI don't think the \"function(no-op-function),no-op-module\" bit does something\nparticularly useful?\n\nI also don't think we should add the mem2reg pass outside of -O0 - running it\nafter a real optimization pipeline doesn't seem useful and might even make the\ncode worse? mem2reg is included in default<O1> (and obviously also in O3).\n\n\nThanks for working on this stuff!\n\n\nI'm working on setting up buildfarm animals for 16, 17, each once with\na normal and an assertion enabled LLVM build.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Oct 2023 16:31:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "> On Thu, Oct 12, 2023 at 04:31:20PM -0700, Andres Freund wrote:\n> Hi,\n>\n> On 2023-10-11 21:59:50 +1300, Thomas Munro wrote:\n> > +#else\n> > +\tLLVMPassBuilderOptionsRef options;\n> > +\tLLVMErrorRef err;\n> > +\tint\t\t\tcompile_optlevel;\n> > +\tchar\t *passes;\n> > +\n> > +\tif (context->base.flags & PGJIT_OPT3)\n> > +\t\tcompile_optlevel = 3;\n> > +\telse\n> > +\t\tcompile_optlevel = 0;\n> > +\n> > +\tpasses = psprintf(\"default<O%d>,mem2reg,function(no-op-function),no-op-module\",\n> > +\t\t\t\t\t compile_optlevel);\n>\n> I don't think the 
\"function(no-op-function),no-op-module\" bit does something\n> particularly useful?\n\nRight, looks like leftovers after verifying which passes were actually\napplied. My bad, could be removed.\n\n> I also don't think we should add the mem2reg pass outside of -O0 - running it\n> after a real optimization pipeline doesn't seem useful and might even make the\n> code worse? mem2reg is included in default<O1> (and obviously also in O3).\n\nMy understanding was that while mem2reg is included everywhere above\n-O0, this set of passes won't hurt. But yeah, if you say it could\ndegrade the final result, it's better to not do this. I'll update this\npart.\n\n\n", "msg_date": "Fri, 13 Oct 2023 11:06:21 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "> On Fri, Oct 13, 2023 at 11:06:21AM +0200, Dmitry Dolgov wrote:\n> > On Thu, Oct 12, 2023 at 04:31:20PM -0700, Andres Freund wrote:\n> > I don't think the \"function(no-op-function),no-op-module\" bit does something\n> > particularly useful?\n>\n> Right, looks like leftovers after verifying which passes were actually\n> applied. My bad, could be removed.\n>\n> > I also don't think we should add the mem2reg pass outside of -O0 - running it\n> > after a real optimization pipeline doesn't seem useful and might even make the\n> > code worse? mem2reg is included in default<O1> (and obviously also in O3).\n>\n> My understanding was that while mem2reg is included everywhere above\n> -O0, this set of passes won't hurt. But yeah, if you say it could\n> degrade the final result, it's better to not do this. 
I'll update this\n> part.\n\nHere is what I had in mind (only this part in the second patch was changed).", "msg_date": "Fri, 13 Oct 2023 16:44:13 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Hi,\n\nOn 2023-10-13 11:06:21 +0200, Dmitry Dolgov wrote:\n> > On Thu, Oct 12, 2023 at 04:31:20PM -0700, Andres Freund wrote:\n> > I also don't think we should add the mem2reg pass outside of -O0 - running it\n> > after a real optimization pipeline doesn't seem useful and might even make the\n> > code worse? mem2reg is included in default<O1> (and obviously also in O3).\n> \n> My understanding was that while mem2reg is included everywhere above\n> -O0, this set of passes won't hurt. But yeah, if you say it could\n> degrade the final result, it's better to not do this. I'll update this\n> part.\n\nIt's indeed included anywhere above that, but adding it explicitly to the\nschedule means it's excuted twice:\n\necho 'int foo(int a) { return a / 343; }' | clang-16 -emit-llvm -x c -c -o - -S -|sed -e 's/optnone//'|opt-17 -debug-pass-manager -passes='default<O1>,mem2reg' -o /dev/null 2>&1|grep Promote\nRunning pass: PromotePass on foo (2 instructions)\nRunning pass: PromotePass on foo (2 instructions)\n\nThe second one is in a point in the pipeline where it doesn't help. It also\nrequires another analysis pass to be executed unnecessarily.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 13 Oct 2023 07:55:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On 2023-10-13 16:44:13 +0200, Dmitry Dolgov wrote:\n> Here is what I had in mind (only this part in the second patch was changed).\n\nMakes sense to me. 
I think we'll likely eventually want to use a custom\npipeline anyway, and I think we should consider using an optimization level\nin between \"not at all\" and \"as hard as possible\"...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 13 Oct 2023 07:56:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Wed, Oct 11, 2023 at 10:31 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> On Wednesday 11 October 2023 at 10:59:50 CEST, Thomas Munro wrote:\n> > The back-patch to 12 was a little trickier than anticipated, but after\n> > taking a break and trying again I now have PG 12...17 patches that\n> > I've tested against LLVM 10...18 (that's 54 combinations), in every\n> > case only with the clang corresponding to LLVM.\n>\n> Thank you Thomas for those patches, and the extensive testing, I will run my\n> own and let you know.\n\nThanks! No news is good news, I hope? I'm hoping to commit this today.\n\n> > I've attached only the patches for master, but the 12-16 versions are\n> > available at https://github.com/macdice/postgres/tree/llvm16-$N in\n> > case anyone has comments on those.\n>\n> For PG13 and PG12, it looks like the ExecEvalBoolSubroutineTemplate is not\n> used anywhere, as ExecEvalBoolSubroutine was introduced in PG14 if I'm not\n> mistaken.\n\nRight, looks like I can remove that in those branches.\n\n\n", "msg_date": "Sat, 14 Oct 2023 09:32:13 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Sat, Oct 14, 2023 at 3:56 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-10-13 16:44:13 +0200, Dmitry Dolgov wrote:\n> > Here is what I had in mind (only this part in the second patch was changed).\n>\n> Makes sense to me. 
I think we'll likely eventually want to use a custom\n> pipeline anyway, and I think we should consider using an optimization level\n> in between \"not at all\" and \"as hard as possible\"...\n\nThanks Dmitry and Andres. I'm planning to commit these today if there\nare no further comments.\n\n\n", "msg_date": "Sat, 14 Oct 2023 09:34:22 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Friday 13 October 2023 at 22:32:13 CEST, Thomas Munro wrote:\n> On Wed, Oct 11, 2023 at 10:31 PM Ronan Dunklau <ronan.dunklau@aiven.io> \nwrote:\n> > On Wednesday 11 October 2023 at 10:59:50 CEST, Thomas Munro wrote:\n> > > The back-patch to 12 was a little trickier than anticipated, but after\n> > > taking a break and trying again I now have PG 12...17 patches that\n> > > I've tested against LLVM 10...18 (that's 54 combinations), in every\n> > > case only with the clang corresponding to LLVM.\n> >\n> > Thank you Thomas for those patches, and the extensive testing, I will run\n> > my own and let you know.\n> \n> Thanks! No news is good news, I hope? I'm hoping to commit this today.\n> \n> > > I've attached only the patches for master, but the 12-16 versions are\n> > > available at https://github.com/macdice/postgres/tree/llvm16-$N in\n> > > case anyone has comments on those.\n> > \n> > For PG13 and PG12, it looks like the ExecEvalBoolSubroutineTemplate is not\n> > used anywhere, as ExecEvalBoolSubroutine was introduced in PG14 if I'm not\n> > mistaken.\n> \n> Right, looks like I can remove that in those branches.\n\nOh sorry I thought I followed up. 
I ran the same stress testing involving \nseveral hours of sqlsmith with all jit costs set to zero and didn't notice \nanything with LLVM16.\n\nThank you !\n\n--\nRonan Dunklau\n\n\n\n\n\n", "msg_date": "Mon, 16 Oct 2023 10:31:32 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "I pushed the first patch, for LLVM 16, and the build farm told me that\nsome old LLVM versions don't like it. The problem seems to be the\nfunction LLVMGlobalGetValueType(). I can see that that was only added\nto the C API in 2018, so it looks like I may need to back-port that\n(trivial) wrapper into our own llvmjit_wrap.cpp for LLVM < 8.\n\n\n", "msg_date": "Thu, 19 Oct 2023 00:02:11 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Thu, Oct 19, 2023 at 12:02 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I pushed the first patch, for LLVM 16, and the build farm told me that\n> some old LLVM versions don't like it. The problem seems to be the\n> function LLVMGlobalGetValueType(). I can see that that was only added\n> to the C API in 2018, so it looks like I may need to back-port that\n> (trivial) wrapper into our own llvmjit_wrap.cpp for LLVM < 8.\n\nConcretely something like the attached should probably fix it, but\nit'll take me a little while to confirm that...", "msg_date": "Thu, 19 Oct 2023 01:06:11 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Thu, Oct 19, 2023 at 1:06 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Oct 19, 2023 at 12:02 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I pushed the first patch, for LLVM 16, and the build farm told me that\n> > some old LLVM versions don't like it. 
The problem seems to be the\n> function LLVMGlobalGetValueType(). I can see that that was only added\n> to the C API in 2018, so it looks like I may need to back-port that\n> (trivial) wrapper into our own llvmjit_wrap.cpp for LLVM < 8.\n\nConcretely something like the attached should probably fix it, but\nit'll take me a little while to confirm that...", "msg_date": "Thu, 19 Oct 2023 01:06:11 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Thu, Oct 19, 2023 at 1:06 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Oct 19, 2023 at 12:02 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I pushed the first patch, for LLVM 16, and the build farm told me that\n> > some old LLVM versions don't like it. The problem seems to be the\n> > function LLVMGlobalGetValueType(). I can see that that was only added\n> > to the C API in 2018, so it looks like I may need to back-port that\n> > (trivial) wrapper into our own llvmjit_wrap.cpp for LLVM < 8.\n>\n> Concretely something like the attached should probably fix it, but\n> it'll take me a little while to confirm that...\n\nPushed, after digging up some old LLVM skeletons to test, and those\n\"old LLVM\" animals are turning green now. I went ahead and pushed the\nmuch smaller and simpler patch for LLVM 17.\n\nInterestingly, a new problem just showed up on the RHEL9 s390x\nmachine \"lora\", where a previously reported problem [1] apparently\nre-appeared. It complains about incompatible layout, previously\nblamed on mismatch between clang and LLVM versions. I can see that\nits clang is v15 from clues in the config log, but I don't know which\nversion of LLVM is being used. However, I see now that --with-llvm\nwas literally just turned on, so there is no reason to think that this\nwould have worked before or this work is relevant. Strange though --\nwe must be able to JIT further than that on s390x because we have\ncrash reports in other threads (ie we made it past this and into other\nmore advanced brokenness).\n\n[1] https://www.postgresql.org/message-id/flat/20210319190047.7o4bwhbp5dzkqif3%40alap3.anarazel.de#ec51b488ca8eac8c603d91c0439d38b2\n\n\n", "msg_date": "Thu, 19 Oct 2023 06:20:26 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Hi,\n\nOn 2023-10-19 06:20:26 +1300, Thomas Munro wrote:\n> On Thu, Oct 19, 2023 at 1:06 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Thu, Oct 19, 2023 at 12:02 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > I pushed the first patch, for LLVM 16, and the build farm told me that\n> > > some old LLVM versions don't like it. 
The problem seems to be the\n> > > function LLVMGlobalGetValueType(). I can see that that was only added\n> > > to the C API in 2018, so it looks like I may need to back-port that\n> > > (trivial) wrapper into our own llvmjit_wrap.cpp for LLVM < 8.\n> >\n> > Concretely something like the attached should probably fix it, but\n> > it'll take me a little while to confirm that...\n> \n> Pushed, after digging up some old LLVM skeletons to test, and those\n> \"old LLVM\" animals are turning green now. I went ahead and pushed the\n> much smaller and simpler patch for LLVM 17.\n\nI enabled a new set of buildfarm animals to test LLVM 16 and 17. I initially\nforgot to disable them for 11, which means we'll have those failed builds on 11\nuntil they age out :/.\n\nParticularly for the LLVM debug builds it'll take a fair bit to run on all\nbranches. Each branch takes about 3h.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 18 Oct 2023 11:11:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "\nHi,\n\nOn Thu, 2023-10-19 at 06:20 +1300, Thomas Munro wrote:\n> Pushed, after digging up some old LLVM skeletons to test, and those\n> \"old LLVM\" animals are turning green now.  I went ahead and pushed the\n> much smaller and simpler patch for LLVM 17.\n\nThank you! 
I can confirm that RPMs built fine on Fedora 39 with those\npatches, which ships LLVM 17.0.2 as of today.\n\nI can also confirm that builds are not broken on RHEL 9 and 8 and Fedora\n37 which ship LLVM 15, and Fedora 38 (LLVM 16).\n\nThanks again!\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n", "msg_date": "Wed, 18 Oct 2023 19:38:09 +0100", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Thu, Oct 19, 2023 at 6:20 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Interestingly, a new problem just showed up on the the RHEL9 s390x\n> machine \"lora\", where a previously reported problem [1] apparently\n> re-appeared. It complains about incompatible layout, previously\n> blamed on mismatch between clang and LLVM versions. I can see that\n> its clang is v15 from clues in the conflig log, but I don't know which\n> version of LLVM is being used. However, I see now that --with-llvm\n> was literally just turned on, so there is no reason to think that this\n> would have worked before or this work is relevant. Strange though --\n> we must be able to JIT further than that on s390x because we have\n> crash reports in other threads (ie we made it past this and into other\n> more advanced brokenness).\n\nI see that Mark has also just enabled --with-llvm on some POWER Linux\nanimals, and they have failed in various ways. The failures are\nstrangely lacking in detail. It seems we didn't have coverage before,\nand I recall that there were definitely versions of LLVM that *didn't*\nwork for our usage in the past, which I'll need to dredge out of the\narchives. 
I will try to get onto a cfarm POWER machine and see if I\ncan reproduce that, before and after these commits, and whose bug is\nit etc.\n\nI doubt I can get anywhere near an s390x though, and we definitely had\npre-existing problems on that arch.\n\n\n", "msg_date": "Sat, 21 Oct 2023 10:48:47 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Sat, Oct 21, 2023 at 10:48:47AM +1300, Thomas Munro wrote:\n> On Thu, Oct 19, 2023 at 6:20 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Interestingly, a new problem just showed up on the the RHEL9 s390x\n> > machine \"lora\", where a previously reported problem [1] apparently\n> > re-appeared. It complains about incompatible layout, previously\n> > blamed on mismatch between clang and LLVM versions. I can see that\n> > its clang is v15 from clues in the conflig log, but I don't know which\n> > version of LLVM is being used. However, I see now that --with-llvm\n> > was literally just turned on, so there is no reason to think that this\n> > would have worked before or this work is relevant. Strange though --\n> > we must be able to JIT further than that on s390x because we have\n> > crash reports in other threads (ie we made it past this and into other\n> > more advanced brokenness).\n> \n> I see that Mark has also just enabled --with-llvm on some POWER Linux\n> animals, and they have failed in various ways. The failures are\n> strangely lacking in detail. It seems we didn't have coverage before,\n> and I recall that there were definitely versions of LLVM that *didn't*\n> work for our usage in the past, which I'll need to dredge out of the\n> archives. 
I will try to get onto a cfarm POWER machine and see if I\n> can reproduce that, before and after these commits, and whose bug is\n> it etc.\n\nYeah, I'm slowly enabling --with-llvm on POWER, s390x, and aarch64 (but\nnone here yet as I write this)...\n\n> I doubt I can get anywhere near an s390x though, and we definitely had\n> pre-existing problems on that arch.\n\nIf you want to send me your ssh key, I have access to these systems\nthrough OSUOSL and LinuxFoundation programs.\n\nRegards,\nMark\n\n--\nMark Wong\nEDB https://enterprisedb.com\n\n\n", "msg_date": "Fri, 20 Oct 2023 15:00:40 -0700", "msg_from": "Mark Wong <markwkm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I doubt I can get anywhere near an s390x though, and we definitely had\n> pre-existing problems on that arch.\n\nYeah. Too bad there's no s390x in the gcc compile farm.\n(I'm wondering how straight a line to draw between that fact\nand llvm's evident shortcomings on s390x.)\n\nI'm missing my old access to Red Hat's dev machines. But in\nthe meantime, Mark's clearly got beaucoup access to s390\nmachines, so I wonder if he can let you into any of them?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Oct 2023 18:03:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Hi,\n\nOn 2023-10-21 10:48:47 +1300, Thomas Munro wrote:\n> On Thu, Oct 19, 2023 at 6:20 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Interestingly, a new problem just showed up on the RHEL9 s390x\n> > machine \"lora\", where a previously reported problem [1] apparently\n> > re-appeared. It complains about incompatible layout, previously\n> > blamed on mismatch between clang and LLVM versions. I can see that\n> > its clang is v15 from clues in the config log, but I don't know which\n> > version of LLVM is being used. 
However, I see now that --with-llvm\n> > was literally just turned on, so there is no reason to think that this\n> > would have worked before or this work is relevant. Strange though --\n> > we must be able to JIT further than that on s390x because we have\n> > crash reports in other threads (ie we made it past this and into other\n> > more advanced brokenness).\n> \n> I see that Mark has also just enabled --with-llvm on some POWER Linux\n> animals, and they have failed in various ways. The failures are\n> strangely lacking in detail. It seems we didn't have coverage before,\n> and I recall that there were definitely versions of LLVM that *didn't*\n> work for our usage in the past, which I'll need to dredge out of the\n> archives. I will try to get onto a cfarm POWER machine and see if I\n> can reproduce that, before and after these commits, and whose bug is\n> it etc.\n\nI'm quite sure that jiting did pass on ppc64 at some point.\n\n\n> I doubt I can get anywhere near an s390x though, and we definitely had\n> pre-existing problems on that arch.\n\nIMO an LLVM bug, rather than a postgres bug, but I guess it's all a matter of\nperspective.\nhttps://github.com/llvm/llvm-project/issues/53009#issuecomment-1042748553\n\nI had made another bug report about this issue at some point, but I can't\nrefind it right now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 20 Oct 2023 15:12:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Sat, Oct 21, 2023 at 11:12 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-10-21 10:48:47 +1300, Thomas Munro wrote:\n> > On Thu, Oct 19, 2023 at 6:20 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I see that Mark has also just enabled --with-llvm on some POWER Linux\n> > animals, and they have failed in various ways. The failures are\n> > strangely lacking in detail. 
It seems we didn't have coverage before,\n> > and I recall that there were definitely versions of LLVM that *didn't*\n> > work for our usage in the past, which I'll need to dredge out of the\n> > archives. I will try to get onto a cfarm POWER machine and see if I\n> > can reproduce that, before and after these commits, and whose bug is\n> > it etc.\n>\n> I'm quite sure that jiting did pass on ppc64 at some point.\n\nYeah, I remember debugging it on EDB's POWER machine. First off, we\nknow that LLVM < 7 doesn't work for us on POWER, because:\n\nhttps://www.postgresql.org/message-id/CAEepm%3D39F_B3Ou8S3OrUw%2BhJEUP3p%3DwCu0ug-TTW67qKN53g3w%40mail.gmail.com\n\nThat was fixed:\n\nhttps://github.com/llvm/llvm-project/commit/a95b0df5eddbe7fa1e9f8fe0b1ff62427e1c0318\n\nSo I think that means that we'd first have to go through those animals\nand figure out which ones have older LLVM, and ignore those results --\nthey just can't use --with-llvm. Unfortunately there doesn't seem to\nbe any clue on the version from the paths used by OpenSUSE. Mark, do\nyou know?\n\n> > I doubt I can get anywhere near an s390x though, and we definitely had\n> > pre-existing problems on that arch.\n>\n> IMO an LLVM bug, rather than a postgres bug, but I guess it's all a matter of\n> perspective.\n> https://github.com/llvm/llvm-project/issues/53009#issuecomment-1042748553\n\nAh, good to know about that. 
But there are also reports of crashes in\nreleased versions that manage to get past that ABI-wobble business:\n\nhttps://www.postgresql.org/message-id/flat/CAF1DzPXjpPxnsgySz2Zjm8d2dx7%3DJ070C%2BMQBFh%2B9NBNcBKCAg%40mail.gmail.com\n\n\n", "msg_date": "Sat, 21 Oct 2023 12:02:51 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Hi,\n\nOn 2023-10-21 12:02:51 +1300, Thomas Munro wrote:\n> On Sat, Oct 21, 2023 at 11:12 AM Andres Freund <andres@anarazel.de> wrote:\n> > > I doubt I can get anywhere near an s390x though, and we definitely had\n> > > pre-existing problems on that arch.\n> >\n> > IMO an LLVM bug, rather than a postgres bug, but I guess it's all a matter of\n> > perspective.\n> > https://github.com/llvm/llvm-project/issues/53009#issuecomment-1042748553\n> \n> Ah, good to know about that. But there are also reports of crashes in\n> released versions that manage to get past that ABI-wobble business:\n> \n> https://www.postgresql.org/message-id/flat/CAF1DzPXjpPxnsgySz2Zjm8d2dx7%3DJ070C%2BMQBFh%2B9NBNcBKCAg%40mail.gmail.com\n\nTrying to debug that now, using access to an s390x box provided by Mark...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 20 Oct 2023 16:07:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Sat, Oct 21, 2023 at 12:02 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Oct 21, 2023 at 11:12 AM Andres Freund <andres@anarazel.de> wrote:\n> > I'm quite sure that jiting did pass on ppc64 at some point.\n>\n> Yeah, I remember debugging it on EDB's POWER machine. 
First off, we\n> know that LLVM < 7 doesn't work for us on POWER, because:\n>\n> https://www.postgresql.org/message-id/CAEepm%3D39F_B3Ou8S3OrUw%2BhJEUP3p%3DwCu0ug-TTW67qKN53g3w%40mail.gmail.com\n>\n> That was fixed:\n>\n> https://github.com/llvm/llvm-project/commit/a95b0df5eddbe7fa1e9f8fe0b1ff62427e1c0318\n>\n> So I think that means that we'd first have to go through those animals\n> and figure out which ones have older LLVM, and ignore those results --\n> they just can't use --with-llvm. Unfortunately there doesn't seem to\n> be any clue on the version from the paths used by OpenSUSE. Mark, do\n> you know?\n\nAdding Mark to this subthread. Concretely, could you please disable\n--with-llvm on any of those machines running LLVM < 7? And report\nwhat version any remaining animals are running? (It'd be nice if the\nbuild farm logged \"$LLVM_CONFIG --version\" somewhere.) One of them\nseems to have clang 5 which is a clue -- if the LLVM is also 5 it's\njust not going to work, as LLVM is one of those forwards-only projects\nthat doesn't back-patch fixes like that.\n\n\n", "msg_date": "Sat, 21 Oct 2023 13:45:31 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> (It'd be nice if the\n> build farm logged \"$LLVM_CONFIG --version\" somewhere.)\n\nIt's not really the buildfarm script's responsibility to do that,\nbut feel free to make configure do so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Oct 2023 21:45:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Hi,\n\nOn 2023-10-19 06:20:26 +1300, Thomas Munro wrote:\n> Interestingly, a new problem just showed up on the RHEL9 s390x\n> machine \"lora\", where a previously reported problem [1] apparently\n> re-appeared. 
It complains about incompatible layout, previously\n> blamed on mismatch between clang and LLVM versions.\n\nI've attached a patch revision that I spent the last couple hours working\non. It's very very roughly based on a patch Tom Stellard had written (which I\nthink a few rpm packages use). But instead of encoding details about specific\nlayout details, I made the code check if the data layout works and fall back\nto the cpu / features used for llvmjit_types.bc. This way it's not s390x\nspecific, future odd architecture behaviour would \"automatically\" be handled\nthe same.\n\nWith that at least the main regression tests pass on s390x, even with\njit_above_cost=0.\n\n\n> I can see that its clang is v15 from clues in the config log, but I don't\n> know which version of LLVM is being used. However, I see now that\n> --with-llvm was literally just turned on, so there is no reason to think\n> that this would have worked before or this work is relevant. Strange though\n> -- we must be able to JIT further than that on s390x because we have crash\n> reports in other threads (ie we made it past this and into other more\n> advanced brokenness).\n\nYou can avoid the borkedness by a) running on an older cpu b) adding\ncompilation flags to change the code generation target\n(e.g. -march=native). And some RPM packages have applied the patch by Tom\nStellard.\n\n> [1] https://www.postgresql.org/message-id/flat/20210319190047.7o4bwhbp5dzkqif3%40alap3.anarazel.de#ec51b488ca8eac8c603d91c0439d38b2\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 20 Oct 2023 23:08:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Sat, Oct 21, 2023 at 7:08 PM Andres Freund <andres@anarazel.de> wrote:\n> I've attached a patch revision that I spent the last couple hours working\n> on. It's very very roughly based on a patch Tom Stellard had written (which I\n> think a few rpm packages use). 
But instead of encoding details about specific\n> layout details, I made the code check if the data layout works and fall back\n> to the cpu / features used for llvmjit_types.bc. This way it's not s390x\n> specific, future odd architecture behaviour would \"automatically\" be handled\n> the same\n\nThe explanation makes sense and this seems like a solid plan to deal\nwith it. I didn't try on a s390x, but I tested locally on our master\nbranch with LLVM 7, 10, 17, 18, and then I hacked your patch to take\nthe fallback path as if a layout mismatch had been detected, and it\nworked fine:\n\n2023-10-22 11:49:55.663 NZDT [12000] DEBUG: detected CPU \"skylake\",\nwith features \"...\", resulting in layout\n\"e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128\"\n2023-10-22 11:49:55.664 NZDT [12000] DEBUG: detected CPU / features\nyield incompatible data layout, using values from module instead\n2023-10-22 11:49:55.664 NZDT [12000] DETAIL: module CPU \"x86-64\",\nfeatures \"...\", resulting in layout\n\"e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128\"\n\n+ To deal with that, check if data layouts match during JIT\ninitialization. 
If\n+ the runtime detected cpu / features result in a different layout,\ntry if the\n+ cpu/features recorded in in llvmjit_types.bc work.\n\ns|try |check |\ns| in in | in |\n\n+ errmsg_internal(\"could not\ndetermine working CPU / feature comination for JIT compilation\"),\n\ns|comination|combination|\ns| / |/|g\n\n\n", "msg_date": "Sun, 22 Oct 2023 12:16:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Sat, Oct 21, 2023 at 2:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > (It'd be nice if the\n> > build farm logged \"$LLVM_CONFIG --version\" somewhere.)\n>\n> It's not really the buildfarm script's responsibility to do that,\n> but feel free to make configure do so.\n\nDone, copying the example of how we do it for perl and various other things.\n\n\n", "msg_date": "Sun, 22 Oct 2023 14:44:31 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Sun, Oct 22, 2023 at 2:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Oct 21, 2023 at 2:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > (It'd be nice if the\n> > > build farm logged \"$LLVM_CONFIG --version\" somewhere.)\n> >\n> > It's not really the buildfarm script's responsibility to do that,\n> > but feel free to make configure do so.\n>\n> Done, copying the example of how we do it for perl and various other things.\n\nBuild farm measles update:\n\nWith that we can see that nicator (LLVM 15 on POWER) is green. We can\nsee that cavefish (LLVM 6 on POWER) is red as expected. We can also\nsee that bonito (LLVM 7 on POWER) is red, so my earlier theory that\nthis might be due to the known bug we got fixed in LLVM 7 is not\nenough. 
Maybe there are other things fixed on POWER somewhere between\nthose LLVM versions? I suspect it'll be hard to figure out without\ndebug builds and backtraces.\n\nOne thing is definitely our fault, though. xenodermus shows failures\non REL_12_STABLE and REL_13_STABLE, using debug LLVM 6 on x86. I\ncouldn't reproduce this locally on (newer) debug LLVM, so I bugged\nAndres for access to the host/libraries and chased it down. There is\nsome type punning for a function parameter REL_13_STABLE and earlier,\nremoved by Andres in REL_14_STABLE, and when I back-patched my\nrefactoring I effectively back-patched a few changes from his commit\ndf99ddc70b97 that removed the type punning, but I should have brought\none more line from that commit to remove another trace of it. See\nattached.", "msg_date": "Mon, 23 Oct 2023 13:15:04 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Mon, Oct 23, 2023 at 01:15:04PM +1300, Thomas Munro wrote:\n> On Sun, Oct 22, 2023 at 2:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Sat, Oct 21, 2023 at 2:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > > (It'd be nice if the\n> > > > build farm logged \"$LLVM_CONFIG --version\" somewhere.)\n> > >\n> > > It's not really the buildfarm script's responsibility to do that,\n> > > but feel free to make configure do so.\n> >\n> > Done, copying the example of how we do it for perl and various other things.\n> \n> Build farm measles update:\n> \n> With that we can see that nicator (LLVM 15 on POWER) is green. We can\n> see that cavefish (LLVM 6 on POWER) is red as expected. We can also\n> see that bonito (LLVM 7 on POWER) is red, so my earlier theory that\n> this might be due to the known bug we got fixed in LLVM 7 is not\n> enough. Maybe there are other things fixed on POWER somewhere between\n> those LLVM versions? 
I suspect it'll be hard to figure out without\n> debug builds and backtraces.\n\nI haven't gotten around to disabling llvm on any of my animals with llvm\n< 7 yet. Do you still want to hold on that?\n\n> One thing is definitely our fault, though. xenodermus shows failures\n> on REL_12_STABLE and REL_13_STABLE, using debug LLVM 6 on x86. I\n> couldn't reproduce this locally on (newer) debug LLVM, so I bugged\n> Andres for access to the host/libraries and chased it down. There is\n> some type punning for a function parameter REL_13_STABLE and earlier,\n> removed by Andres in REL_14_STABLE, and when I back-patched my\n> refactoring I effectively back-patched a few changes from his commit\n> df99ddc70b97 that removed the type punning, but I should have brought\n> one more line from that commit to remove another trace of it. See\n> attached.\n\nHere are my list of llvm-config versions and distros for s390x and POWER\n(didn't see any issues on aarch64 but I grabbed all the info at the same\ntime.)\n\ns390x:\n\nbranta: 10.0.0 Ubuntu 20.04.4 LTS\ncotinga: 6.0.0 Ubuntu 18.04.6 LTS\nperch: 6.0.0 Ubuntu 18.04.6 LTS\nsarus: 14.0.0 Ubuntu 22.04.1 LTS\naracari: 15.0.7 Red Hat Enterprise Linux 8.6\npipit: 15.0.7 Red Hat Enterprise Linux 8.6\nlora: 15.0.7 Red Hat Enterprise Linux 9.2\nmamushi: 15.0.7 Red Hat Enterprise Linux 9.2\npike: 11.0.1 Debian GNU/Linux 11\nrinkhals: 11.0.1 Debian GNU/Linux 11\n\n\nPOWER:\n\nbonito: 7.0.1 Fedora 29\ncavefish: 6.0.0 Ubuntu 18.04.6 LTS\ndemoiselle: 5.0.1 openSUSE Leap 15.0\nelasmobranch: 5.0.1 openSUSE Leap 15.0\nbabbler: 15.0.7 AlmaLinux 8.8\npytilia: 15.0.7 AlmaLinux 8.8\nnicator: 15.0.7 AlmaLinux 9.2\ntwinspot: 15.0.7 AlmaLinux 9.2\ncascabel: 11.0.1 Debian GNU/Linux 11\nhabu: 16.0.6 Fedora Linux 38\nkingsnake: 16.0.6 Fedora Linux 38\nkrait: CentOS 16.0.6 Stream 8\nlancehead: CentOS 16.0.6 Stream 8\n\n\naarch64:\n\nboiga: 14.0.5 Fedora Linux 36\ncorzo: 14.0.5 Fedora Linux 36\ndesman: 16.0.6 Fedora Linux 38\nmotmot: 16.0.6 Fedora Linux 
38\nwhinchat: 11.0.1 Debian GNU/Linux 11\njackdaw: 11.0.1 Debian GNU/Linux 11\nblackneck: 7.0.1 Debian GNU/Linux 10\nalimoche: 7.0.1 Debian GNU/Linux 10\nbulbul: 15.0.7 AlmaLinux 8.8\nbroadbill: 15.0.7 AlmaLinux 8.8\noystercatcher: 15.0.7 AlmaLinux 9.2\npotoo: 15.0.7 AlmaLinux 9.2\nwhiting: 6.0.0 Ubuntu 18.04.5 LTS\nvimba: 6.0.0 Ubuntu 18.04.5 LTS\nsplitfin: 10.0.0 Ubuntu 20.04.6 LTS\nrudd: 10.0.0 Ubuntu 20.04.6 LTS\nturbot: 14.0.0 Ubuntu 22.04.3 LTS\nshiner: 14.0.0 Ubuntu 22.04.3 LTS\nziege: 16.0.6 CentOS Stream 8\nchevrotain: 11.0.1 Debian GNU/Linux 11\n\nRegards,\nMark\n\n\n", "msg_date": "Mon, 23 Oct 2023 08:27:23 -0700", "msg_from": "Mark Wong <markwkm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Tue, Oct 24, 2023 at 4:27 AM Mark Wong <markwkm@gmail.com> wrote:\n> I haven't gotten around to disabling llvm on any of my animals with llvm\n> < 7 yet. Do you still want to hold on that?\n\nYes, please disable --with-llvm on s390x and POWER animals with LLVM <\n7 (see below). Also, you have a bunch of machines with LLVM 16 that\nare failing to compile on REL_11_STABLE. That is expected, because we\nagreed not to back-patch the LLVM 16 API changes into REL_11_STABLE:\n\n> kingsnake: 16.0.6 Fedora Linux 38\n> krait: CentOS 16.0.6 Stream 8\n> lancehead: CentOS 16.0.6 Stream 8\n\nThese POWER machines fail as expected, and it's unfixable:\n\n> elasmobranch: 5.0.1 openSUSE Leap 15.0\n> demoiselle: 5.0.1 openSUSE Leap 15.0\n> cavefish: 6.0.0 Ubuntu 18.04.6 LTS\n\n(Well, we could in theory supply our own fixed\nllvm::orc::createLocalIndirectStubsManagerBuilder() function to hide\nthe broken one in LLVM <= 6, but that way lies madness IMHO. 
An LTS\ndistro that cares could look into back-patching LLVM's fixes, but for\nus, let us focus on current software.)\n\nThis POWER animal fails, unexpectedly to me:\n\n> bonito: 7.0.1 Fedora 29\n\nWe could try to chase that down, or we could rejoice that at least it\nworks on current release. It must begin working somewhere between 7\nand 11, but when I checked which LLVM releases I could easily install\non eg cascabel (if I could get access) using the repo at apt.llvm.org,\nI saw that they don't even have anything older than 11. So someone\nwith access who wants to figure this out might have many days or weeks\nof compiling ahead of them.\n\nThese POWER animals are passing, as expected:\n\n> cascabel: 11.0.1 Debian GNU/Linux 11\n> babbler: 15.0.7 AlmaLinux 8.8\n> pytilia: 15.0.7 AlmaLinux 8.8\n> nicator: 15.0.7 AlmaLinux 9.2\n> twinspot: 15.0.7 AlmaLinux 9.2\n> habu: 16.0.6 Fedora Linux 38\n> kingsnake: 16.0.6 Fedora Linux 38\n> krait: CentOS 16.0.6 Stream 8\n> lancehead: CentOS 16.0.6 Stream 8\n\nThese s390x animals are passing:\n\n> branta: 10.0.0 Ubuntu 20.04.4 LTS\n> pike: 11.0.1 Debian GNU/Linux 11\n> rinkhals: 11.0.1 Debian GNU/Linux 11\n> sarus: 14.0.0 Ubuntu 22.04.1 LTS\n\nThese s390x animals are failing, but don't show the layout complaint.\nI can see that LLVM 6 also lacked a case for s390x in\nllvm::orc::createLocalIndirectStubsManagerBuilder(), the thing that\nwas fixed in 7 with the addition of a default case. Therefore these\npresumably fail just like old LLVM on POWER, and it's unfixable. 
So I\nsuggest turning off --with-llvm on these two:\n\n> cotinga: 6.0.0 Ubuntu 18.04.6 LTS\n> perch: 6.0.0 Ubuntu 18.04.6 LTS\n\nThese s390x animals are failing with the mismatched layout problem,\nwhich should be fixed by Andres's patch to tolerate the changing\nz12/z13 ABI thing by falling back to whatever clang picked (at a cost\nof not using all the features of your newer CPU, unless you explicitly\ntell clang to target it):\n\n> aracari: 15.0.7 Red Hat Enterprise Linux 8.6\n> pipit: 15.0.7 Red Hat Enterprise Linux 8.6\n> lora: 15.0.7 Red Hat Enterprise Linux 9.2\n\nThis s390x animal doesn't actually have --with-llvm enabled so it\npasses, but surely it'd be just like lora:\n\n> mamushi: 15.0.7 Red Hat Enterprise Linux 9.2\n\n\n", "msg_date": "Tue, 24 Oct 2023 10:17:22 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Tue, Oct 24, 2023 at 10:17:22AM +1300, Thomas Munro wrote:\n> On Tue, Oct 24, 2023 at 4:27 AM Mark Wong <markwkm@gmail.com> wrote:\n> > I haven't gotten around to disabling llvm on any of my animals with llvm\n> > < 7 yet. Do you still want to hold on that?\n> \n> Yes, please disable --with-llvm on s390x and POWER animals with LLVM <\n> 7 (see below). Also, you have a bunch of machines with LLVM 16 that\n> are failing to compile on REL_11_STABLE. 
That is expected, because we\n> agreed not to back-patch the LLVM 16 API changes into REL_11_STABLE:\n> \n> > kingsnake: 16.0.6 Fedora Linux 38\n> > krait: CentOS 16.0.6 Stream 8\n> > lancehead: CentOS 16.0.6 Stream 8\n\nI should have updated these to not use --with-llvm for REL_11_STABLE.\n\n> These POWER machines fail as expected, and it's unfixable:\n> \n> > elasmobranch: 5.0.1 openSUSE Leap 15.0\n> > demoiselle: 5.0.1 openSUSE Leap 15.0\n> > cavefish: 6.0.0 Ubuntu 18.04.6 LTS\n\nThese should now be updated to not use --with-llvm at all.\n\n> These s390x animals are failing, but don't show the layout complaint.\n> I can see that LLVM 6 also lacked a case for s390x in\n> llvm::orc::createLocalIndirectStubsManagerBuilder(), the thing that\n> was fixed in 7 with the addition of a default case. Therefore these\n> presumably fail just like old LLVM on POWER, and it's unfixable. So I\n> suggest turning off --with-llvm on these two:\n> \n> > cotinga: 6.0.0 Ubuntu 18.04.6 LTS\n> > perch: 6.0.0 Ubuntu 18.04.6 LTS\n\nOk, I should have removed --with-llvm here too.\n\n> This s390x animal doesn't actually have --with-llvm enabled so it\n> passes, but surely it'd be just like lora:\n> \n> > mamushi: 15.0.7 Red Hat Enterprise Linux 9.2\n\nOops, I think I added it now.\n\n\nI think I made all the recommended changes, and trimmed out the lines\nwhere I didn't need to do anything. :)\n\nAndres pointed out to me that my animals aren't set up to collect core\nfile so I'm also trying to update that too...\n\nRegards,\nMark\n\n\n", "msg_date": "Mon, 23 Oct 2023 16:05:44 -0700", "msg_from": "Mark Wong <markwkm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Hi,\n\nOn 2023-10-24 10:17:22 +1300, Thomas Munro wrote:\n> This POWER animal fails, unexpectedly to me:\n> \n> > bonito: 7.0.1 Fedora 29\n> \n> We could try to chase that down, or we could rejoice that at least it\n> works on current release. 
It must begin working somewhere between 7\n> and 11, but when I checked which LLVM releases I could easily install\n> on eg cascabel (if I could get access) using the repo at apt.llvm.org,\n> I saw that they don't even have anything older than 11. So someone\n> with access who wants to figure this out might have many days or weeks\n> of compiling ahead of them.\n\nI could reproduce the failure on bonito. The stack trace is:\n#0 0x00007fffb83541e8 in raise () from /lib64/libc.so.6\n#1 0x00007fffb833448c in abort () from /lib64/libc.so.6\n#2 0x00007fff9c68dd78 in std::__replacement_assert (_file=<optimized out>, _line=<optimized out>, _function=<optimized out>, _condition=<optimized out>)\n at /usr/include/c++/8/ppc64le-redhat-linux/bits/c++config.h:447\n#3 0x00007fff9df90838 in std::unique_ptr<llvm::orc::JITCompileCallbackManager, std::default_delete<llvm::orc::JITCompileCallbackManager> >::operator* (\n this=0x1b946cb8) at ../include/llvm/Support/MemAlloc.h:29\n#4 llvm::OrcCBindingsStack::OrcCBindingsStack(llvm::TargetMachine&, std::function<std::unique_ptr<llvm::orc::IndirectStubsManager, std::default_delete<llvm::orc::IndirectStubsManager> > ()>) (this=0x1b946be0, TM=..., IndirectStubsMgrBuilder=...) 
at ../lib/ExecutionEngine/Orc/OrcCBindingsStack.h:242\n#5 0x00007fff9df90940 in LLVMOrcCreateInstance (TM=0x1b933ae0) at /usr/include/c++/8/bits/move.h:182\n#6 0x00007fffa0618f8c in llvm_session_initialize () at /home/andres/src/postgres/src/backend/jit/llvm/llvmjit.c:981\n#7 0x00007fffa06179a8 in llvm_create_context (jitFlags=25) at /home/andres/src/postgres/src/backend/jit/llvm/llvmjit.c:219\n#8 0x00007fffa0626cbc in llvm_compile_expr (state=0x1b8ef390) at /home/andres/src/postgres/src/backend/jit/llvm/llvmjit_expr.c:142\n#9 0x0000000010a76fc8 in jit_compile_expr (state=0x1b8ef390) at /home/andres/src/postgres/src/backend/jit/jit.c:177\n#10 0x0000000010404550 in ExecReadyExpr (state=0x1b8ef390) at /home/andres/src/postgres/src/backend/executor/execExpr.c:875\n\nwith this assertion message printed:\n/usr/include/c++/8/bits/unique_ptr.h:328: typename std::add_lvalue_reference<_Tp>::type std::unique_ptr<_Tp, Dp>::operator*() const [with Tp = llvm::orc::JITCompileCallbackManager; _Dp = std::default_delete<llvm::orc::JITCompileCallbackManager>; typename std::add_lvalue_reference<_Tp>::type = llvm::orc::JITCompileCallbackManager&]: Assertion 'get() != pointer()' failed.\n\n\nI wanted to use a debug build to investigate in more detail, but bonito is a\nsmall VM. Thus I built llvm 7 on a more powerful gcc cfarm machine, running on\nAlmaLinux 9.2. The problem doesn't reproduce there.\n\nGiven the crash in some c++ standard library code, that the fc29 patches to\nllvm look harmless, that building/using llvm 7 on a newer distro does not show\nissues on PPC, it seems likely that this is a compiler / standard library\nissue.\n\nFC 29 is well out of support, so I don't think it makes sense to invest any\nfurther time in this. 
Personally, I don't think it's useful to have years old\nfedora in the buildfarm...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Oct 2023 19:40:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> FC 29 is well out of support, so I don't think it makes sense to invest any\n> further time in this. Personally, I don't think it's useful to have years old\n> fedora in the buildfarm...\n\n+1. It's good to test old LTS distros, but Fedora releases have a\nshort shelf life by design.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Oct 2023 22:47:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" }, { "msg_contents": "On Mon, Oct 23, 2023 at 10:47:24PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > FC 29 is well out of support, so I don't think it makes sense to invest any\n> > further time in this. Personally, I don't think it's useful to have years old\n> > fedora in the buildfarm...\n> \n> +1. It's good to test old LTS distros, but Fedora releases have a\n> short shelf life by design.\n\nI'll start retiring those old Fedora ones I have. :)\n\nRegards,\nMark\n\n\n", "msg_date": "Tue, 24 Oct 2023 09:53:31 -0700", "msg_from": "Mark Wong <markwkm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: LLVM 16 (opaque pointers)" } ]
[ { "msg_contents": "Hi,\n\nWhile working on [1], I thought there seems to be unnecessary #include \n\"pg_getopt.h\".\ngetopt_long.h has already included pg_getopt.h, but some files include \nboth getopt.h and getopt_long.h.\n\n\n[1] \nhttps://www.postgresql.org/message-id/flat/d660ef741ce3d82f3b4283f1cafd576c@oss.nttdata.com\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Mon, 22 May 2023 18:48:37 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "unnecessary #include \"pg_getopt.h\"?" }, { "msg_contents": "On Mon, May 22, 2023 at 06:48:37PM +0900, torikoshia wrote:\n> While working on [1], I thought there seems to be unnecessary #include\n> \"pg_getopt.h\".\n> getopt_long.h has already included pg_getopt.h, but some files include both\n> getopt.h and getopt_long.h.\n\nRight, these could be removed. I am not seeing other places in the\ntree that include both. That's always nice to clean up.\n--\nMichael", "msg_date": "Wed, 24 May 2023 09:59:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: unnecessary #include \"pg_getopt.h\"?" }, { "msg_contents": "Hi,\n\nOn 2023-05-24 09:59:18 +0900, Michael Paquier wrote:\n> On Mon, May 22, 2023 at 06:48:37PM +0900, torikoshia wrote:\n> > While working on [1], I thought there seems to be unnecessary #include\n> > \"pg_getopt.h\".\n> > getopt_long.h has already included pg_getopt.h, but some files include both\n> > getopt.h and getopt_long.h.\n> \n> Right, these could be removed. I am not seeing other places in the\n> tree that include both. That's always nice to clean up.\n\nThis feels more like a matter of taste to me than anything. At least some of\nthe files touched in the patch use optarg, opterr etc. - which are declared in\npg_getopt.h. 
Making it reasonable to directly include pg_getopt.h.\n\nI don't really see a need to change anything here?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 May 2023 18:37:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: unnecessary #include \"pg_getopt.h\"?" }, { "msg_contents": "On Tue, May 23, 2023 at 06:37:59PM -0700, Andres Freund wrote:\n> This feels more like a matter of taste to me than anything.\n\nYup, it is.\n\n> At least some of\n> the files touched in the patch use optarg, opterr etc. - which are declared in\n> pg_getopt.h. Making it reasonable to directly include pg_getopt.h.\n\ngetopt_long.h is included in 21 places of src/bin/, with all of them\ntouching optarg, while only four of them include pg_getopt.h. So\nremoving them as suggested makes sense. I agree that it is a matter of\ntaste as well; still, there is also a consistency argument.\n--\nMichael", "msg_date": "Thu, 25 May 2023 15:59:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: unnecessary #include \"pg_getopt.h\"?" } ]
[ { "msg_contents": "Hello,\n\nThe way that pgbench handled SIGINT changed in\n1d468b9ad81b9139b4a0b16b416c3597925af4b0. Unfortunately this had a\ncouple of unintended consequences, at least from what I can tell[1].\n\n- CTRL-C no longer stops the program unless the right point in pgbench\n execution is hit\n- pgbench no longer exits with a non-zero exit code\n\nAn easy reproduction of these problems is to run with a large scale\nfactor like: pgbench -i -s 500000. Then try to CTRL-C the program.\n\nThe attached set of patches fixes this problem by allowing callers of\nsetup_cancel_handler() to attach a post-PQcancel callback. In this case,\nwe just call _exit(2). In addition, I noticed that psql had an EXIT_USER\nconstant, so I moved the common exit codes from src/bin/psql/settings.h\nto src/include/fe_utils/exit_codes.h and made pgbench exit with\nEXIT_USER.\n\n[1]: https://www.postgresql.org/message-id/alpine.DEB.2.21.1910311939430.27369@lancre\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 22 May 2023 10:02:02 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Make pgbench exit on SIGINT more reliably" }, { "msg_contents": "Here is a v2 that handles the Windows case that I seemingly missed in my\nfirst readthrough of this code.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 22 May 2023 12:31:54 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Make pgbench exit on SIGINT more reliably" }, { "msg_contents": "On Mon, May 22, 2023 at 10:02:02AM -0500, Tristan Partin wrote:\n> The way that pgbench handled SIGINT changed in\n> 1d468b9ad81b9139b4a0b16b416c3597925af4b0. 
Unfortunately this had a\n> couple of unintended consequences, at least from what I can tell[1].\n> \n> - CTRL-C no longer stops the program unless the right point in pgbench\n> execution is hit\n> - pgbench no longer exits with a non-zero exit code\n> \n> An easy reproduction of these problems is to run with a large scale\n> factor like: pgbench -i -s 500000. Then try to CTRL-C the program.\n\nThis comes from the code path where the data is generated client-side,\nand where the current CancelRequested may not be that responsive,\nisn't it?\n--\nMichael", "msg_date": "Wed, 24 May 2023 09:31:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make pgbench exit on SIGINT more reliably" }, { "msg_contents": "On Tue May 23, 2023 at 7:31 PM CDT, Michael Paquier wrote:\n> On Mon, May 22, 2023 at 10:02:02AM -0500, Tristan Partin wrote:\n> > The way that pgbench handled SIGINT changed in\n> > 1d468b9ad81b9139b4a0b16b416c3597925af4b0. Unfortunately this had a\n> > couple of unintended consequences, at least from what I can tell[1].\n> > \n> > - CTRL-C no longer stops the program unless the right point in pgbench\n> > execution is hit\n> > - pgbench no longer exits with a non-zero exit code\n> > \n> > An easy reproduction of these problems is to run with a large scale\n> > factor like: pgbench -i -s 500000. Then try to CTRL-C the program.\n>\n> This comes from the code path where the data is generated client-side,\n> and where the current CancelRequested may not be that responsive,\n> isn't it?\n\nIt comes from the fact that CancelRequested is only checked within the\ngeneration of the pgbench_accounts table, but with a large enough scale\nfactor or enough latency, say cross-continent communication, generating\nthe other tables can be time consuming too. 
But, yes you are more likely\nto run into this issue when generating client-side data.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 24 May 2023 08:58:46 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Make pgbench exit on SIGINT more reliably" }, { "msg_contents": "On Tue May 23, 2023 at 7:31 PM CDT, Michael Paquier wrote:\n> On Mon, May 22, 2023 at 10:02:02AM -0500, Tristan Partin wrote:\n> > The way that pgbench handled SIGINT changed in\n> > 1d468b9ad81b9139b4a0b16b416c3597925af4b0. Unfortunately this had a\n> > couple of unintended consequences, at least from what I can tell[1].\n> > \n> > - CTRL-C no longer stops the program unless the right point in pgbench\n> > execution is hit\n> > - pgbench no longer exits with a non-zero exit code\n> > \n> > An easy reproduction of these problems is to run with a large scale\n> > factor like: pgbench -i -s 500000. Then try to CTRL-C the program.\n>\n> This comes from the code path where the data is generated client-side,\n> and where the current CancelRequested may not be that responsive,\n> isn't it?\n\nYes, that is exactly it. There is only a single check for\nCancelRequested in the client-side data generation at the moment.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 30 May 2023 09:35:27 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Make pgbench exit on SIGINT more reliably" }, { "msg_contents": "Did not even remember sending an original reply. 
Disregard.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 30 May 2023 09:42:37 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Make pgbench exit on SIGINT more reliably" }, { "msg_contents": "On Wed, 24 May 2023 08:58:46 -0500\n\"Tristan Partin\" <tristan@neon.tech> wrote:\n\n> On Tue May 23, 2023 at 7:31 PM CDT, Michael Paquier wrote:\n> > On Mon, May 22, 2023 at 10:02:02AM -0500, Tristan Partin wrote:\n> > > The way that pgbench handled SIGINT changed in\n> > > 1d468b9ad81b9139b4a0b16b416c3597925af4b0. Unfortunately this had a\n> > > couple of unintended consequences, at least from what I can tell[1].\n> > > \n> > > - CTRL-C no longer stops the program unless the right point in pgbench\n> > > execution is hit \n> > > - pgbench no longer exits with a non-zero exit code\n> > > \n> > > An easy reproduction of these problems is to run with a large scale\n> > > factor like: pgbench -i -s 500000. Then try to CTRL-C the program.\n> >\n> > This comes from the code path where the data is generated client-side,\n> > and where the current CancelRequested may not be that responsive,\n> > isn't it?\n> \n> It comes from the fact that CancelRequested is only checked within the\n> generation of the pgbench_accounts table, but with a large enough scale\n> factor or enough latency, say cross-continent communication, generating\n> the other tables can be time consuming too. But, yes you are more likely\n> to run into this issue when generating client-side data.\n\nIf I understand correctly, your patch allows to exit pgbench immediately\nduring a client-side data generation even while the tables other than\npgbench_accounts are processed. It can be useful when the scale factor\nis large. 
However, the same purpose seems to be achieved by your other patch [1].\nIs the latter your latest proposal, replacing the patch in this thread?\n\n[1] https://www.postgresql.org/message-id/flat/CSTU5P82ONZ1.19XFUGHMXHBRY%40c3po\n\nAlso, your proposal changes the exit code when pgbench is cancelled by\nSIGINT. I think this is nice because it will make the result of the\ninitialization clearer and easier to understand.\n\nCurrently, pgbench returns 0 when the initialization is cancelled while populating\npgbench_branches or pgbench_tellers, but the resultant pgbench_accounts has only\none row, which is an inconsistent result. Returning a specific value for a SIGINT\ncancellation lets the user know about it explicitly. In addition, it would be\nbetter to also output an informative error message.\n\nFor the above purpose, the patch moved the exit codes of psql to fe_utils to be\nshared. However, I am not sure this is a good way. Each frontend tool may have its\nown exit codes; for example, \"2\" means \"bad connection\" in psql [2], but \"errors\nduring the run\" in pgbench [3].
So, I think it is better to define them separately.\n\n[2] https://www.postgresql.org/docs/current/app-psql.html#id-1.9.4.20.7\n[3] https://www.postgresql.org/docs/current/pgbench.html#id=id-1.9.4.11.7\n \nRegards,\nYugo Nagata\n\n> -- \n> Tristan Partin\n> Neon (https://neon.tech)\n> \n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 19 Jun 2023 22:39:37 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Make pgbench exit on SIGINT more reliably" }, { "msg_contents": "On Mon Jun 19, 2023 at 6:39 AM PDT, Yugo NAGATA wrote:\n> On Wed, 24 May 2023 08:58:46 -0500\n> \"Tristan Partin\" <tristan@neon.tech> wrote:\n>\n> > On Tue May 23, 2023 at 7:31 PM CDT, Michael Paquier wrote:\n> > > On Mon, May 22, 2023 at 10:02:02AM -0500, Tristan Partin wrote:\n> > > > The way that pgbench handled SIGINT changed in\n> > > > 1d468b9ad81b9139b4a0b16b416c3597925af4b0. Unfortunately this had a\n> > > > couple of unintended consequences, at least from what I can tell[1].\n> > > > \n> > > > - CTRL-C no longer stops the program unless the right point in pgbench\n> > > > execution is hit \n> > > > - pgbench no longer exits with a non-zero exit code\n> > > > \n> > > > An easy reproduction of these problems is to run with a large scale\n> > > > factor like: pgbench -i -s 500000. Then try to CTRL-C the program.\n> > >\n> > > This comes from the code path where the data is generated client-side,\n> > > and where the current CancelRequested may not be that responsive,\n> > > isn't it?\n> > \n> > It comes from the fact that CancelRequested is only checked within the\n> > generation of the pgbench_accounts table, but with a large enough scale\n> > factor or enough latency, say cross-continent communication, generating\n> > the other tables can be time consuming too. 
But, yes you are more likely\n> > to run into this issue when generating client-side data.\n>\n> If I understand correctly, your patch allows to exit pgbench immediately\n> during a client-side data generation even while the tables other than\n> pgbench_accounts are processed. It can be useful when the scale factor\n> is large. However, the same purpose seems to be achieved by you other patch [1].\n> Is the latter your latest proposal that replaces the patch in this thread?ad?\n>\n> [1] https://www.postgresql.org/message-id/flat/CSTU5P82ONZ1.19XFUGHMXHBRY%40c3po\n\nThe other patch does not replace this one. They are entirely separate.\n\n> Also, your proposal includes to change the exit code when pgbench is cancelled by\n> SIGINT. I think it is nice because this will make it easy to understand and clarify \n> the result of the initialization. \n>\n> Currently, pgbench returns 0 when the initialization is cancelled while populating\n> pgbench_branches or pgbench_tellers, but the resultant pgbench_accounts has only\n> one row, which is an inconsistent result. Returning the specific value for SIGINT\n> cancel can let user know it explicitly. In addition, it seems better if an error\n> message to inform would be output. \n>\n> For the above purpose, the patch moved exit codes of psql to fe_utils to be shared.\n> However, I am not sure this is good way. Each frontend-tool may have it own exit code,\n> for example. \"2\" means \"bad connection\" in psql [2], but \"errors during the run\" in\n> pgbench [3]. So, I think it is better to define them separately.\n>\n> [2] https://www.postgresql.org/docs/current/app-psql.html#id-1.9.4.20.7\n> [3] https://www.postgresql.org/docs/current/pgbench.html#id=id-1.9.4.11.7\n\nI can change it. I just figured that sharing exit code definitions\nwould've been preferrable. 
I will post a v3 some time soon which removes\nthat patch.\n\nThanks for your review :).\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 19 Jun 2023 16:49:05 -0700", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Make pgbench exit on SIGINT more reliably" }, { "msg_contents": "On Mon, 19 Jun 2023 16:49:05 -0700\n\"Tristan Partin\" <tristan@neon.tech> wrote:\n\n> On Mon Jun 19, 2023 at 6:39 AM PDT, Yugo NAGATA wrote:\n> > On Wed, 24 May 2023 08:58:46 -0500\n> > \"Tristan Partin\" <tristan@neon.tech> wrote:\n> >\n> > > On Tue May 23, 2023 at 7:31 PM CDT, Michael Paquier wrote:\n> > > > On Mon, May 22, 2023 at 10:02:02AM -0500, Tristan Partin wrote:\n> > > > > The way that pgbench handled SIGINT changed in\n> > > > > 1d468b9ad81b9139b4a0b16b416c3597925af4b0. Unfortunately this had a\n> > > > > couple of unintended consequences, at least from what I can tell[1].\n> > > > > \n> > > > > - CTRL-C no longer stops the program unless the right point in pgbench\n> > > > > execution is hit \n> > > > > - pgbench no longer exits with a non-zero exit code\n> > > > > \n> > > > > An easy reproduction of these problems is to run with a large scale\n> > > > > factor like: pgbench -i -s 500000. Then try to CTRL-C the program.\n> > > >\n> > > > This comes from the code path where the data is generated client-side,\n> > > > and where the current CancelRequested may not be that responsive,\n> > > > isn't it?\n> > > \n> > > It comes from the fact that CancelRequested is only checked within the\n> > > generation of the pgbench_accounts table, but with a large enough scale\n> > > factor or enough latency, say cross-continent communication, generating\n> > > the other tables can be time consuming too. 
But, yes you are more likely\n> > > to run into this issue when generating client-side data.\n> >\n> > If I understand correctly, your patch allows pgbench to exit immediately\n> > during client-side data generation even while the tables other than\n> > pgbench_accounts are processed. It can be useful when the scale factor\n> > is large. However, the same purpose seems to be achieved by your other patch [1].\n> > Is the latter your latest proposal, replacing the patch in this thread?\n> >\n> > [1] https://www.postgresql.org/message-id/flat/CSTU5P82ONZ1.19XFUGHMXHBRY%40c3po\n> \n> The other patch does not replace this one. They are entirely separate.\n\nAfter applying the other patch, CancelRequested would be checked frequently enough\neven during initialization of pgbench_branches and pgbench_tellers, and that will\nallow the initialization step to be cancelled immediately even if the scale factor\nis high. So, is it unnecessary to modify setup_cancel_handler anyway?\n\nI think it would still be nice to introduce a new exit code for query cancellation\nseparately.\n\n> \n> > Also, your proposal changes the exit code when pgbench is cancelled by\n> > SIGINT. I think this is nice because it will make the result of the\n> > initialization clearer and easier to understand.\n> >\n> > Currently, pgbench returns 0 when the initialization is cancelled while populating\n> > pgbench_branches or pgbench_tellers, but the resultant pgbench_accounts has only\n> > one row, which is an inconsistent result. Returning a specific value for a SIGINT\n> > cancellation lets the user know about it explicitly. In addition, it would be\n> > better to also output an informative error message.\n> >\n> > For the above purpose, the patch moved the exit codes of psql to fe_utils to be\n> > shared. However, I am not sure this is a good way. Each frontend tool may have its\n> > own exit codes; for example, \"2\" means \"bad connection\" in psql [2], but \"errors\n> > during the run\" in pgbench [3].
So, I think it is better to define them separately.\n> >\n> > [2] https://www.postgresql.org/docs/current/app-psql.html#id-1.9.4.20.7\n> > [3] https://www.postgresql.org/docs/current/pgbench.html#id=id-1.9.4.11.7\n> \n> I can change it. I just figured that sharing exit code definitions\n> would've been preferrable. I will post a v3 some time soon which removes\n> that patch.\n\nGreat!\n\n> \n> Thanks for your review :).\n> \n> -- \n> Tristan Partin\n> Neon (https://neon.tech)\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 22 Jun 2023 14:58:14 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Make pgbench exit on SIGINT more reliably" }, { "msg_contents": "On Thu, Jun 22, 2023 at 02:58:14PM +0900, Yugo NAGATA wrote:\n> On Mon, 19 Jun 2023 16:49:05 -0700\n> \"Tristan Partin\" <tristan@neon.tech> wrote:\n>> On Mon Jun 19, 2023 at 6:39 AM PDT, Yugo NAGATA wrote:\n>>> [1] https://www.postgresql.org/message-id/flat/CSTU5P82ONZ1.19XFUGHMXHBRY%40c3po\n>> \n>> The other patch does not replace this one. They are entirely separate.\n> \n> After applying the other patch, CancelRequested would be checked enough frequently\n> even during initialization of pgbench_branches and pgbench_tellers, and it will\n> allow the initialization step to be cancelled immediately even if the scale factor\n> is high. So, is it unnecessary to modify setup_cancel_handler anyway?\n> \n> I think it would be still nice to introduce a new exit code for query cancel separately.\n\nI have the same impression as Nagata-san while going again through\nthe proposal of this thread. 
COPY is more responsive to\ninterruptions, and is always going to have a much better performance \nthan individual or multi-value INSERTs for the bulk-loading of data,\nso we could just move on with what's proposed on the other thread and\nsolve the problem of this thread on top of improving the loading\nperformance.\n\nWhat are the benefits we gain with the proposal of this thread once we\nswitch to COPY as method for the client-side data generation? If\nthe argument is to be able switch to a different error code on\ncancellations for pgbench, that sounds a bit thin to me versus the\nargument of keeping the cancellation callback path as simple as\npossible.\n--\nMichael", "msg_date": "Fri, 23 Jun 2023 08:19:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make pgbench exit on SIGINT more reliably" }, { "msg_contents": "On Thu Jun 22, 2023 at 6:19 PM CDT, Michael Paquier wrote:\n> On Thu, Jun 22, 2023 at 02:58:14PM +0900, Yugo NAGATA wrote:\n> > On Mon, 19 Jun 2023 16:49:05 -0700\n> > \"Tristan Partin\" <tristan@neon.tech> wrote:\n> >> On Mon Jun 19, 2023 at 6:39 AM PDT, Yugo NAGATA wrote:\n> >>> [1] https://www.postgresql.org/message-id/flat/CSTU5P82ONZ1.19XFUGHMXHBRY%40c3po\n> >> \n> >> The other patch does not replace this one. They are entirely separate.\n> > \n> > After applying the other patch, CancelRequested would be checked enough frequently\n> > even during initialization of pgbench_branches and pgbench_tellers, and it will\n> > allow the initialization step to be cancelled immediately even if the scale factor\n> > is high. So, is it unnecessary to modify setup_cancel_handler anyway?\n> > \n> > I think it would be still nice to introduce a new exit code for query cancel separately.\n>\n> I have the same impression as Nagata-san while going again through\n> the proposal of this thread. 
COPY is more responsive to\n> interruptions, and is always going to have a much better performance \n> than individual or multi-value INSERTs for the bulk-loading of data,\n> so we could just move on with what's proposed on the other thread and\n> solve the problem of this thread on top of improving the loading\n> performance.\n>\n> What are the benefits we gain with the proposal of this thread once we\n> switch to COPY as method for the client-side data generation? If\n> the argument is to be able switch to a different error code on\n> cancellations for pgbench, that sounds a bit thin to me versus the\n> argument of keeping the cancellation callback path as simple as\n> possible.\n\nI would say there probably isn't much benefit if you think the polling\nfor CancelRequested is fine. Looking at the other patch, I think it\nmight be simple to add an exit code for SIGINT. But I think it might be\nbest to do it after that patch is merged. I added the other patch to the\ncommitfest for July. Holding off on this one.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 27 Jun 2023 09:42:05 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Make pgbench exit on SIGINT more reliably" }, { "msg_contents": "On Tue, Jun 27, 2023 at 09:42:05AM -0500, Tristan Partin wrote:\n> I would say there probably isn't much benefit if you think the polling\n> for CancelRequested is fine. Looking at the other patch, I think it\n> might be simple to add an exit code for SIGINT. But I think it might be\n> best to do it after that patch is merged. I added the other patch to the\n> commitfest for July. 
Holding off on this one.\n\nOkay, I am going to jump on the patch to switch to COPY, then.\n--\nMichael", "msg_date": "Tue, 11 Jul 2023 12:29:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make pgbench exit on SIGINT more reliably" }, { "msg_contents": "On Mon Jul 10, 2023 at 10:29 PM CDT, Michael Paquier wrote:\n> On Tue, Jun 27, 2023 at 09:42:05AM -0500, Tristan Partin wrote:\n> > I would say there probably isn't much benefit if you think the polling\n> > for CancelRequested is fine. Looking at the other patch, I think it\n> > might be simple to add an exit code for SIGINT. But I think it might be\n> > best to do it after that patch is merged. I added the other patch to the\n> > commitfest for July. Holding off on this one.\n>\n> Okay, I am going to jump on the patch to switch to COPY, then.\n\nAfter looking at the other patch some more, I don't think there is a\ngood way to reliably exit from SIGINT. The only time pgbench ever polls\nfor CancelRequested is in initPopulateTable(). So if you hit CTRL+C at\nany time during the execution of the other initalization steps, you're\nout of luck. The default initialization steps are dtgvp, so after g we\nhave vacuuming and primary keys. From what I can tell pgbench will\ncontinue to run these steps even after it has received a SIGINT.\n\nThis behavior seems unintended and odd to me. Though, maybe I am missing\nsomething.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 11 Jul 2023 09:58:11 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Make pgbench exit on SIGINT more reliably" } ]
[ { "msg_contents": "Hi,\n\nSome files were missing information from the c.h header.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 22 May 2023 10:10:31 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Add missing includes" }, { "msg_contents": "Hi,\n\nOn 2023-May-22, Tristan Partin wrote:\n\n> Some files were missing information from the c.h header.\n\nActually, these omissions are intentional, and we have bespoke handling\nfor this in our own header-checking scripts (src/tools/pginclude). I\nimagine this is documented somewhere, but ATM I can't remember where.\n(And if not, maybe that's something we should do.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 22 May 2023 17:16:57 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add missing includes" }, { "msg_contents": "On Mon May 22, 2023 at 10:16 AM CDT, Alvaro Herrera wrote:\n> > Some files were missing information from the c.h header.\n>\n> Actually, these omissions are intentional, and we have bespoke handling\n> for this in our own header-checking scripts (src/tools/pginclude). I\n> imagine this is documented somewhere, but ATM I can't remember where.\n> (And if not, maybe that's something we should do.)\n\nInteresting. Thanks for the information. Thought I had seen in a\ndifferent email thread that header files were supposed to include all\nthat they needed to. 
Anyways, ignore the patch :).\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 22 May 2023 10:27:37 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Add missing includes" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2023-May-22, Tristan Partin wrote:\n>> Some files were missing information from the c.h header.\n\n> Actually, these omissions are intentional, and we have bespoke handling\n> for this in our own header-checking scripts (src/tools/pginclude). I\n> imagine this is documented somewhere, but ATM I can't remember where.\n> (And if not, maybe that's something we should do.)\n\nYeah, the general policy is that .h files should not explicitly include\nc.h (nor postgres.h nor postgres_fe.h). Instead, .c files should include\nthe appropriate one of those three files first. This allows sharing of\n.h files more easily across frontend/backend/common environments.\n\nI'm not sure where this is documented either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 May 2023 11:28:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add missing includes" }, { "msg_contents": "On Mon May 22, 2023 at 10:28 AM CDT, Tom Lane wrote:\n> >> Some files were missing information from the c.h header.\n>\n> > Actually, these omissions are intentional, and we have bespoke handling\n> > for this in our own header-checking scripts (src/tools/pginclude). I\n> > imagine this is documented somewhere, but ATM I can't remember where.\n> > (And if not, maybe that's something we should do.)\n>\n> Yeah, the general policy is that .h files should not explicitly include\n> c.h (nor postgres.h nor postgres_fe.h). Instead, .c files should include\n> the appropriate one of those three files first. 
This allows sharing of\n> .h files more easily across frontend/backend/common environments.\n>\n> I'm not sure where this is documented either.\n\nThanks for the added context. I'll try to keep this in mind going\nforward.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 22 May 2023 10:32:36 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Add missing includes" }, { "msg_contents": "\"Tristan Partin\" <tristan@neon.tech> writes:\n> Interesting. Thanks for the information. Thought I had seen in a\n> different email thread that header files were supposed to include all\n> that they needed to. Anyways, ignore the patch :).\n\nThere is such a policy, but not with respect to those particular\nheaders. You might find src/tools/pginclude/headerscheck and\nsrc/tools/pginclude/cpluspluscheck to be interesting reading,\nas those are the tests we run to verify this policy.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 May 2023 11:34:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add missing includes" }, { "msg_contents": "On Mon, May 22, 2023 at 11:34:14AM -0400, Tom Lane wrote:\n> There is such a policy, but not with respect to those particular\n> headers. 
You might find src/tools/pginclude/headerscheck and\n> src/tools/pginclude/cpluspluscheck to be interesting reading,\n> as those are the tests we run to verify this policy.\n\nThe standalone compile policy is documented in\nsrc/tools/pginclude/README, AFAIK, so close enough :p\n--\nMichael", "msg_date": "Tue, 23 May 2023 07:55:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add missing includes" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> The standalone compile policy is documented in\n> src/tools/pginclude/README, AFAIK, so close enough :p\n\nHmm, that needs an update --- the scripts are slightly smarter\nnow than that text gives them credit for. I'll see to it after\nthe release freeze lifts.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 May 2023 19:19:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add missing includes" } ]
[ { "msg_contents": "Hello,\n\nThis is a way that would solve bug #17698[1]. It just reuses the same\nhandler as SIGINT (with a function rename).\n\nThis patch works best if it is combined with my previous submission[2].\nI can rebase that submission if and when this patch is pulled in.\n\n[1]: https://www.postgresql.org/message-id/17698-58a6ab8caec496b0%40postgresql.org\n[2]: https://www.postgresql.org/message-id/CSSWBAX56CVY.291H6ZNNHK7EO%40c3po\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 22 May 2023 12:26:34 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Handle SIGTERM in fe_utils/cancel.c" }, { "msg_contents": "On Mon, May 22, 2023 at 12:26:34PM -0500, Tristan Partin wrote:\n> This is a way that would solve bug #17698[1]. It just reuses the same\n> handler as SIGINT (with a function rename).\n> \n> This patch works best if it is combined with my previous submission[2].\n> I can rebase that submission if and when this patch is pulled in.\n\nNot sure that this is a good idea long-term. Currently, the code\npaths calling setup_cancel_handler() from cancel.c don't have a custom\nhandling for SIGTERM, but that may not be the case forever.\n--\nMichael", "msg_date": "Wed, 24 May 2023 09:51:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Handle SIGTERM in fe_utils/cancel.c" }, { "msg_contents": "On Tue May 23, 2023 at 7:51 PM CDT, Michael Paquier wrote:\n> On Mon, May 22, 2023 at 12:26:34PM -0500, Tristan Partin wrote:\n> > This is a way that would solve bug #17698[1]. It just reuses the same\n> > handler as SIGINT (with a function rename).\n> > \n> > This patch works best if it is combined with my previous submission[2].\n> > I can rebase that submission if and when this patch is pulled in.\n>\n> Not sure that this is a good idea long-term. 
Currently, the code\n> paths calling setup_cancel_handler() from cancel.c don't have a custom\n> handling for SIGTERM, but that may not be the case forever.\n\nI am more than happy to essentially just copy & paste some code that\nwill be specific to pgbench if that is preferrable for the purposes of\nmerging this patch. Another idea would be to change the signature of\nsetup_cancel_handler() to something like:\n\nvoid\nsetup_cancel_handler(cb pre, cb post, int signal, ...); (null-terminate)\n\nThen a client could state exactly what signals it wants to register with\nthis generic cancel handler.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 30 May 2023 09:39:28 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: Handle SIGTERM in fe_utils/cancel.c" } ]
[ { "msg_contents": "I came across $subject on master and here is the query I'm running.\n\ncreate table t (a int unique, b int);\n\nexplain (costs off)\nselect 1 from t t1\n left join t t2 on true\n inner join t t3 on true\n left join t t4 on t2.a = t4.a and t2.a = t3.a;\nERROR: no relation entry for relid 6\n\nI looked into it and it should be caused by some problem in outer-join\nremoval. After we've decided that the join to t4 is removable, which is\nno problem, one of the things we need to do is to remove any joinquals\nreferencing t4 from the joininfo lists. In this query, there are two\nsuch quals, 't2.a = t4.a' and 't2.a = t3.a'. And both quals have two\nversions, one with t1/t2 join in their nulling bitmap and one without.\nThe former version would be treated as being \"pushed down\" because its\nrequired_relids exceed the scope of joinrelids, due to the t1/t2 join\nincluded in the qual's nulling bitmap but not included in joinrelids.\nAnd as a result this version of quals would be put back. I think this\nis where the problem is. Ideally we should not put them back.\n\nThis issue seems to have existed for a while, and is revealed by the\nchange in c8b881d2 recently. I'm not sure how to fix it yet. What I'm\nthinking is that maybe this has something to do with the loose ends we\nhave in make_outerjoininfo. Actually in this query the t1/t2 join\ncannot commute with the join to t4. If we can know that in\nmake_outerjoininfo, we'd have added t1/t2 join to the min_lefthand of\nthe join to t4, and that would avoid this issue.\n\nThanks\nRichard", "msg_date": "Tue, 23 May 2023 14:38:35 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "ERROR: no relation entry for relid 6" }, { "msg_contents": "On Tue, May 23, 2023 at 2:38 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> I came across $subject on master and here is the query I'm running.\n>\n> create table t (a int unique, b int);\n>\n> explain (costs off)\n> select 1 from t t1\n> left join t t2 on true\n> inner join t t3 on true\n> left join t t4 on t2.a = t4.a and t2.a = t3.a;\n> ERROR: no relation entry for relid 6\n>\n> I looked into it and it should be caused by some problem in outer-join\n> removal. After we've decided that the join to t4 is removable, which is\n> no problem, one of the things we need to do is to remove any joinquals\n> referencing t4 from the joininfo lists. 
In this query, there are two\n> such quals, 't2.a = t4.a' and 't2.a = t3.a'. And both quals have two\n> versions, one with t1/t2 join in their nulling bitmap and one without.\n> The former version would be treated as being \"pushed down\" because its\n> required_relids exceed the scope of joinrelids, due to the t1/t2 join\n> included in the qual's nulling bitmap but not included in joinrelids.\n> And as a result this version of quals would be put back. I think this\n> is where the problem is. Ideally we should not put them back.\n>\n> This issue seems to have existed for a while, and is revealed by the\n> change in c8b881d2 recently. I'm not sure how to fix it yet. What I'm\n> thinking is that maybe this has something to do with the loose ends we\n> have in make_outerjoininfo. Actually in this query the t1/t2 join\n> cannot commute with the join to t4. If we can know that in\n> make_outerjoininfo, we'd have added t1/t2 join to the min_lefthand of\n> the join to t4, and that would avoid this issue.\n>\n\nConsidering that clone clauses should always be outer-join clauses not\nfilter clauses, I'm wondering if we can add an additional check for that\nin RINFO_IS_PUSHED_DOWN, something like\n\n #define RINFO_IS_PUSHED_DOWN(rinfo, joinrelids) \\\n- ((rinfo)->is_pushed_down || \\\n- !bms_is_subset((rinfo)->required_relids, joinrelids))\n+ (!((rinfo)->has_clone || ((rinfo)->is_clone)) && \\\n+ ((rinfo)->is_pushed_down || \\\n+ !bms_is_subset((rinfo)->required_relids, joinrelids)))\n\nThis change can fix the case shown upthread. 
But I doubt it's the\nperfect fix we want.\n\nThanks\nRichard", "msg_date": "Tue, 23 May 2023 19:45:15 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: no relation entry for relid 6" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n>> create table t (a int unique, b int);\n>> \n>> explain (costs off)\n>> select 1 from t t1\n>> left join t t2 on true\n>> inner join t t3 on true\n>> left join t t4 on t2.a = t4.a and t2.a = t3.a;\n>> ERROR: no relation entry for relid 6\n\nUgh.\n\n> Considering that clone clauses should always be outer-join clauses not\n> filter clauses, I'm wondering if we can add an additional check for that\n> in RINFO_IS_PUSHED_DOWN, something like\n\n> #define RINFO_IS_PUSHED_DOWN(rinfo, joinrelids) \\\n> - ((rinfo)->is_pushed_down || \\\n> - !bms_is_subset((rinfo)->required_relids, joinrelids))\n> + (!((rinfo)->has_clone || ((rinfo)->is_clone)) && \\\n> + ((rinfo)->is_pushed_down || \\\n> + !bms_is_subset((rinfo)->required_relids, joinrelids)))\n\nI don't like that one bit; it seems way too likely to introduce\nbad side-effects elsewhere.\n\nI think the real issue is that \"is pushed down\" has never been a\nconceptually accurate way of thinking about what\nremove_rel_from_query needs to do here. 
Using RINFO_IS_PUSHED_DOWN\nhappened to work up to now, but it's an abuse of that macro, and\nchanging the macro's behavior isn't the right way to fix it.\n\nHaving said that, I'm not sure what is a better way to think about it.\nIt seems like our data structure doesn't have a clear tie between\nrestrictinfos and their original join level, or at least, to the extent\nthat it did the \"clone clause\" mechanism has broken it.\n\nI wonder if we could do something involving recording the\nrinfo_serial numbers of all the clauses extracted from a particular\nsyntactic join level (we could keep this in a bitmapset attached\nto each SpecialJoinInfo, perhaps) and then filtering the joinclauses\non the basis of serial numbers instead of required_relids.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 May 2023 14:48:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ERROR: no relation entry for relid 6" }, { "msg_contents": "On Wed, May 24, 2023 at 2:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > Considering that clone clauses should always be outer-join clauses not\n> > filter clauses, I'm wondering if we can add an additional check for that\n> > in RINFO_IS_PUSHED_DOWN, something like\n>\n> > #define RINFO_IS_PUSHED_DOWN(rinfo, joinrelids) \\\n> > - ((rinfo)->is_pushed_down || \\\n> > - !bms_is_subset((rinfo)->required_relids, joinrelids))\n> > + (!((rinfo)->has_clone || ((rinfo)->is_clone)) && \\\n> > + ((rinfo)->is_pushed_down || \\\n> > + !bms_is_subset((rinfo)->required_relids, joinrelids)))\n>\n> I don't like that one bit; it seems way too likely to introduce\n> bad side-effects elsewhere.\n\n\nAgreed. I also do not have too much confidence in it.\n\n\n> I think the real issue is that \"is pushed down\" has never been a\n> conceptually accurate way of thinking about what\n> remove_rel_from_query needs to do here. 
Using RINFO_IS_PUSHED_DOWN\n> happened to work up to now, but it's an abuse of that macro, and\n> changing the macro's behavior isn't the right way to fix it.\n>\n> Having said that, I'm not sure what is a better way to think about it.\n> It seems like our data structure doesn't have a clear tie between\n> restrictinfos and their original join level, or at least, to the extent\n> that it did the \"clone clause\" mechanism has broken it.\n>\n> I wonder if we could do something involving recording the\n> rinfo_serial numbers of all the clauses extracted from a particular\n> syntactic join level (we could keep this in a bitmapset attached\n> to each SpecialJoinInfo, perhaps) and then filtering the joinclauses\n> on the basis of serial numbers instead of required_relids.\n\n\nI think this is a better way to fix the issue. I went ahead and drafted\na patch as attached. But I doubt that the collecting of rinfo_serial\nnumbers is thorough in the patch. Should we also collect the\nrinfo_serial of quals generated in reconsider_outer_join_clauses()? I\nbelieve that we do not need to consider the quals from\ngenerate_base_implied_equalities(), since they are all supposed to be\nrestriction clauses not join clauses.\n\nThanks\nRichard", "msg_date": "Wed, 24 May 2023 13:53:50 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: no relation entry for relid 6" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Wed, May 24, 2023 at 2:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wonder if we could do something involving recording the\n>> rinfo_serial numbers of all the clauses extracted from a particular\n>> syntactic join level (we could keep this in a bitmapset attached\n>> to each SpecialJoinInfo, perhaps) and then filtering the joinclauses\n>> on the basis of serial numbers instead of required_relids.\n\n> I think this is a better way to fix the issue. 
I went ahead and drafted\n> a patch as attached. But I doubt that the collecting of rinfo_serial\n> numbers is thorough in the patch. Should we also collect the\n> rinfo_serial of quals generated in reconsider_outer_join_clauses()?\n\nNot sure. However, I thought of a possible fix that doesn't require\nso much new mechanism: we could consider potentially-commuted outer\njoins as part of the relid set that's okay to discard, as in the\nattached. This is still relying on RINFO_IS_PUSHED_DOWN, which I\ncontinue to feel is not quite the right thing here, but on the other\nhand that logic survived for years without trouble. What broke it\nwas addition of outer-join relids to the mix. I briefly considered\nproposing that we simply ignore all outer-join relids in the test that\ndecides whether to keep or discard a joinqual, but this way seems at\nleast slightly more principled.\n\nA couple of notes:\n\n1. The test case you give upthread only needs to ignore commute_below_l,\nthat is it still passes without the lines\n\n+ join_plus_commute = bms_add_members(join_plus_commute,\n+ removable_sjinfo->commute_above_r);\n\nBased on what deconstruct_distribute_oj_quals is doing, it seems\nlikely to me that there are cases that require ignoring\ncommute_above_r, but I've not tried to devise one. It'd be good to\nhave one to include in the commit, if we can find one.\n\n2. We could go a little further in refactoring, specifically the\ncomputation of joinrelids could be moved into remove_rel_from_query,\nsince remove_useless_joins itself doesn't need it. Not sure if\nthis'd be an improvement or not. Certainly it'd be a loser if\nremove_useless_joins grew a reason to need the value; but I can't\nforesee a reason for that to happen --- remove_rel_from_query is\nwhere basically all the work is going on.\n\n3. 
I called the parameter removable_sjinfo because sjinfo is already\nused within another loop, leading to a shadowed-parameter warning.\nIn a green field we'd probably have called the parameter sjinfo\nand used another name for the loop's local variable, but that would\nmake the patch bulkier without adding anything. Haven't decided\nwhether to rename before commit or leave it as-is.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 25 May 2023 18:06:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ERROR: no relation entry for relid 6" }, { "msg_contents": "On Fri, May 26, 2023 at 6:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> 1. The test case you give upthread only needs to ignore commute_below_l,\n> that is it still passes without the lines\n>\n> + join_plus_commute = bms_add_members(join_plus_commute,\n> +\n> removable_sjinfo->commute_above_r);\n>\n> Based on what deconstruct_distribute_oj_quals is doing, it seems\n> likely to me that there are cases that require ignoring\n> commute_above_r, but I've not tried to devise one. It'd be good to\n> have one to include in the commit, if we can find one.\n\n\nIt seems that queries of the second form of identity 3 require ignoring\ncommute_above_r.\n\nselect 1 from t t1 left join (t t2 left join t t3 on t2.a = t3.a) on\nt1.a = t2.a;\n\nWhen removing t2/t3 join, the clone of 't2.a = t3.a' with t1 join in the\nnulling bitmap would be put back if we do not ignore commute_above_r.\nThere is no observable problem though because it would be rejected later\nin subbuild_joinrel_restrictlist, but still I think it should not be put\nback in the first place.\n\n\n> 2. We could go a little further in refactoring, specifically the\n> computation of joinrelids could be moved into remove_rel_from_query,\n> since remove_useless_joins itself doesn't need it. Not sure if\n> this'd be an improvement or not. 
Certainly it'd be a loser if\n> remove_useless_joins grew a reason to need the value; but I can't\n> foresee a reason for that to happen --- remove_rel_from_query is\n> where basically all the work is going on.\n\n\n+1 to move the computation of joinrelids into remove_rel_from_query().\nWe also do that in join_is_removable().\n\nBTW, I doubt that the check of 'sjinfo->ojrelid != 0' in\nremove_useless_joins() is needed. Since we've known that the join is\nremovable, I think we can just Assert that sjinfo->ojrelid cannot be 0.\n\n--- a/src/backend/optimizer/plan/analyzejoins.c\n+++ b/src/backend/optimizer/plan/analyzejoins.c\n@@ -91,8 +91,8 @@ restart:\n\n /* Compute the relid set for the join we are considering */\n joinrelids = bms_union(sjinfo->min_lefthand, sjinfo->min_righthand);\n- if (sjinfo->ojrelid != 0)\n- joinrelids = bms_add_member(joinrelids, sjinfo->ojrelid);\n+ Assert(sjinfo->ojrelid != 0);\n+ joinrelids = bms_add_member(joinrelids, sjinfo->ojrelid);\n\n remove_rel_from_query(root, innerrelid, sjinfo, joinrelids);\n\n\n> 3. I called the parameter removable_sjinfo because sjinfo is already\n> used within another loop, leading to a shadowed-parameter warning.\n> In a green field we'd probably have called the parameter sjinfo\n> and used another name for the loop's local variable, but that would\n> make the patch bulkier without adding anything. Haven't decided\n> whether to rename before commit or leave it as-is.\n\n\nPersonally I prefer to rename before commit but I'm OK with both ways.\n\nThanks\nRichard\n\nOn Fri, May 26, 2023 at 6:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n1. 
The test case you give upthread only needs to ignore commute_below_l,\nthat is it still passes without the lines\n\n+    join_plus_commute = bms_add_members(join_plus_commute,\n+                                        removable_sjinfo->commute_above_r);\n\nBased on what deconstruct_distribute_oj_quals is doing, it seems\nlikely to me that there are cases that require ignoring\ncommute_above_r, but I've not tried to devise one.  It'd be good to\nhave one to include in the commit, if we can find one.It seems that queries of the second form of identity 3 require ignoringcommute_above_r.select 1 from t t1 left join (t t2 left join t t3 on t2.a = t3.a) ont1.a = t2.a;When removing t2/t3 join, the clone of 't2.a = t3.a' with t1 join in thenulling bitmap would be put back if we do not ignore commute_above_r.There is no observable problem though because it would be rejected laterin subbuild_joinrel_restrictlist, but still I think it should not be putback in the first place. \n2. We could go a little further in refactoring, specifically the\ncomputation of joinrelids could be moved into remove_rel_from_query,\nsince remove_useless_joins itself doesn't need it.  Not sure if\nthis'd be an improvement or not.  Certainly it'd be a loser if\nremove_useless_joins grew a reason to need the value; but I can't\nforesee a reason for that to happen --- remove_rel_from_query is\nwhere basically all the work is going on.+1 to move the computation of joinrelids into remove_rel_from_query().We also do that in join_is_removable().BTW, I doubt that the check of 'sjinfo->ojrelid != 0' inremove_useless_joins() is needed.  
Since we've known that the join isremovable, I think we can just Assert that sjinfo->ojrelid cannot be 0.--- a/src/backend/optimizer/plan/analyzejoins.c+++ b/src/backend/optimizer/plan/analyzejoins.c@@ -91,8 +91,8 @@ restart:        /* Compute the relid set for the join we are considering */        joinrelids = bms_union(sjinfo->min_lefthand, sjinfo->min_righthand);-       if (sjinfo->ojrelid != 0)-           joinrelids = bms_add_member(joinrelids, sjinfo->ojrelid);+       Assert(sjinfo->ojrelid != 0);+       joinrelids = bms_add_member(joinrelids, sjinfo->ojrelid);        remove_rel_from_query(root, innerrelid, sjinfo, joinrelids); \n3. I called the parameter removable_sjinfo because sjinfo is already\nused within another loop, leading to a shadowed-parameter warning.\nIn a green field we'd probably have called the parameter sjinfo\nand used another name for the loop's local variable, but that would\nmake the patch bulkier without adding anything.  Haven't decided\nwhether to rename before commit or leave it as-is.Personally I prefer to rename before commit but I'm OK with both ways.ThanksRichard", "msg_date": "Fri, 26 May 2023 18:30:06 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: no relation entry for relid 6" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Fri, May 26, 2023 at 6:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Based on what deconstruct_distribute_oj_quals is doing, it seems\n>> likely to me that there are cases that require ignoring\n>> commute_above_r, but I've not tried to devise one. 
It'd be good to\n>> have one to include in the commit, if we can find one.\n\n> It seems that queries of the second form of identity 3 require ignoring\n> commute_above_r.\n> select 1 from t t1 left join (t t2 left join t t3 on t2.a = t3.a) on\n> t1.a = t2.a;\n> When removing t2/t3 join, the clone of 't2.a = t3.a' with t1 join in the\n> nulling bitmap would be put back if we do not ignore commute_above_r.\n> There is no observable problem though because it would be rejected later\n> in subbuild_joinrel_restrictlist, but still I think it should not be put\n> back in the first place.\n\nAh. I realized that we could make the problem testable by adding\nassertions that a joinclause we're not removing doesn't contain\nany surviving references to the target rel or join. That turns\nout to fire (without the bug fix) in half a dozen existing test\ncases, so I felt that we didn't need to add another one.\n\nI did the other refactoring we discussed and pushed it.\nThanks for the report and review!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 May 2023 12:16:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ERROR: no relation entry for relid 6" }, { "msg_contents": "On Sat, May 27, 2023 at 12:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ah. I realized that we could make the problem testable by adding\n> assertions that a joinclause we're not removing doesn't contain\n> any surviving references to the target rel or join. That turns\n> out to fire (without the bug fix) in half a dozen existing test\n> cases, so I felt that we didn't need to add another one.\n>\n> I did the other refactoring we discussed and pushed it.\n> Thanks for the report and review!\n\n\nThanks for pushing it!\n\nI've managed to find some problems on master with the help of the new\nassertions. 
First the query below would trigger the new assertions.\n\ncreate table t (a int unique, b int);\n\nexplain (costs off)\nselect 1 from t t1 left join\n (select t2.a, 1 as c from\n t t2 left join t t3 on t2.a = t3.a) s\non true left join t t4 on true where s.a < s.c;\nserver closed the connection unexpectedly\n\nNote that 's.c' would be wrapped in PHV so the qual 's.a < s.c' is\nactually 't2.a < PHV(1)', and it is one of t3's joinquals. When we're\nremoving the t2/t3 join, we decide that this PHV is no longer needed so\nwe remove it entirely rather than just remove references from it. But\nactually its phrels still have references to t3 and t2/t3 join. So when\nwe put back the qual 's.a < s.c', we will trigger the new assertions.\n\nAt first glance I thought we can just remove the new assertions. But\nthen I figured out that the problem is more complex than that. If the\nPHV contains lateral references, removing the PHV entirely would cause\nus to lose the information about the lateral references, and that may\ncause wrong query results in some cases. Consider the query below\n(after removing the two new assertions, or in a non-assert build).\n\nexplain (costs off)\nselect 1 from t t1 left join\n lateral (select t2.a, coalesce(t1.a, 1) as c from\n t t2 left join t t3 on t2.a = t3.a) s\non true left join t t4 on true where s.a < s.c;\n QUERY PLAN\n--------------------------------------------------\n Nested Loop Left Join\n -> Nested Loop\n -> Seq Scan on t t1\n -> Materialize\n -> Seq Scan on t t2\n Filter: (a < COALESCE(a, 1))\n -> Materialize\n -> Seq Scan on t t4\n(8 rows)\n\nThe PHV of 'coalesce(t1.a, 1)' has lateral reference to t1 but we'd lose\nthis information because we've removed this PHV entirely in\nremove_rel_from_query. As a consequence, we'd fail to extract the\nlateral dependency for t2 and fail to build the nestloop parameters for\nthe t1/t2 join. And that causes wrong query results. 
We can see that\nif we insert some data into the table.\n\ninsert into t select 1,1;\ninsert into t select 2,2;\n\nOn v15 the query above gives\n\n ?column?\n----------\n 1\n 1\n(2 rows)\n\nbut on master it gives\n\n ?column?\n----------\n(0 rows)\n\nI haven't thought through how to fix it, but I suspect that we may need\nto do more checking before we decide to remove PHVs in\nremove_rel_from_query.\n\nThanks\nRichard", "msg_date": "Tue, 30 May 2023 10:28:26 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: no relation entry for relid 6" }, { "msg_contents": "On Tue, May 30, 2023 at 10:28 AM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> I haven't thought through how to fix it, but I suspect that we may need\n> to do more checking before we decide to remove PHVs in\n> remove_rel_from_query.\n>\n\nHmm, maybe we can additionally check if the PHV needs to be evaluated\nabove the join. If so it cannot be removed.\n\n--- a/src/backend/optimizer/plan/analyzejoins.c\n+++ b/src/backend/optimizer/plan/analyzejoins.c\n@@ -425,7 +425,8 @@ remove_rel_from_query(PlannerInfo *root, int relid,\nSpecialJoinInfo *sjinfo)\n\n Assert(!bms_is_member(relid, phinfo->ph_lateral));\n if (bms_is_subset(phinfo->ph_needed, joinrelids) &&\n- bms_is_member(relid, phinfo->ph_eval_at))\n+ bms_is_member(relid, phinfo->ph_eval_at) &&\n+ !bms_is_member(ojrelid, phinfo->ph_eval_at))\n {\n root->placeholder_list =\nforeach_delete_current(root->placeholder_list,\n l);\n\nDoes this make sense?\n\nThanks\nRichard", "msg_date": "Tue, 30 May 2023 17:05:26 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: no relation entry for relid 6" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Tue, May 30, 2023 at 10:28 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>> I haven't thought through how to fix it, but I suspect that we may need\n>> to do more checking before we decide to remove PHVs in\n>> remove_rel_from_query.\n\nOh, I like this example!  It shows a place where we are now smarter\nthan we used to be, because v15 and earlier fail to recognize that\nthe join could be removed.  But we do have to clean up the query\nproperly afterwards.\n\n> Hmm, maybe we can additionally check if the PHV needs to be evaluated\n> above the join. 
If so it cannot be removed.\n\nYeah, that seems to make sense, and it squares with the existing\ncomment saying that PHVs used above the join can't be removed.\nPushed that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Jun 2023 17:13:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ERROR: no relation entry for relid 6" }, { "msg_contents": "On Fri, Jun 9, 2023 at 5:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > Hmm, maybe we can additionally check if the PHV needs to be evaluated\n> > above the join. If so it cannot be removed.\n> Yeah, that seems to make sense, and it squares with the existing\n> comment saying that PHVs used above the join can't be removed.\n> Pushed that way.\n\n\nThanks for pushing it! I've closed the open item for it.\n\nThanks\nRichard", "msg_date": "Fri, 9 Jun 2023 11:16:20 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: no relation entry for relid 6" } ]
[ { "msg_contents": "When creating a private service for another instance of PostgreSQL I used the template of postgresql-15.service file installed into /usr/lib/systemd/system on Fedora 38 provided through the installation for postgres 15.3 from PGDG repositories.\n\n\nThere I noticed that the line ExecStart still uses the postmaster link.\n\n\nI would propose to change it to\n\nExecStart=/usr/pgsql-15/bin/postgres -D $(PGDATA)\n\n\nThis change should apply also to back branches to avoid using deprecated links in PGDG software.\n\nThis seems to be necessary on upcoming PG16.\n\n\n(BTW: where is this all documented?)\n\n\nHans Buschmann", "msg_date": "Tue, 23 May 2023 15:09:17 +0000", "msg_from": "Hans Buschmann <buschmann@nidsa.net>", "msg_from_op": true, "msg_subject": "Re: drop postmaster symlink" } ]